source_id | question | response | metadata
---|---|---|---|
61,838 | If I have something like a UILabel linked to a xib file, do I need to release it on dealloc of my view? The reason I ask is because I don't alloc it, which makes me think I don't need to release it either?eg (in the header): IBOutlet UILabel *lblExample; in the implementation: ....[lblExample setText:@"whatever"];....-(void)dealloc{ [lblExample release];//?????????} | If you follow what is now considered to be best practice, you should release outlet properties, because you should have retained them in the set accessor: @interface MyController : MySuperclass { Control *uiElement;}@property (nonatomic, retain) IBOutlet Control *uiElement;@end@implementation MyController@synthesize uiElement;- (void)dealloc { [uiElement release]; [super dealloc];}@end The advantage of this approach is that it makes the memory management semantics explicit and clear, and it works consistently across all platforms for all nib files . Note: The following comments apply only to iOS prior to 3.0. With 3.0 and later, you should instead simply nil out property values in viewDidUnload. One consideration here, though, is when your controller might dispose of its user interface and reload it dynamically on demand (for example, if you have a view controller that loads a view from a nib file, but on request -- say under memory pressure -- releases it, with the expectation that it can be reloaded if the view is needed again). In this situation, you want to make sure that when the main view is disposed of you also relinquish ownership of any other outlets so that they too can be deallocated. For UIViewController, you can deal with this issue by overriding setView: as follows: - (void)setView:(UIView *)newView { if (newView == nil) { self.uiElement = nil; } [super setView:aView];} Unfortunately this gives rise to a further issue. Because UIViewController currently implements its dealloc method using the setView: accessor method (rather than simply releasing the variable directly), self.anOutlet = nil will be called in dealloc as well as in response to a memory warning... This will lead to a crash in dealloc . The remedy is to ensure that outlet variables are also set to nil in dealloc : - (void)dealloc { // release outlets and set variables to nil [anOutlet release], anOutlet = nil; [super dealloc];} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/61838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6044/"
]
} |
61,872 | We are rewriting our legacy accounting system in VB.NET and SQL Server. We brought in a new team of .NET/SQL programmers to do the rewrite. Most of the system is already completed with the dollar amounts using floats. The legacy system language I programmed in did not have a float, so I probably would have used a decimal. What is your recommendation? Should the float or decimal data type be used for dollar amounts? What are some of the pros and cons for either? One con mentioned in our daily scrum was that you have to be careful when you calculate an amount that returns a result that is over two decimal positions. It sounds like you will have to round the amount to two decimal positions. Another con is that all displayed and printed amounts have to have a format statement that shows two decimal positions. I noticed a few times where this was not done and the amounts did not look correct. (e.g. 10.2 or 10.2546) A pro is that the float-only approach takes up eight bytes on disk where the decimal would take up nine bytes (decimal 12,2). | Should the float or decimal data type be used for dollar amounts? The answer is easy. Never floats. NEVER! Floats are, per IEEE 754, always binary; only the newer IEEE 754R standard defines decimal formats. Many of the fractional binary parts can never equal the exact decimal representation. Any binary number can be written as m/2^n (m, n positive integers), any decimal number as m/(2^n * 5^n). As binaries lack the prime factor 5, all binary numbers can be exactly represented by decimals, but not vice versa. For example, 0.3 = 3/(2^1 * 5^1) has a factor of 5 in its denominator, so it has no finite binary representation; the successive binary approximations 1/4, 1/4 + 1/32, 1/4 + 1/32 + 1/64, ... give 0.25, 0.28125, 0.296875, ... and never reach 0.3 exactly. So you end up with a number either higher or lower than the given decimal number. Always. Why does that matter? Rounding. Normal rounding means 0..4 down, 5..9 up. So it does matter whether the result is 0.049999999999... or 0.0500000000... You may know that it means 5 cents, but the computer does not know that and rounds 0.04999... down (wrong) and 0.05000... up (right). Given that the results of floating-point computations always contain small error terms, the decision is pure luck. It gets hopeless if you want decimal round-to-even handling with binary numbers. Unconvinced? You insist that in your accounting system everything is perfectly OK? Assets and liabilities equal? OK, then take each of the given formatted numbers of each entry, parse them, and sum them with an independent decimal system! Compare that with the formatted sum. Oops, there is something wrong, isn't there? "For that calculation, extreme accuracy and fidelity was required (we used Oracle's FLOAT) so we could record the 'billionths of a penny' being accrued." It doesn't help against this error, because all people automatically assume that the computer sums right, and practically no one checks independently. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/61872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4964/"
]
} |
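
A small C# sketch (not from the original answer) of the rounding problem described above; it relies only on the built-in double and decimal types:

```csharp
using System;

class MoneyRounding
{
    static void Main()
    {
        // Binary floating point cannot represent 0.1 or 0.2 exactly,
        // so their sum is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);              // False
        Console.WriteLine(d.ToString("R"));       // 0.30000000000000004

        // decimal stores base-10 digits, so the same arithmetic is exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);             // True

        // decimal also gives predictable round-to-even behaviour on exact midpoints.
        Console.WriteLine(Math.Round(2.675m, 2)); // 2.68
    }
}
```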
61,882 | In a typical handheld/portable embedded device, battery life is a major concern in the design of the H/W, the S/W and the features the device can support. From the software programming perspective, one is aware of MIPS- and memory- (data and program) optimized code. I am aware of the H/W deep sleep and standby modes that are used to clock the hardware at lower cycles or turn off the clock entirely to some unused circuits to save power, but I am looking for some ideas from this point of view: when my code is running and it needs to keep executing, how can I write the code "power" efficiently so as to consume minimum watts? Are there any special programming constructs, data structures or control structures which I should look at to achieve minimum power consumption for a given functionality? Are there any s/w high-level design considerations which one should keep in mind at the time of code structure design, or during low-level design, to make the code as power efficient (least power consuming) as possible? | Like 1800 INFORMATION said, avoid polling; subscribe to events and wait for them to happen. Update window content only when necessary - let the system decide when to redraw it. When updating window content, ensure your code recreates as little of the invalid region as possible. With quick code the CPU goes back to deep sleep mode faster and there's a better chance that such code stays in the L1 cache. Operate on small data at one time so data stays in caches as well. Ensure that your application doesn't do any unnecessary work when in the background. Make your software not only power efficient, but also power aware - update graphics less often when on battery, disable animations, do less hard drive thrashing. And read some other guidelines. ;) Recently a series of posts called "Optimizing Software Applications for Power" started appearing on Intel Software Blogs. It may be of some use for x86 developers. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/61882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759376/"
]
} |
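
A minimal C# sketch of the "avoid polling, subscribe to events" advice above; it is not from the original answer, and the ManualResetEventSlim signal is just a stand-in for whatever event source the real application waits on:

```csharp
using System;
using System.Threading;

class PowerAwareWait
{
    static readonly ManualResetEventSlim DataReady = new ManualResetEventSlim(false);

    // Polling (the anti-pattern): the core keeps waking up even when nothing
    // has happened, which stops it from settling into its deep sleep states.
    static void PollForData()
    {
        while (!DataReady.IsSet)
            Thread.Sleep(10);   // still wakes the CPU roughly 100 times a second
    }

    // Event-driven: the thread blocks in the kernel and burns no cycles
    // until a producer calls DataReady.Set().
    static void WaitForData()
    {
        DataReady.Wait();
    }

    static void Main()
    {
        var consumer = new Thread(WaitForData);
        consumer.Start();

        Thread.Sleep(1000);     // pretend some hardware event arrives later
        DataReady.Set();        // wake the consumer exactly once
        consumer.Join();
        Console.WriteLine("Done without polling.");
    }
}
```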
61,902 | I want to embed a wikipedia article into a page but I don't want all the wrapper (navigation, etc.) that sits around the articles. I saw it done here: http://www.dayah.com/periodic/ . Click on an element and the iframe is displayed and links to the article only (no wrapper). So how'd they do that? Seems like JavaScript handles showing the iframe and constructing the href but after browsing the pages javascript ( http://www.dayah.com/periodic/Script/interactivity.js ) I still can't figure out how the url is built. Thanks. | The periodic table example loads the printer-friendly version of the wiki artice into an iframe. http://en.wikipedia.org/wiki/Potasium ? printable=yes it's done in function click_wiki(e) (line 534, interactivity.js) var article = el.childNodes[0].childNodes[n_name].innerHTML;...window.frames["WikiFrame"].location.replace("http://" + language + ".wikipedia.org/w/index.php?title=" + encodeURIComponent(article) + "&printable=yes"); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/61902",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5234/"
]
} |
61,953 | Let's say I have the following simple enum: enum Response{ Yes = 1, No = 2, Maybe = 3} How can I bind this enum to a DropDownList control so that the descriptions are displayed in the list as well as retrieve the associated numeric value (1,2,3) once an option has been selected? | I probably wouldn't bind the data as it's an enum, and it won't change after compile time (unless I'm having one of those stoopid moments). Better just to iterate through the enum: Dim itemValues As Array = System.Enum.GetValues(GetType(Response))Dim itemNames As Array = System.Enum.GetNames(GetType(Response))For i As Integer = 0 To itemNames.Length - 1 Dim item As New ListItem(itemNames(i), itemValues(i)) dropdownlist.Items.Add(item)Next Or the same in C# Array itemValues = System.Enum.GetValues(typeof(Response));Array itemNames = System.Enum.GetNames(typeof(Response));for (int i = 0; i <= itemNames.Length - 1 ; i++) { ListItem item = new ListItem(itemNames[i], itemValues[i]); dropdownlist.Items.Add(item);} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/61953",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
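
As a complement to the answer above (not part of it), this C# sketch shows one way to read the numeric value back out of the list, plus a data-binding alternative to the loop; the dropdownlist control and the Response enum are assumed to exist as in the question:

```csharp
using System;
using System.Linq;
using System.Web.UI.WebControls;

public static class EnumBinding
{
    // Populate the list so Text = enum name, Value = underlying number.
    public static void Bind(DropDownList dropdownlist)
    {
        dropdownlist.DataSource = Enum.GetValues(typeof(Response))
            .Cast<Response>()
            .Select(r => new { Name = r.ToString(), Value = ((int)r).ToString() })
            .ToList();
        dropdownlist.DataTextField = "Name";
        dropdownlist.DataValueField = "Value";
        dropdownlist.DataBind();
    }

    // Read the selection back as both the number and the enum member.
    public static Response Selected(DropDownList dropdownlist)
    {
        int value = int.Parse(dropdownlist.SelectedValue);   // 1, 2 or 3
        return (Response)value;                              // Response.Yes / No / Maybe
    }
}
```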
61,963 | I want to import an oracle dump into a different tablespace. I have a tablespace A used by User A. I've revoked DBA on this user and given him the grants connect and resource. Then I've dumped everything with the command exp a/*** owner=a file=oracledump.DMP log=log.log compress=y Now I want to import the dump into the tablespace B used by User B. So I've given him the grants on connect and resource (no DBA). Then I've executed the following import: imp b/*** file=oracledump.DMP log=import.log fromuser=a touser=b The result is a log with lots of errors: IMP-00017: following statement failed with ORACLE error 20001: "BEGIN DBMS_STATS.SET_TABLE_STATSIMP-00003: ORACLE error 20001 encounteredORA-20001: Invalid or inconsistent input values After that, I've tried the same import command but with the option statistics=none. This resulted in the following errors: ORA-00959: tablespace 'A_TBLSPACE' does not exist How should this be done? Note: a lot of columns are of type CLOB. It looks like the problems have something to do with that. Note2: The oracle versions are a mixture of 9.2, 10.1, and 10.1 XE. But I don't think it has to do with versions. | You've got a couple of issues here. Firstly , the different versions of Oracle you're using is the reason for the table statistics error - I had the same issue when some of our Oracle 10g Databases got upgraded to Release 2, and some were still on Release 1 and I was swapping .DMP files between them. The solution that worked for me was to use the same version of exp and imp tools to do the exporting and importing on the different Database instances. This was easiest to do by using the same PC (or Oracle Server) to issue all of the exporting and importing commands. Secondly , I suspect you're getting the ORA-00959: tablespace 'A_TBLSPACE' does not exist because you're trying to import a .DMP file from a full-blown Oracle Database into the 10g Express Edition (XE) Database, which, by default, creates a single, predefined tablespace called USERS for you. If that's the case, then you'll need to do the following.. With your .DMP file, create a SQL file containing the structure (Tables): imp <xe_username>/<password>@XE file=<filename.dmp> indexfile=index.sql full=y Open the indexfile (index.sql) in a text editor that can do find and replace over an entire file, and issue the following find and replace statements IN ORDER (ignore the single quotes.. '): Find: 'REM<space>' Replace: <nothing> Find: '"<source_tablespace>"' Replace: '"USERS"' Find: '...' Replace: 'REM ...' Find: 'CONNECT' Replace: 'REM CONNECT' Save the indexfile, then run it against your Oracle Express Edition account (I find it's best to create a new, blank XE user account - or drop and recreate if I'm refreshing): sqlplus <xe_username>/<password>@XE @index.sql Finally run the same .DMP file you created the indexfile with against the same account to import the data, stored procedures, views etc: imp <xe_username>/<password>@XE file=<filename.dmp> fromuser=<original_username> touser=<xe_username> ignore=y You may get pages of Oracle errors when trying to create certain objects such as Database Jobs as Oracle will try to use the same Database Identifier, which will most likely fail as you're on a different Database. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/61963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5043/"
]
} |
61,967 | Let's say I have the following simple table variable: declare @databases table( DatabaseID int, Name varchar(15), Server varchar(15))-- insert a bunch rows into @databases Is declaring and using a cursor my only option if I wanted to iterate through the rows? Is there another way? | First of all you should be absolutely sure you need to iterate through each row — set based operations will perform faster in every case I can think of and will normally use simpler code. Depending on your data it may be possible to loop using just SELECT statements as shown below: Declare @Id intWhile (Select Count(*) From ATable Where Processed = 0) > 0Begin Select Top 1 @Id = Id From ATable Where Processed = 0 --Do some processing here Update ATable Set Processed = 1 Where Id = @Id End Another alternative is to use a temporary table: Select *Into #TempFrom ATableDeclare @Id intWhile (Select Count(*) From #Temp) > 0Begin Select Top 1 @Id = Id From #Temp --Do some processing here Delete #Temp Where Id = @IdEnd The option you should choose really depends on the structure and volume of your data. Note: If you are using SQL Server you would be better served using: WHILE EXISTS(SELECT * FROM #Temp) Using COUNT will have to touch every single row in the table, the EXISTS only needs to touch the first one (see Josef's answer below). | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/61967",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
61,972 | Best recommendations for accessing and manipulation of sqlite databases from JavaScript. | Well, if you are working on client side JavaScript, I think you will be out of luck... browsers tend to sandbox the JavaScript environment so you don't have access to the machine in any kind of general capacity like accessing a database. If you are talking about an SQLite DB on the server end accessed from the client end, you could set up an AJAX solution that invokes some server side code to access it. If you are talking about Rhino or some other server side JavaScript, you should look into the host language's API access into SQLite (such as the JDBC for Rhino). Perhaps clarify your question a bit more...? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/61972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6390/"
]
} |
62,029 | I use the VS2008 command prompt for builds, TFS access etc. and the cygwin prompt for grep, vi and unix-like tools. Is there any way I can 'import' the vcvars32.bat functionality into the cygwin environment so I can call "tfs checkout" from cygwin itself? | According to this page you need to: "Depending on your preference, you can either add the variables required for compilation direct to your environment, or use the vcvars32.bat script to set them for you. Note you have to compile from a cygwin bash shell, to use vcvars32, first run a DOS shell, then run vcvars32.bat, then run cygwin.bat from the directory where you installed cygwin. You can speed this up by adding the directory containgin vcvars32 (somewhere under \Microsoft Visual Studio\VC98\bin) and the directory containing cygwin.bat to your path." | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/45603/"
]
} |
62,044 | I'm trying to construct a find command to process a bunch of files in a directory using two different executables. Unfortunately, -exec on find doesn't allow to use pipe or even \| because the shell interprets that character first. Here is specifically what I'm trying to do (which doesn't work because pipe ends the find command): find /path/to/jpgs -type f -exec jhead -v {} | grep 123 \; -print | Try this find /path/to/jpgs -type f -exec sh -c 'jhead -v {} | grep 123' \; -print Alternatively you could try to embed your exec statement inside a sh script and then do: find -exec some_script {} \; | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3499/"
]
} |
62,069 | I've been asked to screen some candidates for a MySQL DBA / Developer position for a role that requires an enterprise level skill set. I myself am a SQL Server person so I know what I would be looking for from that point of view with regards to scalability / design etc but is there anything specific I should be asking with regards to MySQL? I would ideally like to ask them about enterprise level features of MySQL that they would typically only use when working on a big database. Need to separate out the enterprise developers from the home / small website kind of guys. Thanks. | Although SQL Server and MySQL are both RDBMs, MySQL has many unique features that can illustrate the difference between novice and expert. Your first step should be to ensure that the candidate is comfortable using the command line, not just GUI tools such as phpMyAdmin. During the interview, try asking the candidate to write MySQL code to create a database table or add a new index. These are very basic queries, but exactly the type that GUI tools prevent novices from mastering. You can double-check the answers with someone who is more familiar with MySQL. Can the candidate demonstrate knowledge of how JOINs work? For example, try asking the candidate to construct a query that returns all rows from Table One where no matching entries exist in Table Two. The answer should involve a LEFT JOIN. Ask the candidate to discuss backup strategies, and the various strengths and weaknesses of each. The candidate should know that backing up the database files directly is not an effective strategy unless all the tables are MyISAM. The candidate should definitely mention mysqldump as a cornerstone for backups. More sophisticated backup solutions include ibbackup/innobackup and LVM snapshots. Ideally, the candidate should also discuss how backups can affect performance (a common solution is to use a slave server for taking backups). Does the candidate have experience with replication? What are some of the common replication configurations and the various advantages of each? The most common setup is master-slave, allowing the application to offload SELECT queries to slave servers, along with taking backups using a slave to prevent performance issues on the master. Another common setup is master-master, the main benefit being the ability to make schema changes without impacting performance. Make sure the candidate discusses common issues such as cloning a slave server ( mysqldump + notation of the binlog position ), load distribution using a load balancer or MySQL proxy, resolving slave lag by breaking larger queries into chunks, and how to promote a slave to become a new master. How would the candidate troubleshoot performance issues? Do they have sufficient knowledge of the underlying operating system and hardware to diagnose whether a bottleneck is CPU bound, IO bound, or network bound? Can they demonstrate how to use EXPLAIN to discover indexing problems? Do they mention the slow query log or configuration options such as the key buffer, tmp table size, innodb buffer pool size, etc? Does the candidate appreciate the subtleties of each storage engine? (MyISAM, InnoDB, and MEMORY are the main ones). Do they understand how each storage engine optimizes queries, and how locking is handled? At the least, the candidate should mention that MyISAM issues a table-level lock whereas InnODB uses row-level locking. What is the safest way to make schema changes to a live database? 
The candidate should mention master-master replication, as well as avoiding the locking and performance issues of ALTER TABLE by creating a new table with the desired configuration and using mysqldump or INSERT INTO ... SELECT followed by RENAME TABLE. Lastly, the only true measurement of a pro is experience. If the candidate cannot point to specific experience managing large data sets in a high availability environment, they might not be able to back up any knowledge they possess on a purely intellectual level. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62069",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/942/"
]
} |
62,110 | Does anyone know of any good tutorials on ADO.NET Entity Framework? There are a few useful links here at Stack OverFlow , and I've found one tutorial at Jason's DotNet Architecture Blog , but can anyone recommend any other good tutorials? Any tutorials available from Microsoft, either online or as part of any conference/course material? | Microsoft offers .NET 3.5 Enhancements Training Kit it contains documentation and sample code for ADO.NET EF | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6452/"
]
} |
62,137 | I've just heard the term covered index in some database discussion - what does it mean? | A covering index is an index that contains all of, and possibly more, the columns you need for your query. For instance, this: SELECT *FROM tablenameWHERE criteria will typically use indexes to speed up the resolution of which rows to retrieve using criteria , but then it will go to the full table to retrieve the rows. However, if the index contained the columns column1, column2 and column3 , then this sql: SELECT column1, column2FROM tablenameWHERE criteria and, provided that particular index could be used to speed up the resolution of which rows to retrieve, the index already contains the values of the columns you're interested in, so it won't have to go to the table to retrieve the rows, but can produce the results directly from the index. This can also be used if you see that a typical query uses 1-2 columns to resolve which rows, and then typically adds another 1-2 columns, it could be beneficial to append those extra columns (if they're the same all over) to the index, so that the query processor can get everything from the index itself. Here's an article: Index Covering Boosts SQL Server Query Performance on the subject. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/62137",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5466/"
]
} |
62,151 | I've been wondering what exactly are the principles of how the two properties work. I know the second one is universal and basically doesn't deal with time zones, but can someone explain in detail how they work and which one should be used in what scenario? | DateTime.UtcNow tells you the date and time as it would be in Coordinated Universal Time, which is also called the Greenwich Mean Time time zone - basically like it would be if you were in London England, but not during the summer. DateTime.Now gives the date and time as it would appear to someone in your current locale. I'd recommend using DateTime.Now whenever you're displaying a date to a human being - that way they're comfortable with the value they see - it's something that they can easily compare to what they see on their watch or clock. Use DateTime.UtcNow when you want to store dates or use them for later calculations that way (in a client-server model) your calculations don't become confused by clients in different time zones from your server or from each other. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/62151",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1801/"
]
} |
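
A short C# sketch (added here, not from the original answer) of the store-in-UTC, display-in-local pattern the answer recommends:

```csharp
using System;

class TimestampExample
{
    static void Main()
    {
        // Store and calculate with UTC so the value is unambiguous
        // regardless of the server's or client's time zone.
        DateTime createdUtc = DateTime.UtcNow;

        // Later, when showing it to a person, convert to their local time.
        DateTime createdLocal = createdUtc.ToLocalTime();

        Console.WriteLine("Stored:    {0:o} (UTC)", createdUtc);
        Console.WriteLine("Displayed: {0} (local)", createdLocal);

        // Elapsed-time math done in UTC is unaffected by DST transitions.
        TimeSpan age = DateTime.UtcNow - createdUtc;
        Console.WriteLine("Age: {0} ms", age.TotalMilliseconds);
    }
}
```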
62,153 | Several times now I've been faced with plans from a team that wants to build their own bug tracking system - Not as a product, but as an internal tool. The arguments I've heard in favous are usually along the lines of : Wanting to 'eat our own dog food' in terms of some internally built web framework Needing some highly specialised report, or the ability to tweak some feature in some allegedly unique way Believing that it isn't difficult to build a bug tracking system What arguments might you use to support buying an existing bug tracking system? In particular, what features sound easy but turn out hard to implement, or are difficult and important but often overlooked? | First, look at these Ohloh metrics: Trac: 44 KLoC, 10 Person Years, $577,003Bugzilla: 54 KLoC, 13 Person Years, $714,437 Redmine: 171 KLoC, 44 Person Years, $2,400,723 Mantis: 182 KLoC, 47 Person Years, $2,562,978 What do we learn from these numbers? We learn that building Yet Another Bug Tracker is a great way to waste resources! So here are my reasons to build your own internal bug tracking system: You need to neutralize all the bozocoders for a decade or two. You need to flush some money to avoid budget reduction next year. Otherwise don't. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/62153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
62,188 | To commemorate the public launch of Stack Overflow, what's the shortest code to cause a stack overflow? Any language welcome. ETA: Just to be clear on this question, seeing as I'm an occasional Scheme user: tail-call "recursion" is really iteration, and any solution which can be converted to an iterative solution relatively trivially by a decent compiler won't be counted. :-P ETA2: I've now selected a “best answer”; see this post for rationale. Thanks to everyone who contributed! :-) | All these answers and no Befunge? I'd wager a fair amount it's shortest solution of them all: 1 Not kidding. Try it yourself: http://www.quirkster.com/iano/js/befunge.html EDIT: I guess I need to explain this one. The 1 operand pushes a 1 onto Befunge's internal stack and the lack of anything else puts it in a loop under the rules of the language. Using the interpreter provided, you will eventually--and I mean eventually --hit a point where the Javascript array that represents the Befunge stack becomes too large for the browser to reallocate. If you had a simple Befunge interpreter with a smaller and bounded stack--as is the case with most of the languages below--this program would cause a more noticeable overflow faster. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/62188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13/"
]
} |
62,201 | I've got a rails application where users have to log in. Therefore in order for the application to be usable, there must be one initial user in the system for the first person to log in with (they can then create subsequent users). Up to now I've used a migration to add a special user to the database. After asking this question , it seems that I should be using db:schema:load, rather than running the migrations, to set up fresh databases on new development machines. Unfortunately, this doesn't seem to include the migrations which insert data, only those which set up tables, keys etc. My question is, what's the best way to handle this situation: Is there a way to get d:s:l to include data-insertion migrations? Should I not be using migrations at all to insert data this way? Should I not be pre-populating the database with data at all? Should I update the application code so that it handles the case where there are no users gracefully, and lets an initial user account be created live from within the application? Any other options? :) | Try a rake task. For example: Create the file /lib/tasks/bootstrap.rake In the file, add a task to create your default user: namespace :bootstrap do desc "Add the default user" task :default_user => :environment do User.create( :name => 'default', :password => 'password' ) end desc "Create the default comment" task :default_comment => :environment do Comment.create( :title => 'Title', :body => 'First post!' ) end desc "Run all bootstrapping tasks" task :all => [:default_user, :default_comment] end Then, when you're setting up your app for the first time, you can do rake db:migrate OR rake db:schema:load, and then do rake bootstrap:all. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3974/"
]
} |
62,241 | Is there an easy way to avoid dealing with text encoding problems? | You can't really avoid dealing with the text encoding issues, but there are existing solutions in Apache Commons: Reader to InputStream : ReaderInputStream Writer to OutputStream : WriterOutputStream You just need to pick the encoding of your choice. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/62241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3885/"
]
} |
62,264 | I recently asked about keyword expansion in Git and I'm willing to accept the design not to really support this idea in Git. For better or worse, the project I'm working on at the moment requires SVN keyword expansion like this: svn propset svn:keywords "Id" expl3.dtx to keep this string up-to-date: $Id: expl3.dtx 803 2008-09-11 14:01:58Z will $ But I would quite like to use Git to do my version control. Unfortunately, git-svn doesn't support this, according to the docs: "We ignore all SVN properties except svn:executable" But it doesn't seem too tricky to have this keyword stuff emulated by a couple of pre/post commit hooks. Am I the first person to want this? Does anyone have some code to do this? | What's going on here: Git is optimized to switch between branches as quickly as possible. In particular, git checkout is designed to not touch any files that are identical in both branches. Unfortunately, RCS keyword substitution breaks this. For example, using $Date$ would require git checkout to touch every file in the tree when switching branches. For a repository the size of the Linux kernel, this would bring everything to a screeching halt. In general, your best bet is to tag at least one version: $ git tag v0.5.whatever ...and then call the following command from your Makefile: $ git describe --tagsv0.5.15.1-6-g61cde1d Here, git is telling me that I'm working on an anonymous version 6 commits past v0.5.15.1, with an SHA1 hash beginning with g61cde1d . If you stick the output of this command into a *.h file somewhere, you're in business, and will have no problem linking the released software back to the source code. This is the preferred way of doing things. If you can't possibly avoid using RCS keywords, you may want to start with this explanation by Lars Hjemli . Basically, $Id$ is pretty easy, and you if you're using git archive , you can also use $Format$ . But, if you absolutely cannot avoid RCS keywords, the following should get you started: git config filter.rcs-keyword.clean 'perl -pe "s/\\\$Date[^\\\$]*\\\$/\\\$Date\\\$/"'git config filter.rcs-keyword.smudge 'perl -pe "s/\\\$Date[^\\\$]*\\\$/\\\$Date: `date`\\\$/"'echo '$Date$' > test.htmlecho 'test.html filter=rcs-keyword' >> .gitattributesgit add test.html .gitattributesgit commit -m "Experimental RCS keyword support for git"rm test.htmlgit checkout test.htmlcat test.html On my system, I get: $Date: Tue Sep 16 10:15:02 EDT 2008$ If you have trouble getting the shell escapes in the smudge and clean commands to work, just write your own Perl scripts for expanding and removing RCS keywords, respectively, and use those scripts as your filter. Note that you really don't want to do this for more files than absolutely necessary, or git will lose most of its speed. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62264",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4161/"
]
} |
62,276 | What tool would you recommend to detect Java package cyclic dependencies ,knowing that the goal is to list explicitly the specific classes involved in the detected 'across-packages cycle' ? I know about classycle and JDepend , but they both fail to list the classes involved in a cyclic package dependency. Metrics has an interesting graphical representation of cycles, but it is again limited to packages, and quite difficult to read sometime. I am getting tired to get a: " you have a package cycle dependency between those 3 packages you have xxx classes in each good luck finding the right classes and break this cycle " Do you know any tool that takes the extra step to actually explain to you why the cycle is detected (i.e. 'list the involved classes')? Riiight... Time to proclaim the results: @l7010.de: Thank you for the effort. I will vote you up (when I will have enough rep), especially for the 'CAP' answer... but CAP is dead in the water and no longer compatible with my Eclipse 3.4. The rest is commercial and I look only for freeware. @daniel6651: Thank you but, as said, freeware only (sorry to not have mentioned it in the first place). @izb as a frequent user of findbugs (using the latest 1.3.5 right now), I am one click away to accept your answer... if you could explain to me what option there is to activate for findbug to detect any cycle. That feature is only mentioned for the 0.8.7 version in passing (look for ' New Style detector to find circular dependencies between classes '), and I am not able to test it.Update: It works now, and I had an old findbugs configuration file in which that option was not activated. I still like CAD though ;) THE ANSWER is... see my own (second) answer below | Findbugs can detect circular class dependencies and has an Eclipse plugin too. http://findbugs.sourceforge.net/ | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6309/"
]
} |
62,289 | How is it possible to read/write to the Windows registry using Java? | I know this question is old, but it is the first search result on google to "java read/write to registry". Recently I found this amazing piece of code which: Can read/write to ANY part of the registry. DOES NOT USE JNI. DOES NOT USE ANY 3rd PARTY/EXTERNAL APPLICATIONS TO WORK. DOES NOT USE THE WINDOWS API (directly) This is pure, Java code. It uses reflection to work, by actually accessing the private methods in the java.util.prefs.Preferences class. The internals of this class are complicated, but the class itself is very easy to use. For example, the following code obtains the exact windows distribution from the registry : String value = WinRegistry.readString ( WinRegistry.HKEY_LOCAL_MACHINE, //HKEY "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion", //Key "ProductName"); //ValueName System.out.println("Windows Distribution = " + value); Here is the original class. Just copy paste it and it should work: import java.lang.reflect.InvocationTargetException;import java.lang.reflect.Method;import java.util.HashMap;import java.util.Map;import java.util.ArrayList;import java.util.List;import java.util.prefs.Preferences;public class WinRegistry { public static final int HKEY_CURRENT_USER = 0x80000001; public static final int HKEY_LOCAL_MACHINE = 0x80000002; public static final int REG_SUCCESS = 0; public static final int REG_NOTFOUND = 2; public static final int REG_ACCESSDENIED = 5; private static final int KEY_ALL_ACCESS = 0xf003f; private static final int KEY_READ = 0x20019; private static final Preferences userRoot = Preferences.userRoot(); private static final Preferences systemRoot = Preferences.systemRoot(); private static final Class<? extends Preferences> userClass = userRoot.getClass(); private static final Method regOpenKey; private static final Method regCloseKey; private static final Method regQueryValueEx; private static final Method regEnumValue; private static final Method regQueryInfoKey; private static final Method regEnumKeyEx; private static final Method regCreateKeyEx; private static final Method regSetValueEx; private static final Method regDeleteKey; private static final Method regDeleteValue; static { try { regOpenKey = userClass.getDeclaredMethod("WindowsRegOpenKey", new Class[] { int.class, byte[].class, int.class }); regOpenKey.setAccessible(true); regCloseKey = userClass.getDeclaredMethod("WindowsRegCloseKey", new Class[] { int.class }); regCloseKey.setAccessible(true); regQueryValueEx = userClass.getDeclaredMethod("WindowsRegQueryValueEx", new Class[] { int.class, byte[].class }); regQueryValueEx.setAccessible(true); regEnumValue = userClass.getDeclaredMethod("WindowsRegEnumValue", new Class[] { int.class, int.class, int.class }); regEnumValue.setAccessible(true); regQueryInfoKey = userClass.getDeclaredMethod("WindowsRegQueryInfoKey1", new Class[] { int.class }); regQueryInfoKey.setAccessible(true); regEnumKeyEx = userClass.getDeclaredMethod( "WindowsRegEnumKeyEx", new Class[] { int.class, int.class, int.class }); regEnumKeyEx.setAccessible(true); regCreateKeyEx = userClass.getDeclaredMethod( "WindowsRegCreateKeyEx", new Class[] { int.class, byte[].class }); regCreateKeyEx.setAccessible(true); regSetValueEx = userClass.getDeclaredMethod( "WindowsRegSetValueEx", new Class[] { int.class, byte[].class, byte[].class }); regSetValueEx.setAccessible(true); regDeleteValue = userClass.getDeclaredMethod( "WindowsRegDeleteValue", new Class[] { int.class, byte[].class }); 
regDeleteValue.setAccessible(true); regDeleteKey = userClass.getDeclaredMethod( "WindowsRegDeleteKey", new Class[] { int.class, byte[].class }); regDeleteKey.setAccessible(true); } catch (Exception e) { throw new RuntimeException(e); } } private WinRegistry() { } /** * Read a value from key and value name * @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE * @param key * @param valueName * @return the value * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static String readString(int hkey, String key, String valueName) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { if (hkey == HKEY_LOCAL_MACHINE) { return readString(systemRoot, hkey, key, valueName); } else if (hkey == HKEY_CURRENT_USER) { return readString(userRoot, hkey, key, valueName); } else { throw new IllegalArgumentException("hkey=" + hkey); } } /** * Read value(s) and value name(s) form given key * @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE * @param key * @return the value name(s) plus the value(s) * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static Map<String, String> readStringValues(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { if (hkey == HKEY_LOCAL_MACHINE) { return readStringValues(systemRoot, hkey, key); } else if (hkey == HKEY_CURRENT_USER) { return readStringValues(userRoot, hkey, key); } else { throw new IllegalArgumentException("hkey=" + hkey); } } /** * Read the value name(s) from a given key * @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE * @param key * @return the value name(s) * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static List<String> readStringSubKeys(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { if (hkey == HKEY_LOCAL_MACHINE) { return readStringSubKeys(systemRoot, hkey, key); } else if (hkey == HKEY_CURRENT_USER) { return readStringSubKeys(userRoot, hkey, key); } else { throw new IllegalArgumentException("hkey=" + hkey); } } /** * Create a key * @param hkey HKEY_CURRENT_USER/HKEY_LOCAL_MACHINE * @param key * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static void createKey(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int [] ret; if (hkey == HKEY_LOCAL_MACHINE) { ret = createKey(systemRoot, hkey, key); regCloseKey.invoke(systemRoot, new Object[] { new Integer(ret[0]) }); } else if (hkey == HKEY_CURRENT_USER) { ret = createKey(userRoot, hkey, key); regCloseKey.invoke(userRoot, new Object[] { new Integer(ret[0]) }); } else { throw new IllegalArgumentException("hkey=" + hkey); } if (ret[1] != REG_SUCCESS) { throw new IllegalArgumentException("rc=" + ret[1] + " key=" + key); } } /** * Write a value in a given key/value name * @param hkey * @param key * @param valueName * @param value * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static void writeStringValue (int hkey, String key, String valueName, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { if (hkey == HKEY_LOCAL_MACHINE) { writeStringValue(systemRoot, hkey, key, valueName, value); } else if (hkey == HKEY_CURRENT_USER) { writeStringValue(userRoot, 
hkey, key, valueName, value); } else { throw new IllegalArgumentException("hkey=" + hkey); } } /** * Delete a given key * @param hkey * @param key * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static void deleteKey(int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int rc = -1; if (hkey == HKEY_LOCAL_MACHINE) { rc = deleteKey(systemRoot, hkey, key); } else if (hkey == HKEY_CURRENT_USER) { rc = deleteKey(userRoot, hkey, key); } if (rc != REG_SUCCESS) { throw new IllegalArgumentException("rc=" + rc + " key=" + key); } } /** * delete a value from a given key/value name * @param hkey * @param key * @param value * @throws IllegalArgumentException * @throws IllegalAccessException * @throws InvocationTargetException */ public static void deleteValue(int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int rc = -1; if (hkey == HKEY_LOCAL_MACHINE) { rc = deleteValue(systemRoot, hkey, key, value); } else if (hkey == HKEY_CURRENT_USER) { rc = deleteValue(userRoot, hkey, key, value); } if (rc != REG_SUCCESS) { throw new IllegalArgumentException("rc=" + rc + " key=" + key + " value=" + value); } } // ===================== private static int deleteValue (Preferences root, int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int[] handles = (int[]) regOpenKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) }); if (handles[1] != REG_SUCCESS) { return handles[1]; // can be REG_NOTFOUND, REG_ACCESSDENIED } int rc =((Integer) regDeleteValue.invoke(root, new Object[] { new Integer(handles[0]), toCstr(value) })).intValue(); regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) }); return rc; } private static int deleteKey(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int rc =((Integer) regDeleteKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key) })).intValue(); return rc; // can REG_NOTFOUND, REG_ACCESSDENIED, REG_SUCCESS } private static String readString(Preferences root, int hkey, String key, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int[] handles = (int[]) regOpenKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key), new Integer(KEY_READ) }); if (handles[1] != REG_SUCCESS) { return null; } byte[] valb = (byte[]) regQueryValueEx.invoke(root, new Object[] { new Integer(handles[0]), toCstr(value) }); regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) }); return (valb != null ? 
new String(valb).trim() : null); } private static Map<String,String> readStringValues (Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { HashMap<String, String> results = new HashMap<String,String>(); int[] handles = (int[]) regOpenKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key), new Integer(KEY_READ) }); if (handles[1] != REG_SUCCESS) { return null; } int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] { new Integer(handles[0]) }); int count = info[0]; // count int maxlen = info[3]; // value length max for(int index=0; index<count; index++) { byte[] name = (byte[]) regEnumValue.invoke(root, new Object[] { new Integer (handles[0]), new Integer(index), new Integer(maxlen + 1)}); String value = readString(hkey, key, new String(name)); results.put(new String(name).trim(), value); } regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) }); return results; } private static List<String> readStringSubKeys (Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { List<String> results = new ArrayList<String>(); int[] handles = (int[]) regOpenKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key), new Integer(KEY_READ) }); if (handles[1] != REG_SUCCESS) { return null; } int[] info = (int[]) regQueryInfoKey.invoke(root, new Object[] { new Integer(handles[0]) }); int count = info[0]; // Fix: info[2] was being used here with wrong results. Suggested by davenpcj, confirmed by Petrucio int maxlen = info[3]; // value length max for(int index=0; index<count; index++) { byte[] name = (byte[]) regEnumKeyEx.invoke(root, new Object[] { new Integer (handles[0]), new Integer(index), new Integer(maxlen + 1) }); results.add(new String(name).trim()); } regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) }); return results; } private static int [] createKey(Preferences root, int hkey, String key) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { return (int[]) regCreateKeyEx.invoke(root, new Object[] { new Integer(hkey), toCstr(key) }); } private static void writeStringValue (Preferences root, int hkey, String key, String valueName, String value) throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { int[] handles = (int[]) regOpenKey.invoke(root, new Object[] { new Integer(hkey), toCstr(key), new Integer(KEY_ALL_ACCESS) }); regSetValueEx.invoke(root, new Object[] { new Integer(handles[0]), toCstr(valueName), toCstr(value) }); regCloseKey.invoke(root, new Object[] { new Integer(handles[0]) }); } // utility private static byte[] toCstr(String str) { byte[] result = new byte[str.length() + 1]; for (int i = 0; i < str.length(); i++) { result[i] = (byte) str.charAt(i); } result[str.length()] = 0; return result; }} Original Author: Apache. Library Source: https://github.com/apache/npanday/tree/trunk/components/dotnet-registry/src/main/java/npanday/registry | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
62,292 | Now that most of the major browsers support full page zoom (at present, the only notable exception being Google Chrome), are liquid or elastic layouts no longer needed? Is the relative pain of building liquid/elastic layouts worth the effort? Are there any situations where a liquid layout would still be of benefit? Is full page zoom the real solution it at first appears to be? | Yes, because there are a vast variety of screens out there commonly ranging from 15" to 32".There is also some variation in what people consider a "comfortable" font size.All of which adds up to quite a range of sizes that your content will need to fit into. If anything, liquid layout is becoming even more necessary as we scale up to huge monitors, and down to cellphone devices. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1944/"
]
} |
62,328 | input: phrase 1, phrase 2 output: semantic similarity value (between 0 and 1), or the probability these two phrases are talking about the same thing | You might want to check out this paper: Sentence similarity based on semantic nets and corpus statistics (PDF) I've implemented the algorithm described. Our context was very general (effectively any two English sentences) and we found the approach taken was too slow and the results, while promising, not good enough (or likely to be so without considerable, extra, effort). You don't give a lot of context so I can't necessarily recommend this but reading the paper could be useful for you in understanding how to tackle the problem. Regards, Matt. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1925263/"
]
} |
62,334 | My designer keeps throwing out the term "liquid" layout. What does this mean? Thanks for the clarification, I have always just called this a percentage layout, and thought he was saying that the pieces could be moved around, and that was liquid | A "liquid" layout is a site layout that expands to fill the entire available area as the browser window is resized. Typically this is done using CSS. Liquid layouts can be quite helpful for certain types of sites, but they also tend to be significantly more effort than fixed width layouts, and their usefulness depends on the site content and how well implemented they are. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62334",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
62,340 | std::vector<int> ints;// ... fill ints with random valuesfor(std::vector<int>::iterator it = ints.begin(); it != ints.end(); ){ if(*it < 10) { *it = ints.back(); ints.pop_back(); continue; } it++;} This code is not working because when pop_back() is called, it is invalidated. But I don't find any doc talking about invalidation of iterators in std::vector::pop_back() . Do you have some links about that? | The call to pop_back() removes the last element in the vector and so the iterator to that element is invalidated. The pop_back() call does not invalidate iterators to items before the last element, only reallocation will do that. From Josuttis' "C++ Standard Library Reference": Inserting or removing elements invalidates references, pointers, and iterators that refer to the following element. If an insertion causes reallocation, it invalidates all references, iterators, and pointers. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6605/"
]
} |
62,353 | I have a solution with multiple project. I am trying to optimize AssemblyInfo.cs files by linking one solution wide assembly info file. What are the best practices for doing this? Which attributes should be in solution wide file and which are project/assembly specific? Edit: If you are interested there is a follow up question What are differences between AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion? | We're using a global file called GlobalAssemblyInfo.cs and a local one called AssemblyInfo.cs. The global file contains the following attributes: [assembly: AssemblyProduct("Your Product Name")] [assembly: AssemblyCompany("Your Company")] [assembly: AssemblyCopyright("Copyright © 2008 ...")] [assembly: AssemblyTrademark("Your Trademark - if applicable")] #if DEBUG [assembly: AssemblyConfiguration("Debug")] #else [assembly: AssemblyConfiguration("Release")] #endif [assembly: AssemblyVersion("This is set by build process")] [assembly: AssemblyFileVersion("This is set by build process")] The local AssemblyInfo.cs contains the following attributes: [assembly: AssemblyTitle("Your assembly title")] [assembly: AssemblyDescription("Your assembly description")] [assembly: AssemblyCulture("The culture - if not neutral")] [assembly: ComVisible(true/false)] // unique id per assembly [assembly: Guid("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")] You can add the GlobalAssemblyInfo.cs using the following procedure: Select Add/Existing Item... in the context menu of the project Select GlobalAssemblyInfo.cs Expand the Add-Button by clicking on that little down-arrow on the right hand Select "Add As Link" in the buttons drop down list | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/62353",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2361/"
]
} |
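
As an added illustration (not part of the original answer), this C# sketch reads the attributes back at runtime, which is a quick way to verify that the linked GlobalAssemblyInfo.cs and the per-project AssemblyInfo.cs both ended up in the compiled assembly; it assumes all the listed attributes are present:

```csharp
using System;
using System.Reflection;

class VersionInfo
{
    static void Main()
    {
        Assembly asm = Assembly.GetExecutingAssembly();

        // These two come from the shared, linked GlobalAssemblyInfo.cs.
        var product = (AssemblyProductAttribute)Attribute.GetCustomAttribute(
            asm, typeof(AssemblyProductAttribute));
        var fileVersion = (AssemblyFileVersionAttribute)Attribute.GetCustomAttribute(
            asm, typeof(AssemblyFileVersionAttribute));

        // This one comes from the per-project AssemblyInfo.cs.
        var title = (AssemblyTitleAttribute)Attribute.GetCustomAttribute(
            asm, typeof(AssemblyTitleAttribute));

        Console.WriteLine("{0} {1} - {2}", product.Product, fileVersion.Version, title.Title);
    }
}
```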
62,389 | What are the advantages/disadvantages between MS VS C++ 6.0 and MSVS C++ 2008? The main reason for asking such a question is that there are still many decent programmers that prefer using the older version instead of the newest version. Is there any reason the might prefer the older over the new? | Advantages of Visual Studio 2008 over Visual C++ 6.0: Much more standards compliant C++ compiler, with better template handling Support for x64 / mobile / XBOX targets Improved STL implementation Support for C++0x TR1 (smart pointers, regular expressions, etc) Secure C runtime library Improved code navigation Improved debugger; possibility to run remote debug sessions Better compiler optimizations Many bug fixes Faster builds on multi-core/multi-CPU systems Improved IDE user interface, with many nice features Improved macro support in the IDE; DTE allows access to more IDE methods and variables Updated MFC library (in VS2008 Service Pack 1) support for OPENMP (easy multithreading)(only in VS2008 pro.) Disadvantages of moving to Visual Studio 2008: The IDE is a lot slower than VS6 Intellisense still has performance issues (replacing it with VisualAssistX can help) Side-by-side assemblies make app deployment much more problematic The local (offline) MSDN library is extremely slow As mentioned here , there's no profiler in the Professional version In the spirit of Joel's recent blog post , I've combined some of the other answers posted into a single answer (and made this a community-owned post, so I won't gain rep from it). I hope you don't mind. Many thanks to Laur, NeARAZ, 17 of 26, me.yahoo.com, and everyone else who answered. -- ChrisN | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6619/"
]
} |
62,418 | When a Java-based application starts to misbehave on a Windows machine, you want to be able to kill the process in the task manager if you can't quit the application normally. Most of the time, there's more than one Java-based application running on my machine. Is there a better way than just randomly killing java.exe processes in the hope that you'll eventually hit the correct application? EDIT: Thank you to all the people who pointed me to Sysinternals' Process Explorer - exactly what I'm looking for! | Download Sysinternals' Process Explorer. It's a task manager that is much more powerful than Windows' own manager. One of its features is that you can see all the resources that each process is using (like registry keys, hard disk directories, named pipes, etc.). So, browsing the resources that each java.exe process holds might help you determine which one you want to kill. I usually find out by looking for the one that's using a certain log file directory. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62418",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6094/"
]
} |
62,449 | When using the Net.Sockets.TcpListener, what is the best way to handle incoming connections (.AcceptSocket) in seperate threads? The idea is to start a new thread when a new incoming connection is accepted, while the tcplistener then stays available for further incoming connections (and for every new incoming connection a new thread is created). All communication and termination with the client that originated the connection will be handled in the thread. Example C# of VB.NET code is appreciated. | The code that I've been using looks like this: class Server{ private AutoResetEvent connectionWaitHandle = new AutoResetEvent(false); public void Start() { TcpListener listener = new TcpListener(IPAddress.Any, 5555); listener.Start(); while(true) { IAsyncResult result = listener.BeginAcceptTcpClient(HandleAsyncConnection, listener); connectionWaitHandle.WaitOne(); // Wait until a client has begun handling an event connectionWaitHandle.Reset(); // Reset wait handle or the loop goes as fast as it can (after first request) } } private void HandleAsyncConnection(IAsyncResult result) { TcpListener listener = (TcpListener)result.AsyncState; TcpClient client = listener.EndAcceptTcpClient(result); connectionWaitHandle.Set(); //Inform the main thread this connection is now handled //... Use your TcpClient here client.Close(); }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1271/"
]
} |
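For completeness, here is a minimal sketch of the literal thread-per-connection pattern the question asks about, using the blocking AcceptTcpClient call rather than the asynchronous API shown in the answer above. HandleClient is a placeholder for whatever per-client protocol work you need; this is an illustration, not a hardened server.

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class ThreadPerConnectionServer
    {
        public void Start()
        {
            TcpListener listener = new TcpListener(IPAddress.Any, 5555);
            listener.Start();
            while (true)
            {
                TcpClient client = listener.AcceptTcpClient(); // blocks until a client connects
                Thread worker = new Thread(() => HandleClient(client));
                worker.IsBackground = true;
                worker.Start();                                // listener is immediately free to accept the next client
            }
        }

        private void HandleClient(TcpClient client)
        {
            using (client)
            {
                // ... read from / write to client.GetStream() and close when done ...
            }
        }
    }

Each accepted client gets its own thread; for a large number of simultaneous connections the asynchronous approach in the answer (or the thread pool) scales better.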
62,496 | I have a number of tracks recorded by a GPS, which more formally can be described as a number of line strings. Now, some of the recorded tracks might be recordings of the same route, but because of inaccuracies in the GPS system, the fact that the recordings were made on separate occasions and that they might have been recorded travelling at different speeds, they won't match up perfectly, but still look close enough when viewed on a map by a human to determine that it's actually the same route that has been recorded. I want to find an algorithm that calculates the similarity between two line strings. I have come up with some home-grown methods to do this, but would like to know if this is a problem that already has good algorithms to solve it. How would you calculate the similarity, given that "similar" means "represents the same path on a map"? Edit: For those unsure of what I'm talking about, please look at this link for a definition of what a line string is: http://msdn.microsoft.com/en-us/library/bb895372.aspx - I'm not asking about character strings. | Compute the Fréchet distance on each pair of tracks. The distance can be used to gauge the similarity of your tracks. Math alert: Fréchet was a pioneer in the field of metric spaces, which is relevant to your problem. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62496",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/890/"
]
} |
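The answer above names the technique but shows no code, so here is a small illustrative sketch of the discrete Fréchet distance (the Eiter-Mannila memoised recursion) in C#. It assumes each track has already been reduced to an ordered array of x/y points; the Euclidean helper is a placeholder - for raw GPS coordinates you would normally project to a planar system or use a great-circle distance first.

    using System;

    static class Frechet
    {
        // Discrete Fréchet distance between two polylines; each point is a double[2] { x, y }.
        // A small result means the two tracks stay close together along their whole length.
        public static double Distance(double[][] p, double[][] q)
        {
            double[,] ca = new double[p.Length, q.Length];
            for (int i = 0; i < p.Length; i++)
                for (int j = 0; j < q.Length; j++)
                    ca[i, j] = -1.0;                      // -1 marks "not computed yet"
            return C(ca, p, q, p.Length - 1, q.Length - 1);
        }

        private static double C(double[,] ca, double[][] p, double[][] q, int i, int j)
        {
            if (ca[i, j] >= 0) return ca[i, j];           // memoised
            double d = Dist(p[i], q[j]);
            if (i == 0 && j == 0) ca[i, j] = d;
            else if (i == 0) ca[i, j] = Math.Max(C(ca, p, q, 0, j - 1), d);
            else if (j == 0) ca[i, j] = Math.Max(C(ca, p, q, i - 1, 0), d);
            else ca[i, j] = Math.Max(Math.Min(Math.Min(C(ca, p, q, i - 1, j), C(ca, p, q, i - 1, j - 1)), C(ca, p, q, i, j - 1)), d);
            return ca[i, j];
        }

        private static double Dist(double[] a, double[] b)
        {
            double dx = a[0] - b[0], dy = a[1] - b[1];
            return Math.Sqrt(dx * dx + dy * dy);          // placeholder metric
        }
    }

The recursion depth grows with track length, so very long tracks would want an iterative (bottom-up) version of the same table.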
62,503 | In C#, int and Int32 are the same thing, but I've read a number of times that int is preferred over Int32 with no reason given. Is there a reason, and should I care? | The two are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32 s in the same way. The resulting code will be identical: the difference is purely one of readability or code appearance. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62503",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1826/"
]
} |
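A tiny program to convince yourself that the two names really are the same type (purely illustrative):

    using System;

    class IntAliasDemo
    {
        static void Main()
        {
            int a = 42;
            Int32 b = a;                                      // no conversion: int is an alias for System.Int32
            Console.WriteLine(typeof(int) == typeof(Int32));  // prints True
            Console.WriteLine(b);
        }
    }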
62,504 | I am using MS Access 2003. I want to run a lot of insert SQL statements in what is called 'Query' in MS Access. Is there any easy(or indeed any way) to do it? | yes and no. You can't do: insert into foo (c1, c2, c3)values ("v1a", "v2a", "v3a"), ("v1b", "v2b", "v3b"), ("v1c", "v2c", "v3c") but you can do insert into foo (c1, c2, c3) select (v1, v2, v3) from bar What does that get you if you don't already have the data in a table? Well, you could craft a Select statement composed of a lot of unions of Selects with hard coded results. INSERT INTO foo (f1, f2, f3) SELECT * FROM (select top 1 "b1a" AS f1, "b2a" AS f2, "b3a" AS f3 from onerow union all select top 1 "b1b" AS f1, "b2b" AS f2, "b3b" AS f3 from onerow union all select top 1 "b1c" AS f1, "b2c" AS f2, "b3c" AS f3 from onerow) Note: I also have to include a some form of a dummy table (e.g., onerow) to fool access into allowing the union (it must have at least one row in it), and you need the "top 1" to ensure you don't get repeats for a table with more than one row But then again, it would probably be easier just to do three separate insert statements,especially if you are already building things up in a loop (unless of course the cost of doing the inserts is greater than the cost of your time to code it). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6613/"
]
} |
62,512 | I would like to find out safe ways of implementing three-dimensional arrays of integers in C++, using pointer arithmetic / dynamic memory allocation, or, alternatively, using STL techniques such as vectors. Essentially I want my integer array dimensions to look like: [ x ][ y ][ z ] where x and y are in the range 20-6000 and z is known and equals 4. | Have a look at the Boost multi-dimensional array library. Here's an example (adapted from the Boost documentation): #include "boost/multi_array.hpp" int main() { // Create a 3D array that is 20 x 30 x 4 int x = 20; int y = 30; int z = 4; typedef boost::multi_array<int, 3> array_type; typedef array_type::index index; array_type my_array(boost::extents[x][y][z]); // Assign values to the elements int values = 0; for (index i = 0; i != x; ++i) { for (index j = 0; j != y; ++j) { for (index k = 0; k != z; ++k) { my_array[i][j][k] = values++; } } }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6795/"
]
} |
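Since the question also asks about plain STL vectors, here is a small no-Boost sketch: one flat std::vector indexed by hand, which keeps the whole array contiguous (a vector of vectors of vectors also works but scatters the allocations). The class name and layout are just for illustration.

    #include <vector>
    #include <cstddef>

    class Array3D {
    public:
        Array3D(std::size_t x, std::size_t y, std::size_t z)
            : x_(x), y_(y), z_(z), data_(x * y * z, 0) {}

        int& at(std::size_t i, std::size_t j, std::size_t k) {
            return data_[(i * y_ + j) * z_ + k];   // row-major indexing
        }

    private:
        std::size_t x_, y_, z_;
        std::vector<int> data_;
    };

    // usage: Array3D a(20, 30, 4); a.at(10, 20, 3) = 42;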
62,539 | What is the dependency inversion principle and why is it important? | Check this document out: The Dependency Inversion Principle . It basically says: High level modules should not depend upon low-level modules. Both should depend upon abstractions. Abstractions should never depend upon details. Details should depend upon abstractions. As to why it is important, in short: changes are risky, and by depending on a concept instead of on an implementation, you reduce the need for change at call sites. Effectively, the DIP reduces coupling between different pieces of code. The idea is that although there are many ways of implementing, say, a logging facility, the way you would use it should be relatively stable in time. If you can extract an interface that represents the concept of logging, this interface should be much more stable in time than its implementation, and call sites should be much less affected by changes you could make while maintaining or extending that logging mechanism. By also making the implementation depend on an interface, you get the possibility to choose at run-time which implementation is better suited for your particular environment. Depending on the cases, this may be interesting too. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62539",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3012/"
]
} |
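A small C# illustration of the principle described above, using the logging example from the answer; every name here is invented for the sake of the sketch.

    // The abstraction both sides depend on.
    public interface ILogger
    {
        void Log(string message);
    }

    // Low-level detail: depends on the abstraction by implementing it.
    public class FileLogger : ILogger
    {
        public void Log(string message)
        {
            System.IO.File.AppendAllText("app.log", message + System.Environment.NewLine);
        }
    }

    // High-level module: depends only on ILogger, never on FileLogger directly.
    public class OrderProcessor
    {
        private readonly ILogger _logger;

        public OrderProcessor(ILogger logger) { _logger = logger; }

        public void Process(string orderId)
        {
            // ... business logic ...
            _logger.Log("Processed order " + orderId);
        }
    }

    // The call site picks the detail, e.g. new OrderProcessor(new FileLogger()),
    // and can swap in another ILogger without touching OrderProcessor.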
62,542 | I'm not talking about bug tracking software (like Bugzilla or Jira). I'm looking for something that: Stores test specifications in text format Combines test specs into test coverage scenarios Keeps track of the progress through testing scenarios Links test specs with bug reports stored in Bugzilla Generates progress reports Is centrally managed on its own (i.e. is not a hack/extension on top of something else) | TestLink is a pretty nice open source test tracking tool with the features you need, and is still under active development. Take a look at http://testlink.org/ | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62542",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6751/"
]
} |
62,567 | What is the easiest way to compare strings in Python, ignoring case? Of course one can do (str1.lower() <= str2.lower()), etc., but this created two additional temporary strings (with the obvious alloc/g-c overheads). I guess I'm looking for an equivalent to C's stricmp(). [Some more context requested, so I'll demonstrate with a trivial example:] Suppose you want to sort a looong list of strings. You simply do theList.sort().This is O(n * log(n)) string comparisons and no memory management (since allstrings and list elements are some sort of smart pointers). You are happy. Now, you want to do the same, but ignore the case (let's simplify and sayall strings are ascii, so locale issues can be ignored).You can do theList.sort(key=lambda s: s.lower()), but then you cause two new allocations per comparison, plus burden the garbage-collector with the duplicated(lowered) strings. Each such memory-management noise is orders-of-magnitude slower than simple string comparison. Now, with an in-place stricmp()-like function, you do: theList.sort(cmp=stricmp)and it is as fast and as memory-friendly as theList.sort(). You are happy again. The problem is any Python-based case-insensitive comparison involves implicit string duplications, so I was expecting to find a C-based comparisons (maybe in module string). Could not find anything like that, hence the question here.(Hope this clarifies the question). | Here is a benchmark showing that using str.lower is faster than the accepted answer's proposed method ( libc.strcasecmp ): #!/usr/bin/env python2.7import randomimport timeitfrom ctypes import *libc = CDLL('libc.dylib') # change to 'libc.so.6' on linuxwith open('/usr/share/dict/words', 'r') as wordlist: words = wordlist.read().splitlines()random.shuffle(words)print '%i words in list' % len(words)setup = 'from __main__ import words, libc; gc.enable()'stmts = [ ('simple sort', 'sorted(words)'), ('sort with key=str.lower', 'sorted(words, key=str.lower)'), ('sort with cmp=libc.strcasecmp', 'sorted(words, cmp=libc.strcasecmp)'),]for (comment, stmt) in stmts: t = timeit.Timer(stmt=stmt, setup=setup) print '%s: %.2f msec/pass' % (comment, (1000*t.timeit(10)/10)) typical times on my machine: 235886 words in listsimple sort: 483.59 msec/passsort with key=str.lower: 1064.70 msec/passsort with cmp=libc.strcasecmp: 5487.86 msec/pass So, the version with str.lower is not only the fastest by far, but also the most portable and pythonic of all the proposed solutions here.I have not profiled memory usage, but the original poster has still not given a compelling reason to worry about it. Also, who says that a call into the libc module doesn't duplicate any strings? NB: The lower() string method also has the advantage of being locale-dependent. Something you will probably not be getting right when writing your own "optimised" solution. Even so, due to bugs and missing features in Python, this kind of comparison may give you wrong results in a unicode context. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6984/"
]
} |
62,570 | I would like to move a file or folder from one place to another within the same repository without having to use Repo Browser to do it, and without creating two independent add/delete operations. Using Repo Browser works fine except that your code will be hanging in a broken state until you get any supporting changes checked in afterwards (like the .csproj file for example). Update: People have suggested "move" from the command line. Is there a TortoiseSVN equivalent? | To move a file or set of files using Tortoise SVN , right-click-and-drag the target files to their destination and release the right mouse button. The popup menu will have a SVN move versioned files here option. Note that the destination folder must have already been added to the repository for the SVN move versioned files here option to appear. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/62570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1436/"
]
} |
62,618 | I've got many, many mp3 files that I would like to merge into a single file. I've used the command line method copy /b 1.mp3+2.mp3 3.mp3 but it's a pain when there's a lot of them and their namings are inconsistent. The time never seems to come out right either. | As Thomas Owens pointed out, simply concatenating the files will leave multiple ID3 headers scattered throughout the resulting concatenated file - so the time/bitrate info will be wildly wrong. You're going to need to use a tool which can combine the audio data for you. mp3wrap would be ideal for this - it's designed to join together MP3 files, without needing to decode + re-encode the data (which would result in a loss of audio quality) and will also deal with the ID3 tags intelligently. The resulting file can also be split back into its component parts using the mp3splt tool - mp3wrap adds information to the IDv3 comment to allow this. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/62618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4230/"
]
} |
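To make the suggestion concrete, the basic invocations look roughly like this; the option names should be checked against the man pages of the versions you actually install, since they may differ:

    mp3wrap album.mp3 01.mp3 02.mp3 03.mp3    # produces album_MP3WRAP.mp3 with the merged audio and correct length info
    mp3splt -w album_MP3WRAP.mp3              # splits a wrapped file back into its original parts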
62,625 | Using C#, I need a class called User that has a username, password, active flag, first name, last name, full name, etc. There should be methods to authenticate and save a user. Do I just write a test for the methods? And do I even need to worry about testing the properties since they are .Net's getter and setters? | Many great responses to this are also on my question: " Beginning TDD - Challenges? Solutions? Recommendations? " May I also recommend taking a look at my blog post (which was partly inspired by my question), I have got some good feedback on that. Namely: I Don’t Know Where to Start? Start afresh. Only think about writing tests when you are writing new code. This can be re-working of old code, or a completely new feature. Start simple. Don’t go running off and trying to get your head round a testing framework as well as being TDD-esque. Debug.Assert works fine. Use it as a starting point. It doesn’t mess with your project or create dependencies. Start positive. You are trying to improve your craft, feel good about it. I have seen plenty of developers out there that are happy to stagnate and not try new things to better themselves. You are doing the right thing, remember this and it will help stop you from giving up. Start ready for a challenge. It is quite hard to start getting into testing. Expect a challenge, but remember – challenges can be overcome. Only Test For What You Expect I had real problems when I first started because I was constantly sat there trying to figure out every possible problem that could occur and then trying to test for it and fix. This is a quick way to a headache. Testing should be a real YAGNI process. If you know there is a problem, then write a test for it. Otherwise, don’t bother. Only Test One Thing Each test case should only ever test one thing. If you ever find yourself putting “and” in the test case name, you’re doing something wrong. I hope this means we can move on from "getters and setters" :) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9938/"
]
} |
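As a concrete starting point for the question's User class, a first test in the Debug.Assert style the answer recommends might look like the sketch below. The User constructor, the Active property and the Authenticate signature are assumptions taken from the question, not an existing API.

    using System.Diagnostics;

    static class UserTests
    {
        public static void Run()
        {
            var user = new User("jdoe", "s3cret") { Active = true };

            Debug.Assert(user.Authenticate("jdoe", "s3cret"),    "valid credentials should authenticate");
            Debug.Assert(!user.Authenticate("jdoe", "wrong"),    "wrong password should fail");
            Debug.Assert(!user.Authenticate("nobody", "s3cret"), "unknown username should fail");
        }
    }

Once a handful of these exist, moving them into NUnit or MSTest is mostly a matter of swapping Debug.Assert for the framework's assertion calls.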
62,658 | I'm trying to install Laconica , an open-source Microblogging application on my Windows development server using XAMPP as per the instructions provided . The website cannot find PEAR, and throws the below errors: Warning: require_once(PEAR.php) [function.require-once]: failed to open stream: No such file or directory in C:\xampplite\htdocs\laconica\lib\common.php on line 31 Fatal error: require_once() [function.require]: Failed opening required 'PEAR.php' (include_path='.;\xampplite\php\pear\PEAR') in C:\xampplite\htdocs\laconica\lib\common.php on line 31 PEAR is located in C:\xampplite\php\pear phpinfo() shows me that the include path is .;\xampplite\php\pear What am I doing wrong? Why isn't the PEAR folder being included? | You need to fix your include_path system variable to point to the correct location. To fix it edit the php.ini file. In that file you will find a line that says, " include_path = ... ". (You can find out what the location of php.ini by running phpinfo() on a page.) Fix the part of the line that says, " \xampplite\php\pear\PEAR " to read " C:\xampplite\php\pear ". Make sure to leave the semi-colons before and/or after the line in place. Restart PHP and you should be good to go. To restart PHP in IIS you can restart the application pool assigned to your site or, better yet, restart IIS all together. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62658",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6939/"
]
} |
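In other words, the line in php.ini should end up looking something like this (path adjusted to wherever your XAMPP Lite install actually lives), followed by a restart of the web server so the change is picked up:

    include_path = ".;C:\xampplite\php\pear"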
62,771 | I want to include a batch file rename functionality in my application. A user can type a destination filename pattern and (after replacing some wildcards in the pattern) I need to check if it's going to be a legal filename under Windows. I've tried to use regular expression like [a-zA-Z0-9_]+ but it doesn't include many national-specific characters from various languages (e.g. umlauts and so on). What is the best way to do such a check? | You can get a list of invalid characters from Path.GetInvalidPathChars and GetInvalidFileNameChars . UPD: See Steve Cooper's suggestion on how to use these in a regular expression. UPD2: Note that according to the Remarks section in MSDN "The array returned from this method is not guaranteed to contain the complete set of characters that are invalid in file and directory names." The answer provided by sixlettervaliables goes into more details. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/62771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7162/"
]
} |
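A sketch of the regex-based check the answer alludes to, built from the framework's own character list rather than a hand-written whitelist, so locale-specific letters such as umlauts are not rejected. Treat it as illustrative; as noted above, the character list is not a complete definition of filename validity (reserved device names like CON, trailing dots, and length limits are separate checks).

    using System.IO;
    using System.Text.RegularExpressions;

    static class FileNameCheck
    {
        private static readonly Regex Invalid = new Regex(
            "[" + Regex.Escape(new string(Path.GetInvalidFileNameChars())) + "]");

        public static bool IsValidFileName(string candidate)
        {
            return !string.IsNullOrEmpty(candidate) && !Invalid.IsMatch(candidate);
        }
    }

    // e.g. IsValidFileName("Träck_01.mp3") -> true, IsValidFileName("bad:name?.mp3") -> false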
62,784 | It's fall of 2008, and I still hear developers say that you should not design a site that requires JavaScript. I understand that you should develop sites that degrade gracefully when JS is not present/on. But at what point do you not include functionality that can only be powered by JS? I guess the question comes down to demographics. Are there numbers out there of how many folks are browsing without JS? | Just as long as you're aware of the accessibility limitations you might be introducing, i.e. for users of screen-reading software, etc. It's one thing to exclude people because they choose to turn off JS or use a browser which doesn't support it; it's entirely another to exclude them because of a disability. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6957/"
]
} |
62,804 | Is there a standard library method that converts a string that has duration in the standard ISO 8601 Duration (also used in XSD for its duration type) format into the .NET TimeSpan object? For example, P0DT1H0M0S which represents a duration of one hour, is converted into New TimeSpan(0,1,0,0,0). A Reverse converter does exist which works as follows:Xml.XmlConvert.ToString(New TimeSpan(0,1,0,0,0))The above expression will return P0DT1H0M0S. | This will convert from xs:duration to TimeSpan: System.Xml.XmlConvert.ToTimeSpan("P0DT1H0M0S") See http://msdn.microsoft.com/en-us/library/system.xml.xmlconvert.totimespan.aspx | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/62804",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7105/"
]
} |
62,814 | Is there any difference between a binary semaphore and mutex or are they essentially the same? | They are NOT the same thing. They are used for different purposes! While both types of semaphores have a full/empty state and use the same API, their usage is very different. Mutual Exclusion Semaphores Mutual Exclusion semaphores are used to protect shared resources (data structure, file, etc..). A Mutex semaphore is "owned" by the task that takes it. If Task B attempts to semGive a mutex currently held by Task A, Task B's call will return an error and fail. Mutexes always use the following sequence: - SemTake - Critical Section - SemGive Here is a simple example: Thread A Thread B Take Mutex access data ... Take Mutex <== Will block ... Give Mutex access data <== Unblocks ... Give Mutex Binary Semaphore Binary Semaphore address a totally different question: Task B is pended waiting for something to happen (a sensor being tripped for example). Sensor Trips and an Interrupt Service Routine runs. It needs to notify a task of the trip. Task B should run and take appropriate actions for the sensor trip. Then go back to waiting. Task A Task B ... Take BinSemaphore <== wait for something Do Something Noteworthy Give BinSemaphore do something <== unblocks Note that with a binary semaphore, it is OK for B to take the semaphore and A to give it. Again, a binary semaphore is NOT protecting a resource from access. The act of Giving and Taking a semaphore are fundamentally decoupled. It typically makes little sense for the same task to so a give and a take on the same binary semaphore. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/62814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7086/"
]
} |
62,921 | How do you decide whether to use threads or create a separate process altogether in your application to achieve parallelism? | Threads are more lightweight, and for making several "workers" just to utilize all available CPUs or cores, you're better off with threads. When you need the workers to be better isolated and more robust, like with most servers, go with separate processes (talking over sockets). When one thread crashes badly, it usually takes down the entire process, including the other threads working in that process. If a process turns sour and dies, it doesn't touch any other process, so they can happily go on with their business as if nothing happened. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7086/"
]
} |
62,929 | I am getting the following error trying to read from a socket. I'm doing a readInt() on that InputStream , and I am getting this error. Perusing the documentation this suggests that the client part of the connection closed the connection. In this scenario, I am the server. I have access to the client log files and it is not closing the connection, and in fact its log files suggest I am closing the connection. So does anybody have an idea why this is happening? What else to check for? Does this arise when there are local resources that are perhaps reaching thresholds? I do note that I have the following line: socket.setSoTimeout(10000); just prior to the readInt() . There is a reason for this (long story), but just curious, are there circumstances under which this might lead to the indicated error? I have the server running in my IDE, and I happened to leave my IDE stuck on a breakpoint, and I then noticed the exact same errors begin appearing in my own logs in my IDE. Anyway, just mentioning it, hopefully not a red herring. :-( | There are several possible causes. The other end has deliberately reset the connection, in a way which I will not document here. It is rare, and generally incorrect, for application software to do this, but it is not unknown for commercial software. More commonly, it is caused by writing to a connection that the other end has already closed normally. In other words an application protocol error. It can also be caused by closing a socket when there is unread data in the socket receive buffer. In Windows, 'software caused connection abort', which is not the same as 'connection reset', is caused by network problems sending from your end. There's a Microsoft knowledge base article about this. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/62929",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
62,936 | For example: man(1) , find(3) , updatedb(2) ? What do the numbers in parentheses (Brit. "brackets") mean? | It's the section that the man page for the command is assigned to. These are split as General commands System calls C library functions Special files (usually devices, those found in /dev) and drivers File formats and conventions Games and screensavers Miscellanea System administration commands and daemons Original descriptions of each section can be seen in the Unix Programmer's Manual (page ii). In order to access a man page given as "foo(5)", run: man 5 foo | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/62936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7370/"
]
} |
62,946 | I have a requirement to make a large amount of code MISRA compliant. First question: can somebody give an estimate, based on experience, of what it takes to bring well-written embedded-system code into compliance? I understand that "well written" is poorly defined and vague, so I am only asking for a rough estimate. Second question: any recommendation for a tool that is customizable (i.e. allows suppressing specific warnings) and can be used in an automated build environment (i.e. has a command-line interface)? Any other useful suggestions that could help with this task are welcome. Thanks, Ilya. | I also highly recommend PC-Lint. If you happen to be compiling your code with Visual Studio I recommend a plug-in, 'Visual Lint' from Riverblade. If you cannot compile the code in Visual Studio, you can still run PC-Lint from the command line to good effect. Some embedded system compilers provide MISRA compliance testing as compiler warnings. I use the IAR compiler for Arm7/Arm9 development. It provides an easy-to-configure MISRA compliance checklist right in the compiler setup. It is difficult to come up with a rule of thumb for estimating the time it would take you to make some well-written code MISRA compliant. A lot depends on the existing coding habits of the programmers and how closely they follow the MISRA rules in the first place. Rough estimates: 2 - 3 days to become adept at PC-Lint usage. Initial pass at making existing code MISRA compliant: 10 to 25 percent of the time spent writing the code in the first place. Keeping code MISRA compliant: 5 to 10 percent added to code development. Half of this cost is changing the habits of your coders to follow the 'MISRA way' of doing things. The other half is the extra cost of code testing and inspection to ensure MISRA compliance. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6807/"
]
} |
62,963 | Last year, Scott Guthrie stated “You can actually override the raw SQL that LINQ to SQL uses if you want absolute control over the SQL executed”, but I can’t find documentation describing an extensibility method. I would like to modify the following LINQ to SQL query: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers let orderCount = row.Orders.Count () select new { row.ContactName, orderCount };} Which results in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount]FROM [dbo].[Customers] AS [t0] To: using (NorthwindContext northwind = new NorthwindContext ()) { var q = from row in northwind.Customers.With ( TableHint.NoLock, TableHint.Index (0)) let orderCount = row.Orders.With ( TableHint.HoldLock).Count () select new { row.ContactName, orderCount };} Which would result in the following TSQL: SELECT [t0].[ContactName], ( SELECT COUNT(*) FROM [dbo].[Orders] AS [t1] WITH (HOLDLOCK) WHERE [t1].[CustomerID] = [t0].[CustomerID] ) AS [orderCount]FROM [dbo].[Customers] AS [t0] WITH (NOLOCK, INDEX(0)) Using: public static Table<TEntity> With<TEntity> ( this Table<TEntity> table, params TableHint[] args) where TEntity : class { //TODO: implement return table;}public static EntitySet<TEntity> With<TEntity> ( this EntitySet<TEntity> entitySet, params TableHint[] args) where TEntity : class { //TODO: implement return entitySet;} And public class TableHint { //TODO: implement public static TableHint NoLock; public static TableHint HoldLock; public static TableHint Index (int id) { return null; } public static TableHint Index (string name) { return null; }} Using some type of LINQ to SQL extensibility, other than this one . Any ideas? | The ability to change the underlying provider and thus modify the SQL did not make the final cut in LINQ to SQL. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/62963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5869/"
]
} |
63,035 | I've just started developing an ExtJS application that I plan to support with a very lightweight JSON PHP service. Other than that, it will be standalone. My question is, what is the best way to organize the files and classes that will inevitably come into existence? Anyone have any experience with large ExtJS projects (several thousand lines). | I would start here http://blog.extjs.eu/know-how/writing-a-big-application-in-ext/ This site gives a good introductory overview of how to structure your application. We are currently using these ideas in two of our ASP.NET MVC / ExtJS applications. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
63,086 | I want to start using Python for small projects but the fact that a misplaced tab or indent can throw a compile error is really getting on my nerves. Is there some type of setting to turn this off? I'm currently using NotePad++. Is there maybe an IDE that would take care of the tabs and indenting? | The answer is no. At least, not until something like the following is implemented: from __future__ import braces | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231/"
]
} |
63,090 | Here we go again, the old argument still arises... Would we better have a business key as a primary key, or would we rather have a surrogate id (i.e. an SQL Server identity) with a unique constraint on the business key field? Please, provide examples or proof to support your theory. | Both. Have your cake and eat it. Remember there is nothing special about a primary key, except that it is labelled as such. It is nothing more than a NOT NULL UNIQUE constraint, and a table can have more than one. If you use a surrogate key, you still want a business key to ensure uniqueness according to the business rules. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/63090",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4690/"
]
} |
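The "both" approach from the answer, spelled out as SQL Server DDL; the table and column names are invented for the example:

    CREATE TABLE dbo.Customer
    (
        CustomerId   int IDENTITY(1,1) NOT NULL,   -- surrogate key
        CustomerCode varchar(20)       NOT NULL,   -- business key
        Name         nvarchar(100)     NOT NULL,
        CONSTRAINT PK_Customer PRIMARY KEY (CustomerId),
        CONSTRAINT UQ_Customer_Code UNIQUE (CustomerCode)
    );

Foreign keys reference CustomerId, while the UNIQUE constraint keeps the business rule ("no two customers share a code") enforced by the database.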
63,104 | When a previous Vim session crashed, you are greeted with the "Swap file ... already exists!" for each and every file that was open in the previous session. Can you make this Vim recovery prompt smarter? (Without switching off recovery!) Specifically, I'm thinking of: If the swapped version does not contain unsaved changes and the editing process is no longer running, can you make Vim automatically delete the swap file? Can you automate the suggested process of saving the recovered file under a new name, merging it with file on disk and then deleting the old swap file, so that minimal interaction is required? Especially when the swap version and the disk version are the same, everything should be automatic. I discovered the SwapExists autocommand but I don't know if it can help with these tasks. | I have vim store my swap files in a single local directory, by having this in my .vimrc: set directory=~/.vim/swap,. Among other benefits, this makes the swap files easy to find all at once.Now when my laptop loses power or whatever and I start back up with a bunch of swap files laying around, I just run my cleanswap script: TMPDIR=$(mktemp -d) || exit 1RECTXT="$TMPDIR/vim.recovery.$USER.txt"RECFN="$TMPDIR/vim.recovery.$USER.fn"trap 'rm -f "$RECTXT" "$RECFN"; rmdir "$TMPDIR"' 0 1 2 3 15for q in ~/.vim/swap/.*sw? ~/.vim/swap/*; do [[ -f $q ]] || continue rm -f "$RECTXT" "$RECFN" vim -X -r "$q" \ -c "w! $RECTXT" \ -c "let fn=expand('%')" \ -c "new $RECFN" \ -c "exec setline( 1, fn )" \ -c w\! \ -c "qa" if [[ ! -f $RECFN ]]; then echo "nothing to recover from $q" rm -f "$q" continue fi CRNT="$(cat $RECFN)" if diff --strip-trailing-cr --brief "$CRNT" "$RECTXT"; then echo "removing redundant $q" echo " for $CRNT" rm -f "$q" else echo $q contains changes vim -n -d "$CRNT" "$RECTXT" rm -i "$q" || exit fidone This will remove any swap files that are up-to-date with the real files. Any that don't match are brought up in a vimdiff window so I can merge in my unsaved changes. --Chouser | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6918/"
]
} |
63,142 | What is the Reuse/Release Equivalence Principle and why is it important? | The Reuse/Release Equivalence Principle (REP) says: The unit of reuse is the unit of release. Effective reuse requires tracking of releases from a change control system. The package is the effective unit of reuse and release. The unit of reuse is the unit of release Code should not be reused by copying it from one class and pasting it into another. If the original author fixes any bugs in the code, or adds any features, you will not automatically get the benefit. You will have to find out what's changed, then alter your copy. Your code and the original code will gradually diverge. Instead, code should be reused by including a released library in your code. The original author retains responsibility for maintaining it; you should not even need to see the source code. Effective reuse requires tracking of releases from a change control system The author of a library needs to identify releases with numbers or names of some sort. This allows users of the library to identify different versions. This requires the use of some kind of release tracking system. The package is the effective unit of reuse and release It might be possible to use a class as the unit of reuse and release, however there are so many classes in a typical application, it would be burdensome for the release tracking system to keep track of them all. A larger-scale entity is required, and the package fits this need well. See also Robert Martin's article on Granularity . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63142",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3012/"
]
} |
63,150 | While working in a Java app, I recently needed to assemble a comma-delimited list of values to pass to another web service without knowing how many elements there would be in advance. The best I could come up with off the top of my head was something like this: public String appendWithDelimiter( String original, String addition, String delimiter ) { if ( original.equals( "" ) ) { return addition; } else { return original + delimiter + addition; }}String parameterString = "";if ( condition ) parameterString = appendWithDelimiter( parameterString, "elementName", "," );if ( anotherCondition ) parameterString = appendWithDelimiter( parameterString, "anotherElementName", "," ); I realize this isn't particularly efficient, since there are strings being created all over the place, but I was going for clarity more than optimization. In Ruby, I can do something like this instead, which feels much more elegant: parameterArray = [];parameterArray << "elementName" if condition;parameterArray << "anotherElementName" if anotherCondition;parameterString = parameterArray.join(","); But since Java lacks a join command, I couldn't figure out anything equivalent. So, what's the best way to do this in Java? | Pre Java 8: Apache's commons lang is your friend here - it provides a join method very similar to the one you refer to in Ruby: StringUtils.join(java.lang.Iterable,char) Java 8: Java 8 provides joining out of the box via StringJoiner and String.join() . The snippets below show how you can use them: StringJoiner StringJoiner joiner = new StringJoiner(",");joiner.add("01").add("02").add("03");String joinedString = joiner.toString(); // "01,02,03" String.join(CharSequence delimiter, CharSequence... elements)) String joinedString = String.join(" - ", "04", "05", "06"); // "04 - 05 - 06" String.join(CharSequence delimiter, Iterable<? extends CharSequence> elements) List<String> strings = new LinkedList<>();strings.add("Java");strings.add("is");strings.add("cool");String message = String.join(" ", strings);//message returned is: "Java is cool" | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/63150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2041950/"
]
} |
63,166 | I once had the task of determining the following performance parameters from inside a running application: Total virtual memory available Virtual memory currently used Virtual memory currently used by my process Total RAM available RAM currently used RAM currently used by my process % CPU currently used % CPU currently used by my process The code had to run on Windows and Linux. Even though this seems to be a standard task, finding the necessary information in the manuals (WIN32 API, GNU docs) as well as on the Internet took me several days, because there's so much incomplete/incorrect/outdated information on this topic to be found out there. In order to save others from going through the same trouble, I thought it would be a good idea to collect all the scattered information plus what I found by trial and error here in one place. | Windows Some of the above values are easily available from the appropriate Win32 API, I just list them here for completeness. Others, however, need to be obtained from the Performance Data Helper library (PDH), which is a bit "unintuitive" and takes a lot of painful trial and error to get to work. (At least it took me quite a while, perhaps I've been only a bit stupid...) Note: for clarity all error checking has been omitted from the following code. Do check the return codes...! Total Virtual Memory: #include "windows.h"MEMORYSTATUSEX memInfo;memInfo.dwLength = sizeof(MEMORYSTATUSEX);GlobalMemoryStatusEx(&memInfo);DWORDLONG totalVirtualMem = memInfo.ullTotalPageFile; Note: The name "TotalPageFile" is a bit misleading here. In reality this parameter gives the "Virtual Memory Size", which is size of swap file plus installed RAM. Virtual Memory currently used: Same code as in "Total Virtual Memory" and then DWORDLONG virtualMemUsed = memInfo.ullTotalPageFile - memInfo.ullAvailPageFile; Virtual Memory currently used by current process: #include "windows.h"#include "psapi.h"PROCESS_MEMORY_COUNTERS_EX pmc;GetProcessMemoryInfo(GetCurrentProcess(), (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc));SIZE_T virtualMemUsedByMe = pmc.PrivateUsage; Total Physical Memory (RAM): Same code as in "Total Virtual Memory" and then DWORDLONG totalPhysMem = memInfo.ullTotalPhys; Physical Memory currently used: Same code as in "Total Virtual Memory" and then DWORDLONG physMemUsed = memInfo.ullTotalPhys - memInfo.ullAvailPhys; Physical Memory currently used by current process: Same code as in "Virtual Memory currently used by current process" and then SIZE_T physMemUsedByMe = pmc.WorkingSetSize; CPU currently used: #include "TCHAR.h"#include "pdh.h"static PDH_HQUERY cpuQuery;static PDH_HCOUNTER cpuTotal;void init(){ PdhOpenQuery(NULL, NULL, &cpuQuery); // You can also use L"\\Processor(*)\\% Processor Time" and get individual CPU values with PdhGetFormattedCounterArray() PdhAddEnglishCounter(cpuQuery, L"\\Processor(_Total)\\% Processor Time", NULL, &cpuTotal); PdhCollectQueryData(cpuQuery);}double getCurrentValue(){ PDH_FMT_COUNTERVALUE counterVal; PdhCollectQueryData(cpuQuery); PdhGetFormattedCounterValue(cpuTotal, PDH_FMT_DOUBLE, NULL, &counterVal); return counterVal.doubleValue;} CPU currently used by current process: #include "windows.h"static ULARGE_INTEGER lastCPU, lastSysCPU, lastUserCPU;static int numProcessors;static HANDLE self;void init(){ SYSTEM_INFO sysInfo; FILETIME ftime, fsys, fuser; GetSystemInfo(&sysInfo); numProcessors = sysInfo.dwNumberOfProcessors; GetSystemTimeAsFileTime(&ftime); memcpy(&lastCPU, &ftime, sizeof(FILETIME)); self = GetCurrentProcess(); 
GetProcessTimes(self, &ftime, &ftime, &fsys, &fuser); memcpy(&lastSysCPU, &fsys, sizeof(FILETIME)); memcpy(&lastUserCPU, &fuser, sizeof(FILETIME));}double getCurrentValue(){ FILETIME ftime, fsys, fuser; ULARGE_INTEGER now, sys, user; double percent; GetSystemTimeAsFileTime(&ftime); memcpy(&now, &ftime, sizeof(FILETIME)); GetProcessTimes(self, &ftime, &ftime, &fsys, &fuser); memcpy(&sys, &fsys, sizeof(FILETIME)); memcpy(&user, &fuser, sizeof(FILETIME)); percent = (sys.QuadPart - lastSysCPU.QuadPart) + (user.QuadPart - lastUserCPU.QuadPart); percent /= (now.QuadPart - lastCPU.QuadPart); percent /= numProcessors; lastCPU = now; lastUserCPU = user; lastSysCPU = sys; return percent * 100;} Linux On Linux the choice that seemed obvious at first was to use the POSIX APIs like getrusage() etc. I spent some time trying to get this to work, but never got meaningful values. When I finally checked the kernel sources themselves, I found out that apparently these APIs are not yet completely implemented as of Linux kernel 2.6!? In the end I got all values via a combination of reading the pseudo-filesystem /proc and kernel calls. Total Virtual Memory: #include "sys/types.h"#include "sys/sysinfo.h"struct sysinfo memInfo;sysinfo (&memInfo);long long totalVirtualMem = memInfo.totalram;//Add other values in next statement to avoid int overflow on right hand side...totalVirtualMem += memInfo.totalswap;totalVirtualMem *= memInfo.mem_unit; Virtual Memory currently used: Same code as in "Total Virtual Memory" and then long long virtualMemUsed = memInfo.totalram - memInfo.freeram;//Add other values in next statement to avoid int overflow on right hand side...virtualMemUsed += memInfo.totalswap - memInfo.freeswap;virtualMemUsed *= memInfo.mem_unit; Virtual Memory currently used by current process: #include "stdlib.h"#include "stdio.h"#include "string.h"int parseLine(char* line){ // This assumes that a digit will be found and the line ends in " Kb". int i = strlen(line); const char* p = line; while (*p <'0' || *p > '9') p++; line[i-3] = '\0'; i = atoi(p); return i;}int getValue(){ //Note: this value is in KB! FILE* file = fopen("/proc/self/status", "r"); int result = -1; char line[128]; while (fgets(line, 128, file) != NULL){ if (strncmp(line, "VmSize:", 7) == 0){ result = parseLine(line); break; } } fclose(file); return result;} Total Physical Memory (RAM): Same code as in "Total Virtual Memory" and then long long totalPhysMem = memInfo.totalram;//Multiply in next statement to avoid int overflow on right hand side...totalPhysMem *= memInfo.mem_unit; Physical Memory currently used: Same code as in "Total Virtual Memory" and then long long physMemUsed = memInfo.totalram - memInfo.freeram;//Multiply in next statement to avoid int overflow on right hand side...physMemUsed *= memInfo.mem_unit; Physical Memory currently used by current process: Change getValue() in "Virtual Memory currently used by current process" as follows: int getValue(){ //Note: this value is in KB! 
FILE* file = fopen("/proc/self/status", "r"); int result = -1; char line[128]; while (fgets(line, 128, file) != NULL){ if (strncmp(line, "VmRSS:", 6) == 0){ result = parseLine(line); break; } } fclose(file); return result;} CPU currently used: #include "stdlib.h"#include "stdio.h"#include "string.h"static unsigned long long lastTotalUser, lastTotalUserLow, lastTotalSys, lastTotalIdle;void init(){ FILE* file = fopen("/proc/stat", "r"); fscanf(file, "cpu %llu %llu %llu %llu", &lastTotalUser, &lastTotalUserLow, &lastTotalSys, &lastTotalIdle); fclose(file);}double getCurrentValue(){ double percent; FILE* file; unsigned long long totalUser, totalUserLow, totalSys, totalIdle, total; file = fopen("/proc/stat", "r"); fscanf(file, "cpu %llu %llu %llu %llu", &totalUser, &totalUserLow, &totalSys, &totalIdle); fclose(file); if (totalUser < lastTotalUser || totalUserLow < lastTotalUserLow || totalSys < lastTotalSys || totalIdle < lastTotalIdle){ //Overflow detection. Just skip this value. percent = -1.0; } else{ total = (totalUser - lastTotalUser) + (totalUserLow - lastTotalUserLow) + (totalSys - lastTotalSys); percent = total; total += (totalIdle - lastTotalIdle); percent /= total; percent *= 100; } lastTotalUser = totalUser; lastTotalUserLow = totalUserLow; lastTotalSys = totalSys; lastTotalIdle = totalIdle; return percent;} CPU currently used by current process: #include "stdlib.h"#include "stdio.h"#include "string.h"#include "sys/times.h"#include "sys/vtimes.h"static clock_t lastCPU, lastSysCPU, lastUserCPU;static int numProcessors;void init(){ FILE* file; struct tms timeSample; char line[128]; lastCPU = times(&timeSample); lastSysCPU = timeSample.tms_stime; lastUserCPU = timeSample.tms_utime; file = fopen("/proc/cpuinfo", "r"); numProcessors = 0; while(fgets(line, 128, file) != NULL){ if (strncmp(line, "processor", 9) == 0) numProcessors++; } fclose(file);}double getCurrentValue(){ struct tms timeSample; clock_t now; double percent; now = times(&timeSample); if (now <= lastCPU || timeSample.tms_stime < lastSysCPU || timeSample.tms_utime < lastUserCPU){ //Overflow detection. Just skip this value. percent = -1.0; } else{ percent = (timeSample.tms_stime - lastSysCPU) + (timeSample.tms_utime - lastUserCPU); percent /= (now - lastCPU); percent /= numProcessors; percent *= 100; } lastCPU = now; lastSysCPU = timeSample.tms_stime; lastUserCPU = timeSample.tms_utime; return percent;} TODO: Other Platforms I would assume, that some of the Linux code also works for the Unixes, except for the parts that read the /proc pseudo-filesystem. Perhaps on Unix these parts can be replaced by getrusage() and similar functions? | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/63166",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7381/"
]
} |
63,181 | In Flex, I have an xml document such as the following: var xml:XML = <root><node>value1</node><node>value2</node><node>value3</node></root> At runtime, I want to create a TextInput control for each node under root, and have the values bound to the values in the XML. As far as I can tell I can't use BindingUtils to bind to e4x nodes at runtime (please tell me if I'm wrong here!), so I'm trying to do this by hand: for each (var node:XML in xml.node){ var textInput:TextInput = new TextInput(); var handler:Function = function(event:Event):void { node.setChildren(event.target.text); }; textInput.text = node.text(); textInput.addEventListener(Event.CHANGE, handler); this.addChild(pileHeightEditor);} My problem is that when the user edits one of the TextInputs, the node getting assigned to is always the last one encountered in the for loop. I am used to this pattern from C#, where each time an anonymous function is created, a "snapshot" of the values of the used values is taken, so "node" would be different in each handler function. How do I "take a snapshot" of the current value of node to use in the handler? Or should I be using a different pattern in Flex? | ActionScript closures capture variables, not values. Variables in ActionScript 3 have function-level scope, so there is only one node variable for the whole loop; every handler you create closes over that same variable, and by the time a CHANGE event fires it holds the last node from the loop. The usual workaround is to build the handler through a helper function, so each handler closes over its own parameter instead: private function makeHandler(node:XML):Function { return function(event:Event):void { node.setChildren(event.target.text); }; } and then inside the loop: textInput.addEventListener(Event.CHANGE, makeHandler(node)); Each call to makeHandler creates a fresh activation scope holding that particular node, which gives you the per-iteration "snapshot" you were expecting. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/63181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6448/"
]
} |
63,257 | I'm working on a tool that will generate the source code for an interface and a couple classes implementing that interface. My output isn't particularly complicated, so it's not going to be hard to make the output conform to our normal code formatting standards. But this got me thinking: how human-readable does auto-generated code need to be? When should extra effort be expended to make sure the generated code is easily read and understood by a human? In my case, the classes I'm generating are essentially just containers for some data related to another part of the build with methods to get the data. No one should ever need to look at the code for the classes themselves, they just need to call the various getters the classes provide. So, it's probably not too important if the code is "clean", well formatted and easily read by a human. However, what happens if you're generating code that has more than a small amount of simple logic in it? | I think it's just as important for generated code to be readable and follow normal coding styles. At some point, someone is either going to need to debug the code or otherwise see what is happening "behind the scenes". | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1409/"
]
} |
63,291 | How do I select all the columns in a table that only contain NULL values for all the rows? I'm using MS SQL Server 2005 . I'm trying to find out which columns are not used in the table so I can delete them. | Here is the sql 2005 or later version: Replace ADDR_Address with your tablename. declare @col varchar(255), @cmd varchar(max)DECLARE getinfo cursor forSELECT c.name FROM sys.tables t JOIN sys.columns c ON t.Object_ID = c.Object_IDWHERE t.Name = 'ADDR_Address'OPEN getinfoFETCH NEXT FROM getinfo into @colWHILE @@FETCH_STATUS = 0BEGIN SELECT @cmd = 'IF NOT EXISTS (SELECT top 1 * FROM ADDR_Address WHERE [' + @col + '] IS NOT NULL) BEGIN print ''' + @col + ''' end' EXEC(@cmd) FETCH NEXT FROM getinfo into @colENDCLOSE getinfoDEALLOCATE getinfo | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/63291",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299/"
]
} |
63,303 | I have a System.Diagnostics.Process object in a program targeted at the .Net framework 3.5 I have redirected both StandardOutput and StandardError pipes and I'm receiving data from them asynchronously. I've also set an event handler for the Exited event. Once I call Process.Start() I want to go off and do other work whilst I wait for events to be raised. Unfortunately it appears that, for a process which returns a large amount of information, the Exited event is fired before the last OutputDataReceived event. How do I know when the last OutputDataReceived has been received? Ideally I would like the Exited event to be the last event I receive. Here is an example program: using System;using System.Diagnostics;using System.Threading;namespace ConsoleApplication1{ class Program { static void Main(string[] args) { string command = "output.exe"; string arguments = " whatever"; ProcessStartInfo info = new ProcessStartInfo(command, arguments); // Redirect the standard output of the process. info.RedirectStandardOutput = true; info.RedirectStandardError = true; // Set UseShellExecute to false for redirection info.UseShellExecute = false; Process proc = new Process(); proc.StartInfo = info; proc.EnableRaisingEvents = true; // Set our event handler to asynchronously read the sort output. proc.OutputDataReceived += new DataReceivedEventHandler(proc_OutputDataReceived); proc.ErrorDataReceived += new DataReceivedEventHandler(proc_ErrorDataReceived); proc.Exited += new EventHandler(proc_Exited); proc.Start(); // Start the asynchronous read of the sort output stream. Note this line! proc.BeginOutputReadLine(); proc.BeginErrorReadLine(); proc.WaitForExit(); Console.WriteLine("Exited (Main)"); } static void proc_Exited(object sender, EventArgs e) { Console.WriteLine("Exited (Event)"); } static void proc_ErrorDataReceived(object sender, DataReceivedEventArgs e) { Console.WriteLine("Error: {0}", e.Data); } static void proc_OutputDataReceived(object sender, DataReceivedEventArgs e) { Console.WriteLine("Output data: {0}", e.Data); } }} When running this program you will notice that "Exited (Event)" appears in a completely variable location within the output. You may need to run it a few times and, obviously, you will need to replace "output.exe" with a program of your choice that produces a suitably large amount of output. So, the question again: How do I know when the last OutputDataReceived has been received? Ideally I would like the Exited event to be the last event I receive. | The answer to this is that e.Data will be set to null : static void proc_ErrorDataReceived(object sender, DataReceivedEventArgs e){ if( e.Data == null ) _exited.Set();} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
63,408 | When I load my iPhone app it always loads a black screen first then pops up the main window. This happens even with a simple empty app with a single window loaded. I've noticed that when loading, most apps zoom in on the main window (or scale it to fit the screen, however you want to think about it) and then load the content of the screen, with no black screen (see the Contacts app for an example). How do I achieve this effect? | Add a Default.png to your project. This should be the image you want shown instead of the black launch screen. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63408",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1967/"
]
} |
63,421 | Currently my workflow with Emacs when I am coding in C or C++ involves three windows. The largest on the right contains the file I am working with. The left is split into two, the bottom being a shell which I use to type in compile or make commands, and the top is often some sort of documentation or README file that I want to consult while I am working. Now I know there are some pretty expert Emacs users out there, and I am curious what other Emacs functionality is useful if the intention is to use it as a complete IDE. Specifically, most IDEs usually fulfill these functions in some form or another: Source code editor Compiler Debugging Documentation Lookup Version Control OO features like class lookup and object inspector For a few of these, it's pretty obvious how Emacs can fit these functions, but what about the rest? Also, if a specific language must be focused on, I'd say it should be C++. Edit: One user pointed out that I should have been more specific when I said 'what about the rest'. Mostly I was curious about efficient version control, as well as documentation lookup. For example, in SLIME it is fairly easy to do a quick hyperspec lookup on a Lisp function. Is there a quick way to look up something in C++ STL documentation (if I forgot the exact syntax of hash_map, for example)? | You'll have to be specific as to what you mean by "the rest". Except for the object inspector (that I'm aware of), emacs does all the above quite easily: editor (obvious) compiler - just run M-x compile and enter your compile command. From there on, you can just M-x compile and use the default. Emacs will capture C/C++ compiler errors (works best with GCC) and help you navigate to lines with warnings or errors. Debugging - similarly, when you want to debug, type M-x gdb and it will create a gdb buffer with special bindings Documentation Lookup - emacs has excellent CScope bindings for code navigation. For other documentation: Emacs also has a manpage reader, and for everything else, there's the web and books. version control - there are lots of Emacs bindings for various VCS backends (CVS, SCCS, RCS, SVN, GIT all come to mind) Edit: I realize my answer about documentation lookup really pertained to code navigation. Here's some more to-the-point info: Looking up manpages, info manuals, and Elisp documentation from within emacs Looking up Python documentation from within Emacs. Google searching will no doubt reveal further examples. As the second link shows, looking up functions (and whatever) in other documentation can be done, even if not supported out of the box. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/63421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7545/"
]
} |
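For the compile/debug workflow described in the answer above, a minimal sketch of the relevant Emacs configuration might look like the following; the key choices and the default make command are my own assumptions, not part of the original answer.

(setq compile-command "make -k ")          ; default command offered by M-x compile
(global-set-key (kbd "<f5>") 'compile)     ; rebuild without leaving the source buffer
(global-set-key (kbd "<f6>") 'gdb)         ; start a debugging session
(global-set-key (kbd "<f4>") 'next-error)  ; jump to the next compiler warning/error

After compiling once, M-x recompile (or the binding above) reuses the saved command, and next-error walks through the captured warnings and errors.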
63,439 | How can I programmatically cause a control's tooltip to show in a Winforms app without needing the mouse to hover over the control? (P/Invoke is ok if necessary). | If you are using the Tooltip control on the form, you can do it like this: ToolTip1.Show("Text to display", Control) The MSDN documentation for the ToolTip control's "Show" method has all the different variations on this and how to use them. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2187/"
]
} |
63,447 | How do I perform an IF...THEN in an SQL SELECT statement? For example: SELECT IF(Obsolete = 'N' OR InStock = 'Y' ? 1 : 0) AS Saleable, * FROM Product | The CASE statement is the closest to IF in SQL and is supported on all versions of SQL Server. SELECT CAST( CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END AS bit) as Saleable, *FROM Product You only need to use the CAST operator if you want the result as a Boolean value. If you are happy with an int , this works: SELECT CASE WHEN Obsolete = 'N' or InStock = 'Y' THEN 1 ELSE 0 END as Saleable, *FROM Product CASE statements can be embedded in other CASE statements and even included in aggregates. SQL Server Denali (SQL Server 2012) adds the IIF statement which is also available in access (pointed out by Martin Smith ): SELECT IIF(Obsolete = 'N' or InStock = 'Y', 1, 0) as Saleable, * FROM Product | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/63447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6522/"
]
} |
63,463 | Let's say I have a web page that currently accepts a single ID value via a url parameter: http://example.com/mypage.aspx?ID=1234 I want to change it to accept a list of ids, like this: http://example.com/mypage.aspx?IDs=1234,4321,6789 So it's available to my code as a string via context.Request.QueryString["IDs"]. What's the best way to turn that string value into a List<int>? Edit: I know how to do .split() on a comma to get a list of strings, but I ask because I don't know how to easily convert that string list to an int list. This is still in .Net 2.0, so no lambdas. | No offense to those who provided clear answers, but many people seem to be answering your question instead of addressing your problem. You want multiple IDs, so you think you could this this: http://example.com/mypage.aspx?IDs=1234,4321,6789 The problem is that this is a non-robust solution. In the future, if you want multiple values, what do you do if they have commas? A better solution (and this is perfectly valid in a query string), is to use multiple parameters with the same name: http://example.com/mypage.aspx?ID=1234;ID=4321;ID=6789 Then, whatever query string parser you use should be able to return a list of IDs. If it can't handle this (and also handle semi-colons instead of ampersands), then it's broken. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3043/"
]
} |
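A minimal .NET 2.0-style sketch (no lambdas) of the repeated-parameter approach suggested in that answer; the helper name is hypothetical, and the commented-out line shows how the comma-separated form from the question could be handled instead.

using System.Collections.Generic;
using System.Collections.Specialized;

static List<int> ParseIds(NameValueCollection queryString)
{
    List<int> ids = new List<int>();
    string[] raw = queryString.GetValues("ID");               // ?ID=1234&ID=4321&ID=6789
    // string[] raw = (queryString["IDs"] ?? "").Split(',');  // ?IDs=1234,4321,6789
    if (raw == null) return ids;
    foreach (string s in raw)
    {
        int value;
        if (int.TryParse(s.Trim(), out value))                // quietly skip malformed entries
            ids.Add(value);
    }
    return ids;
}

// Usage from a page or handler:
// List<int> ids = ParseIds(context.Request.QueryString);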
63,494 | I discovered template metaprogramming more than 5 years ago and got a huge kick out of reading Modern C++ Design but I never found an opportunity to use it in real life. Have you ever used this technique in real code? Contributors to Boost need not apply ;o) | I once used template metaprogramming in C++ to implement a technique called "symbolic perturbation" for dealing with degenerate input in geometric algorithms. By representing arithmetic expressions as nested templates (i.e. basically by writing out the parse trees by hand) I was able to hand off all the expression analysis to the template processor. Doing this kind of thing with templates is more efficient than, say, writing expression trees using objects and doing the analysis at runtime. It's faster because the modified (perturbed) expression tree is then available to the optimizer at the same level as the rest of your code, so you get the full benefits of optimization, both within your expressions but also (where possible) between your expressions and the surrounding code. Of course you could accomplish the same thing by implementing a small DSL (domain specific language) for your expressions and then pasting the translated C++ code into your regular program. That would get you all the same optimization benefits and also be more legible -- but the tradeoff is that you have to maintain a parser. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63494",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848/"
]
} |
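To make the "expressions as nested templates" idea concrete, here is a tiny hypothetical C++ sketch (not the answerer's actual code, and it needs C++11 for the brace initialization): the shape of the expression tree lives entirely in the type, so the optimizer sees it with no runtime tree walking or virtual calls.

struct Var { double v; double eval() const { return v; } };

template <typename L, typename R>
struct Add {
    L lhs; R rhs;
    double eval() const { return lhs.eval() + rhs.eval(); }  // trivially inlined
};

// The type Add<Var, Add<Var, Var>> is itself the parse tree for a + (b + c).
Add<Var, Add<Var, Var>> expr{ {1.0}, { {2.0}, {3.0} } };
double result = expr.eval();  // 6.0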
63,646 | I have a very simple WPF application in which I am using data binding to allow editing of some custom CLR objects. I am now wanting to put some input validation in when the user clicks save. However, all the WPF books I have read don't really devote any space to this issue. I see that you can create custom ValidationRules, but I am wondering if this would be overkill for my needs. So my question is this: is there a good sample application or article somewhere that demonstrates best practice for validating user input in WPF? | I think the new preferred way might be to use IDataErrorInfo Read more here | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/63646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7532/"
]
} |
63,671 | I seem to remember reading something about how it is bad for structs to implement interfaces in CLR via C#, but I can't seem to find anything about it. Is it bad? Are there unintended consequences of doing so? public interface Foo { Bar GetBar(); }public struct Fubar : Foo { public Bar GetBar() { return new Bar(); } } | Since no one else explicitly provided this answer I will add the following: Implementing an interface on a struct has no negative consequences whatsoever. Any variable of the interface type used to hold a struct will result in a boxed value of that struct being used. If the struct is immutable (a good thing) then this is at worst a performance issue unless you are: using the resulting object for locking purposes (an immensely bad idea any way) using reference equality semantics and expecting it to work for two boxed values from the same struct. Both of these would be unlikely, instead you are likely to be doing one of the following: Generics Perhaps many reasonable reasons for structs implementing interfaces is so that they can be used within a generic context with constraints . When used in this fashion the variable like so: class Foo<T> : IEquatable<Foo<T>> where T : IEquatable<T>{ private readonly T a; public bool Equals(Foo<T> other) { return this.a.Equals(other.a); }} Enable the use of the struct as a type parameter so long as no other constraint like new() or class is used. Allow the avoidance of boxing on structs used in this way. Then this.a is NOT an interface reference thus it does not cause a box of whatever is placed into it. Further when the c# compiler compiles the generic classes and needs to insert invocations of the instance methods defined on instances of the Type parameter T it can use the constrained opcode: If thisType is a value type and thisType implements method then ptr is passed unmodified as the 'this' pointer to a call method instruction, for the implementation of method by thisType. This avoids the boxing and since the value type is implementing the interface is must implement the method, thus no boxing will occur. In the above example the Equals() invocation is done with no box on this.a 1 . Low friction APIs Most structs should have primitive-like semantics where bitwise identical values are considered equal 2 . The runtime will supply such behaviour in the implicit Equals() but this can be slow. Also this implicit equality is not exposed as an implementation of IEquatable<T> and thus prevents structs being used easily as keys for Dictionaries unless they explicitly implement it themselves. It is therefore common for many public struct types to declare that they implement IEquatable<T> (where T is them self) to make this easier and better performing as well as consistent with the behaviour of many existing value types within the CLR BCL. All the primitives in the BCL implement at a minimum: IComparable IConvertible IComparable<T> IEquatable<T> (And thus IEquatable ) Many also implement IFormattable , further many of the System defined value types like DateTime, TimeSpan and Guid implement many or all of these as well. If you are implementing a similarly 'widely useful' type like a complex number struct or some fixed width textual values then implementing many of these common interfaces (correctly) will make your struct more useful and usable. 
Exclusions Obviously if the interface strongly implies mutability (such as ICollection ) then implementing it is a bad idea as it would mean tat you either made the struct mutable (leading to the sorts of errors described already where the modifications occur on the boxed value rather than the original) or you confuse users by ignoring the implications of the methods like Add() or throwing exceptions. Many interfaces do NOT imply mutability (such as IFormattable ) and serve as the idiomatic way to expose certain functionality in a consistent fashion. Often the user of the struct will not care about any boxing overhead for such behaviour. Summary When done sensibly, on immutable value types, implementation of useful interfaces is a good idea Notes: 1: Note that the compiler may use this when invoking virtual methods on variables which are known to be of a specific struct type but in which it is required to invoke a virtual method. For example: List<int> l = new List<int>();foreach(var x in l) ;//no-op The enumerator returned by the List is a struct, an optimization to avoid an allocation when enumerating the list (With some interesting consequences ). However the semantics of foreach specify that if the enumerator implements IDisposable then Dispose() will be called once the iteration is completed. Obviously having this occur through a boxed call would eliminate any benefit of the enumerator being a struct (in fact it would be worse). Worse, if dispose call modifies the state of the enumerator in some way then this would happen on the boxed instance and many subtle bugs might be introduced in complex cases. Therefore the IL emitted in this sort of situation is: IL_0001: newobj System.Collections.Generic.List..ctorIL_0006: stloc.0 IL_0007: nop IL_0008: ldloc.0 IL_0009: callvirt System.Collections.Generic.List.GetEnumeratorIL_000E: stloc.2 IL_000F: br.s IL_0019IL_0011: ldloca.s 02 IL_0013: call System.Collections.Generic.List.get_CurrentIL_0018: stloc.1 IL_0019: ldloca.s 02 IL_001B: call System.Collections.Generic.List.MoveNextIL_0020: stloc.3 IL_0021: ldloc.3 IL_0022: brtrue.s IL_0011IL_0024: leave.s IL_0035IL_0026: ldloca.s 02 IL_0028: constrained. System.Collections.Generic.List.EnumeratorIL_002E: callvirt System.IDisposable.DisposeIL_0033: nop IL_0034: endfinally Thus the implementation of IDisposable does not cause any performance issues and the (regrettable) mutable aspect of the enumerator is preserved should the Dispose method actually do anything! 2: double and float are exceptions to this rule where NaN values are not considered equal. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/63671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
63,687 | I would like to save the program's settings every time the user exits the program. So I need a way to call a function when the user quits the program. How do I do that? I am using Java 1.5. | You can add a shutdown hook to your application by doing the following: Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() { public void run() { // what you want to do }})); This is basically equivalent to having a try {} finally {} block around your entire program, and it essentially encompasses what's in the finally block. Please note the caveats though! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
63,694 | Is there any feasible way of using generics to create a Math library that does not depend on the base type chosen to store data? In other words, let's assume I want to write a Fraction class. The fraction can be represented by two ints or two doubles or whatnot. The important thing is that the basic four arithmetic operations are well defined. So, I would like to be able to write Fraction<int> frac = new Fraction<int>(1,2) and/or Fraction<double> frac = new Fraction<double>(0.1, 1.0) . Unfortunately there is no interface representing the four basic operations (+,-,*,/). Has anybody found a workable, feasible way of implementing this? | Here is a way to abstract out the operators that is relatively painless. abstract class MathProvider<T> { public abstract T Divide(T a, T b); public abstract T Multiply(T a, T b); public abstract T Add(T a, T b); public abstract T Negate(T a); public virtual T Subtract(T a, T b) { return Add(a, Negate(b)); } } class DoubleMathProvider : MathProvider<double> { public override double Divide(double a, double b) { return a / b; } public override double Multiply(double a, double b) { return a * b; } public override double Add(double a, double b) { return a + b; } public override double Negate(double a) { return -a; } } class IntMathProvider : MathProvider<int> { public override int Divide(int a, int b) { return a / b; } public override int Multiply(int a, int b) { return a * b; } public override int Add(int a, int b) { return a + b; } public override int Negate(int a) { return -a; } } class Fraction<T> { static MathProvider<T> _math; // Notice this is a type constructor. It gets run the first time a // variable of a specific type is declared for use. // Having _math static reduces overhead. static Fraction() { // This part of the code might be cleaner by once // using reflection and finding all the implementors of // MathProvider and assigning the instance by the one that // matches T. if (typeof(T) == typeof(double)) _math = new DoubleMathProvider() as MathProvider<T>; else if (typeof(T) == typeof(int)) _math = new IntMathProvider() as MathProvider<T>; // ... assign other options here. if (_math == null) throw new InvalidOperationException( "Type " + typeof(T).ToString() + " is not supported by Fraction."); } // Immutable impementations are better. public T Numerator { get; private set; } public T Denominator { get; private set; } public Fraction(T numerator, T denominator) { // We would want this to be reduced to simpilest terms. // For that we would need GCD, abs, and remainder operations // defined for each math provider. Numerator = numerator; Denominator = denominator; } public static Fraction<T> operator +(Fraction<T> a, Fraction<T> b) { return new Fraction<T>( _math.Add( _math.Multiply(a.Numerator, b.Denominator), _math.Multiply(b.Numerator, a.Denominator)), _math.Multiply(a.Denominator, b.Denominator)); } public static Fraction<T> operator -(Fraction<T> a, Fraction<T> b) { return new Fraction<T>( _math.Subtract( _math.Multiply(a.Numerator, b.Denominator), _math.Multiply(b.Numerator, a.Denominator)), _math.Multiply(a.Denominator, b.Denominator)); } public static Fraction<T> operator /(Fraction<T> a, Fraction<T> b) { return new Fraction<T>( _math.Multiply(a.Numerator, b.Denominator), _math.Multiply(a.Denominator, b.Numerator)); } // ... other operators would follow. } If you fail to implement a type that you use, you will get a failure at runtime instead of at compile time (that is bad). 
The definition of the MathProvider<T> implementations is always going to be the same (also bad). I would suggest that you just avoid doing this in C# and use F# or some other language better suited to this level of abstraction. Edit: Fixed definitions of add and subtract for Fraction<T>. Another interesting and simple thing to do is implement a MathProvider that operates on an abstract syntax tree. This idea immediately points to doing things like automatic differentiation: http://conal.net/papers/beautiful-differentiation/ | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63694",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7028/"
]
} |
63,741 | Why does the default IntelliJ class javadoc comment use non-standard syntax? Instead of creating a line with "User: jstauffer" it could create a line with "@author jstauffer". The other lines that it creates (Date and Time) probably don't have javadoc syntax to use but why not use the javadoc syntax when available? For reference here is an example: /** * Created by IntelliJ IDEA. * User: jstauffer * Date: Nov 13, 2007 * Time: 11:15:10 AM * To change this template use File | Settings | File Templates. */ | I'm not sure why Idea doesn't use the @author tag by default. But you can change this behavior by going to File -> Settings -> File Templates and editing the File Header entry in the Includes tab. As of IDEA 14 it's: File -> Settings -> Editor -> File and Code Templates -> Includes -> File Header | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6770/"
]
} |
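For example, the File Header include can be edited to emit standard javadoc tags instead; a sketch using IDEA's predefined template variables (${USER} and ${DATE} are the same variables the default template draws on):

/**
 * @author ${USER}
 * Created on ${DATE}
 */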
63,749 | The SQL Server Express 2008 setup allows you to assign a different user account for each service. For a development environment, would you use a domain user, local user, NT Authority\NETWORK SERVICE, NT Authority\Local System or some other account and why? | Local System is not recommended; it is an administrator equivalent account and thus can lead to questionable coding that takes advantage of administrator privileges which would not be allowed in a production system since security conscious Admins/DBA's really don't like to run services as admin. Whether or not the server instance will need to access other domain resources should determine which type of low privilege account it should run under. If it does not need to access any (non-anonymous) domain resources then I normally create a unique local, low privilege account for it to run under in order to gain the additional security benefit of not having multiple services running in the same identity context. Be aware that the Local Service account is not supported for the SQL Server or SQL Server Agent services. If it does need to access non-anonymous domain resources then you have three options: Run as Network Service which is also a low privilege account but one that retains the computer's network credentials. Run under a Local Service Account Run under a custom domain account with low local privileges. One advantage to running under the developer's account is that it is easier to attach debuggers to processes in your own identity without compromising security so debugging is easier (since non-Admin accounts do not have the privilege to attach a debugger to another identity's process by default). A disadvantage to using another domain account is the overhead of managing those accounts, especially since each service for each developer should ideally have unique credentials so you do not have any leaks if a developer were to leave. Most of what I tend to do does not require the service to access domain resources so I tend to use unique local low privilege accounts that I manage. I also run exclusively as a non-admin user (and have done so under XP SP2, Server 2003, Vista and Server 2008 with no major problems) so when I have cases where I need the service to access domain resources then I have no worries about using my own domain credentials (plus that way I don't have to bother the network admins about creating/maintaining a bunch of non-production domain identities). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/966/"
]
} |
63,784 | I want to move various parts of my app into simple scripts, to allow people that do not have a strong knowledge of c++ to be able to edit and implement various features. Because it's a real time app, I need to have some kind of multitasking for these scripts. Ideally I want it so that the c++ app calls a script function which then continues running (under the c++ thread) until either a pause point (Wait(x)), or it returns. In the case of it waiting the state needs to be saved ready for the script to be restarted the next time the app loops after the duration has expired. The scripts also need to be able to call c++ class methods, ideally using the c++ classes rather than plain wrapper functions around c++ classes. I don't want to spend a massive amount of time implementing this, so using an existing scripting language is preferred to writing my own. I heard that Python and Lua can be integrated into a c++ app, but I do not know how to do this to achieve my goals. The scripts must be able to call c++ functions The scripts must be able to "pause" when certain functions are called (eg. Wait), and be restarted again by the c++ thread Needs to be fast -- this is for a real time app and there could potentially be a lot of scripts running. I can probably roll the multitasking code fairly easily, provided the scripts can be saved and restarted (possibly by a different thread to the original). | You can use either Lua or Python. Lua is more "lightweight" than python. It's got a smaller memory footprint than python does and in our experience was easier to integrate (people's mileage on this point might vary). It can support a bunch of scripts running simultaneously. Lua, at least, supports stopping/starting threads in the manner you desire. Boost.python is nice, but in my (limited) experience, it was difficult for us to get compiling for our different environments and was pretty heavyweight. It has (in my opinion) the disadvantage of requiring Boost. For some, that might not be a problem, but if you don't need Boost (or are not using it), you are introducing a ton of code to get Boost.python working. YMMV. We have built Lua into apps on multiple platforms (win32, Xbox360 and PS3). I believe that it will work on x64. The suggestion to use Luabind is good. We wound up writing our own interface between the two and while not too complicated, having that glue code will save you a lot of time and perhaps aggravation. With either solution though, debugging can be a pain. We currently have no good solution for debugging Lua scripts that are embedded into our app. Since we haven't used python in our apps I can't speak to what tools might be available there, but a couple of years ago the landscape was roughly the same -- poor debugging. Having scripting to extend functionality is nice, but bugs in the scripts can cause problems and might be difficult to locate. The Lua code itself is kind of messy to work with if you need to make changes there. We have seen bugs in the Lua codebase itself that were hard to track down. I suspect that Boost::Python might have similar problems. And with any scripting language, it's not necessarily a solution for "non-programmers" to extend functionality. It might seem like it, but you will likely wind up spending a fair amount of time either debugging scripts or even perhaps Lua. That all said, we've been very happy with Lua and have shipped it in two games. We currently have no plans to move away from the language. 
All in all, we've found it better than other alternatives that were available a couple of years ago. Python (and IronPython) are other choices, but based on experience, they seem more heavy handed than Lua. I'd love to hear about other experiences there though. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6266/"
]
} |
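For reference, embedding Lua takes only a handful of calls; this is a hypothetical C sketch against the Lua 5.1 C API (the script name and error handling are placeholders, and older 5.0 releases use lua_open instead of luaL_newstate).

#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

int main(void)
{
    lua_State *L = luaL_newstate();           /* create an interpreter state */
    luaL_openlibs(L);                         /* load the standard Lua libraries */
    if (luaL_dofile(L, "script.lua") != 0)    /* compile and run a script file */
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
    lua_close(L);
    return 0;
}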
63,787 | I talked to a few friends who say that Drupal is amazing, and it is a way better than Joomla. What are the major differences/advantages? | The general consensus is that programmers prefer Drupal whereas mere mortals prefer Joomla. Joomla is praised for having a simpler user interface. (I personally don't agree with that; I think Joomla's UI is pretty painful to use. But then again, I'm looking at it with a programmer's eye.) Drupal, on the other hand, is praised for its high level of extensibility, along with its large library of high-quality (more or less) plug-ins that add features ("modules" in Drupal lingo) and many of which are extensible themselves. Start using Joomla today, and you'll probably end up with a decent but not quite perfect web site tonight. Start using Drupal today, and you'll be able to build exactly the web site you're wishing for - once you've put the time in. If you're considering parlaying your skills into a paid job one day, you should definitely side with Drupal. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/63787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3208/"
]
} |
63,805 | How do I ask PowerShell where something is? For instance, "which notepad" and it returns the directory where the notepad.exe is run from according to the current paths. | The very first alias I made once I started customizing my profile in PowerShell was 'which'. New-Alias which get-command To add this to your profile, type this: "`nNew-Alias which get-command" | add-content $profile The `n at the start of the last line is to ensure it will start as a new line. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/63805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1220/"
]
} |
63,950 | I program with Emacs on Ubuntu (Hardy Heron at the moment), and I like the default text coloration in the Emacs GUI. However, the default text coloration when Emacs is run in the terminal is different and garish. How do I make the colors in the terminal match the colors in the GUI? | You don't have to be stuck to your terminal's default 16 (or fewer) colours. Modern terminals will support 256 colours (which will get you pretty close to your GUI look). Unfortunately, getting your terminal to support 256 colours is the tricky part, and varies from term to term. This page helped me out a lot (but it is out of date; I've definitely gotten 256 colours working in gnome-terminal and xfce4-terminal; but you may have to build them from source.) Once you've got your terminal happily using 256 colours, the magic invocation is setting your terminal type to "xterm-256color" before you invoke emacs, e.g.: env TERM=xterm-256color emacs -nw Or, you can set TERM in your .bashrc file: export TERM=xterm-256color You can check if it's worked in emacs by doing M-x list-colors-display , which will show you either 16, or all 256 glorious colours. If it works, then look at color-theme like someone else suggested. (You'll probably get frustrated at some point; god knows I do every time I try to do something similar. But stick with it; it's worth it.) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/63950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
63,998 | Continuing the "Hidden features of ..." meme, let's share the lesser-known but useful features of Ruby programming language. Try to limit this discussion with core Ruby, without any Ruby on Rails stuff. See also: Hidden features of C# Hidden features of Java Hidden features of JavaScript Hidden features of Ruby on Rails Hidden features of Python (Please, just one hidden feature per answer.) Thank you | Peter Cooper has a good list of Ruby tricks. Perhaps my favorite of his is allowing both single items and collections to be enumerated. (That is, treat a non-collection object as a collection containing just that object.) It looks like this: [*items].each do |item| # ...end | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/63998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7754/"
]
} |
64,003 | I want to put a copyright notice in the footer of a web site, but I think it's incredibly tacky for the year to be outdated. How would I make the year update automatically with PHP 4 or PHP 5 ? | You can use either date or strftime . In this case I'd say it doesn't matter as a year is a year, no matter what (unless there's a locale that formats the year differently?) For example: <?php echo date("Y"); ?> On a side note, when formatting dates in PHP it matters when you want to format your date in a different locale than your default. If so, you have to use setlocale and strftime. According to the php manual on date: To format dates in other languages, you should use the setlocale() and strftime() functions instead of date(). From this point of view, I think it would be best to use strftime as much as possible, if you even have a remote possibility of having to localize your application. If that's not an issue, pick the one you like best. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/64003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1661459/"
]
} |
64,036 | It's a bit difficult to implement a deep object copy function. What steps do you take to ensure the original object and the cloned one share no reference? | A safe way is to serialize the object, then deserialize. This ensures everything is a brand new reference. Here's an article about how to do this efficiently. Caveats: It's possible for classes to override serialization such that new instances are not created, e.g. for singletons. Also this of course doesn't work if your classes aren't Serializable. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/64036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3885/"
]
} |
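A common sketch of the serialize-then-deserialize approach described in that answer, assuming every object in the graph implements Serializable (the class and method names are my own, and exception handling is kept minimal).

import java.io.*;

public final class DeepCopy {
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buffer);
        out.writeObject(original);             // serializes the whole object graph
        out.close();

        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        return (T) in.readObject();            // every reference in the copy is new
    }
}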
64,038 | When I use the default java locale on my linux machine it comes out with the US locale settings. Where do I change this so that it comes out with the correct locale? | With the user.language, user.country and user.variant properties. Example: java -Duser.language=th -Duser.country=TH -Duser.variant=TH SomeClass | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
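If you would rather set it from code than on the command line, the equivalent (with the same example locale) is roughly:

import java.util.Locale;

public class LocaleSetup {
    public static void main(String[] args) {
        Locale.setDefault(new Locale("th", "TH", "TH"));  // language, country, variant
        System.out.println(Locale.getDefault());
    }
}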
64,041 | How do I change font size on the DataGridView? | private void UpdateFont() { //Change cell font foreach(DataGridViewColumn c in dgAssets.Columns) { c.DefaultCellStyle.Font = new Font("Arial", 8.5F, GraphicsUnit.Pixel); } } | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4653/"
]
} |
64,059 | I have some website which requires a logon and shows sensitive information. The person goes to the page, is prompted to log in, then gets to see the information. The person logs out of the site, and is redirected back to the login page. The person then can hit "back" and go right back to the page where the sensitive information is contained. Since the browser just thinks of it as rendered HTML, it shows it to them no problem. Is there a way to prevent that information from being displayed when the person hits the "back" button from the logged out screen? I'm not trying to disable the back button itself, I'm just trying to keep the sensitive information from being displayed again because the person is not logged into the site anymore. For the sake of argument, the above site/scenario is in ASP.NET with Forms Authentication (so when the user goes to the first page, which is the page they want, they're redirected to the logon page - in case that makes a difference). | The short answer is that it cannot be done securely. There are, however, a lot of tricks that can be implemented to make it difficult for users to hit back and get sensitive data displayed. Response.Cache.SetCacheability(HttpCacheability.NoCache);Response.Cache.SetExpires(DateTime.Now.AddSeconds(-1));Response.Cache.SetNoStore();Response.AppendHeader("Pragma", "no-cache"); This will disable caching on the client side; however, this is not supported by all browsers. If you have the option of using AJAX then sensitive data can be retrieved using an UpdatePanel that is updated from client code and therefore it will not be displayed when hitting back unless the client is still logged in. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577/"
]
} |
64,117 | I had to delete all the rows from a log table that contained about 5 million rows. My initial try was to issue the following command in query analyzer: delete from client_log which took a very long time. | Check out truncate table which is a lot faster. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4096/"
]
} |
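For the table in that question the statement is a one-liner; keep in mind that TRUNCATE TABLE cannot be used on a table referenced by a foreign key, does not fire DELETE triggers, and resets any identity column.

TRUNCATE TABLE client_log;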
64,213 | Are there any open-source libraries that all programmers should know about? I'm thinking something general, a sort of extension to the standard java.util that contains basic functions that are useful for all kinds of applications. | Apache Commons, Log4j, Google Collections | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64213",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8330/"
]
} |
64,237 | I've googled around for this, but I still have trouble relating to what Django defines as "apps". Should I create a new app for each piece of functionality in a site, even though it uses models from the main project? Do you guys have a good rule of thumb for when to split off a new app, and when to keep functionality together with the "main project" or other apps? | James Bennett has a wonderful set of slides on how to organize reusable apps in Django. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64237",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8420/"
]
} |
64,272 | I want to create a custom control in C#. But every time I have to fully redraw my control, it flickers, even if I use double buffering (drawing to an Image first, and blitting that). How do I eliminate flicker when I have to fully redraw? | You could try putting the following in your constructor after the InitializeComponent call. SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true); EDIT: If you're giving this a go, if you can, remove your own double buffering code and just have the control draw itself in response to the appropriate virtual methods being called. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7305/"
]
} |
64,333 | What do I lose by adopting test driven design? List only negatives; do not list benefits written in a negative form. | If you want to do "real" TDD (read: test first with the red, green, refactor steps) then you also have to start using mocks/stubs, when you want to test integration points. When you start using mocks, after a while, you will want to start using Dependency Injection (DI) and a Inversion of Control (IoC) container. To do that you need to use interfaces for everything (which have a lot of pitfalls themselves). At the end of the day, you have to write a lot more code, than if you just do it the "plain old way". Instead of just a customer class, you also need to write an interface, a mock class, some IoC configuration and a few tests. And remember that the test code should also be maintained and cared for. Tests should be as readable as everything else and it takes time to write good code. Many developers don't quite understand how to do all these "the right way". But because everybody tells them that TDD is the only true way to develop software, they just try the best they can. It is much harder than one might think. Often projects done with TDD end up with a lot of code that nobody really understands. The unit tests often test the wrong thing, the wrong way. And nobody agrees how a good test should look like, not even the so called gurus. All those tests make it a lot harder to "change" (opposite to refactoring) the behavior of your system and simple changes just becomes too hard and time consuming. If you read the TDD literature, there are always some very good examples, but often in real life applications, you must have a user interface and a database. This is where TDD gets really hard, and most sources don't offer good answers. And if they do, it always involves more abstractions: mock objects, programming to an interface, MVC/MVP patterns etc., which again require a lot of knowledge, and... you have to write even more code. So be careful... if you don't have an enthusiastic team and at least one experienced developer who knows how to write good tests and also knows a few things about good architecture, you really have to think twice before going down the TDD road. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64333",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8509/"
]
} |
64,360 | When I cut (kill) text in Emacs 22.1.1 (in its own window on X, in KDE, on Kubuntu), I can't paste (yank) it in any other application. | Insert the following into your .emacs file: (setq x-select-enable-clipboard t) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64360",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8522/"
]
} |
64,392 | I am looking into game programming in Java to see if it is feasible. When googling for it I find several old references to Java2D, Project Darkstar (Sun's MMO-server) and some books on Java game programming. But a lot of the information seems to be several years old. So the question I am asking, is anyone creating any games in Java SE 1.5 or above? If so, what frameworks are used and are there any best practices or libraries available? | There is the excellent open-source 3D engine called jMonkey ( http://www.jmonkeyengine.com ), which is being used for a few commercial projects as well as by hobby developers. There is also, at a lower level, the LWJGL library that jMonkeyEngine is built on; it is a set of APIs that wrap OpenGL and provide other game-specific libraries. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64392",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/103373/"
]
} |
64,420 | I don't want to take the time to learn Obj-C. I've spent 7+ years doing web application programming. Shouldn't there be a way to use the WebView and just write the whole app in javascript, pulling the files right from the resources of the project? | I found the answer after searching around. Here's what I have done: Create a new project in XCode. I think I used the view-based app. Drag a WebView object onto your interface and resize. Inside of your WebViewController.m (or similarly named file, depending on the name of your view), in the viewDidLoad method: NSString *filePath = [[NSBundle mainBundle] pathForResource:@"index" ofType:@"html"]; NSData *htmlData = [NSData dataWithContentsOfFile:filePath]; if (htmlData) { NSBundle *bundle = [NSBundle mainBundle]; NSString *path = [bundle bundlePath]; NSString *fullPath = [NSBundle pathForResource:@"index" ofType:@"html" inDirectory:path]; [webView loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:fullPath]]];} Now any files you have added as resources to the project are available for use in your web app. I've got an index.html file including javascript and css and image files with no problems. The only limitation I've found so far is that I can't create new folders so all the files clutter up the resources folder. Trick: make sure you've added the file as a resource in XCode or the file won't be available. I've been adding an empty file in XCode, then dragging my file on top in the finder. That's been working for me. Note: I realize that Obj-C must not be that hard to learn. But since I already have this app existing in JS and I know it works in Safari this is a much faster dev cycle for me. Some day I'm sure I'll have to break down and learn Obj-C. A few other resources I found helpful: Calling Obj-C from javascript: calling objective-c from javascript Calling javascript from Obj-C: iphone app development for web hackers Reading files from application bundle: uiwebview | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8597/"
]
} |
64,436 | I'm using Excel VBA to a write a UDF. I would like to overload my own UDF with a couple of different versions so that different arguments will call different functions. As VBA doesn't seem to support this, could anyone suggest a good, non-messy way of achieving the same goal? Should I be using Optional arguments or is there a better way? | Declare your arguments as Optional Variants , then you can test to see if they're missing using IsMissing() or check their type using TypeName() , as shown in the following example: Public Function Foo(Optional v As Variant) As Variant If IsMissing(v) Then Foo = "Missing argument" ElseIf TypeName(v) = "String" Then Foo = v & " plus one" Else Foo = v + 1 End IfEnd Function This can be called from a worksheet as =FOO() , =FOO( number ) , or =FOO(" string ") . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64436",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69157/"
]
} |
64,505 | I'm using the following method to send mail from Python using SMTP. Is it the right method to use or are there gotchas I'm missing ? from smtplib import SMTPimport datetimedebuglevel = 0smtp = SMTP()smtp.set_debuglevel(debuglevel)smtp.connect('YOUR.MAIL.SERVER', 26)smtp.login('USERNAME@DOMAIN', 'PASSWORD')from_addr = "John Doe <[email protected]>"to_addr = "[email protected]"subj = "hello"date = datetime.datetime.now().strftime( "%d/%m/%Y %H:%M" )message_text = "Hello\nThis is a mail from your server\n\nBye\n"msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % ( from_addr, to_addr, subj, date, message_text )smtp.sendmail(from_addr, to_addr, msg)smtp.quit() | The script I use is quite similar; I post it here as an example of how to use the email.* modules to generate MIME messages; so this script can be easily modified to attach pictures, etc. I rely on my ISP to add the date time header. My ISP requires me to use a secure smtp connection to send mail, I rely on the smtplib module (downloadable at http://www1.cs.columbia.edu/~db2501/ssmtplib.py ) As in your script, the username and password, (given dummy values below), used to authenticate on the SMTP server, are in plain text in the source. This is a security weakness; but the best alternative depends on how careful you need (want?) to be about protecting these. ======================================= #! /usr/local/bin/pythonSMTPserver = 'smtp.att.yahoo.com'sender = 'me@my_email_domain.net'destination = ['recipient@her_email_domain.com']USERNAME = "USER_NAME_FOR_INTERNET_SERVICE_PROVIDER"PASSWORD = "PASSWORD_INTERNET_SERVICE_PROVIDER"# typical values for text_subtype are plain, html, xmltext_subtype = 'plain'content="""\Test message"""subject="Sent from Python"import sysimport osimport refrom smtplib import SMTP_SSL as SMTP # this invokes the secure SMTP protocol (port 465, uses SSL)# from smtplib import SMTP # use this for standard SMTP protocol (port 25, no encryption)# old version# from email.MIMEText import MIMETextfrom email.mime.text import MIMETexttry: msg = MIMEText(content, text_subtype) msg['Subject']= subject msg['From'] = sender # some SMTP servers will do this automatically, not all conn = SMTP(SMTPserver) conn.set_debuglevel(False) conn.login(USERNAME, PASSWORD) try: conn.sendmail(sender, destination, msg.as_string()) finally: conn.quit()except: sys.exit( "mail failed; %s" % "CUSTOM_ERROR" ) # give an error message | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8206/"
]
} |
64,570 | PHP's explode function returns an array of strings split on some provided substring. It will return empty strings when there are leading, trailing, or consecutive delimiters, like this: var_dump(explode('/', '1/2//3/'));array(5) { [0]=> string(1) "1" [1]=> string(1) "2" [2]=> string(0) "" [3]=> string(1) "3" [4]=> string(0) ""} Is there some different function or option or anything that would return everything except the empty strings? var_dump(different_explode('/', '1/2//3/'));array(3) { [0]=> string(1) "1" [1]=> string(1) "2" [2]=> string(1) "3"} | Try preg_split . $exploded = preg_split('@/@', '1/2//3/', -1, PREG_SPLIT_NO_EMPTY); | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5726/"
]
} |
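An alternative sketch that stays with explode: filtering on strlen drops only the empty strings (a bare array_filter would also drop "0"), and array_values re-indexes the result since array_filter preserves the original keys.

$parts = array_values(array_filter(explode('/', '1/2//3/'), 'strlen'));
var_dump($parts);   // 3 elements: "1", "2", "3"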
64,582 | I was discussing neural networks (NN) with a friend over lunch the other day and he claimed that the performance of a NN written in Java would be similar to one written in C++. I know that with 'just in time' compiler techniques Java can do very well, but somehow I just don't buy it. Does anyone have any experience that would shed light on this issue? This page is the extent of my reading on the subject. | The Hotspot JIT can now produce code faster than C++. The reason is run-time empirical optimization. For example, it can see that a certain loop takes the "false" branch 99% of the time and reorder the machine code instructions accordingly. There are lots of articles about this. If you want all the details, read Sun's excellent whitepaper. For more informal info, try this one. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7545/"
]
} |
64,599 | I am considering converting a project that I've inherited from .net 1.1 to .net 2.0. The main warning I'm concerned about is that it wants me to switch from System.Web.Mail to using System.Net.Mail . I'm not ready to re-write all the components using the obsolete System.Web.Mail , so I'm curious to hear if any community members have had problems using it under .net 2.0? | System.Web.Mail is not a full .NET native implementation of the SMTP protocol. Instead, it uses the pre-existing COM functionality in CDONTS. System.Net.Mail, in contrast, is a fully managed implementation of an SMTP client. I've had far fewer problems with System.Net.Mail as it avoids COM hell. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64599",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856/"
]
} |
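For comparison, a minimal System.Net.Mail sketch (host, port, credentials and addresses are placeholders); most System.Web.Mail code maps over to MailMessage/SmtpClient fairly directly.

using System.Net;
using System.Net.Mail;

class MailExample
{
    static void Main()
    {
        MailMessage message = new MailMessage("from@example.com", "to@example.com",
                                              "Subject line", "Body text");
        SmtpClient client = new SmtpClient("smtp.example.com", 25);
        client.Credentials = new NetworkCredential("user", "password");  // only if the server requires authentication
        client.Send(message);
    }
}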
64,602 | There are three assembly version attributes. What are differences? Is it ok if I use AssemblyVersion and ignore the rest? MSDN says: AssemblyVersion : Specifies the version of the assembly being attributed. AssemblyFileVersion : Instructs a compiler to use a specific version number for the Win32 file version resource. The Win32 file version is not required to be the same as the assembly's version number. AssemblyInformationalVersion : Defines additional version information for an assembly manifest. This is a follow-up to What are the best practices for using Assembly Attributes? | AssemblyVersion Where other assemblies that reference your assembly will look. If this number changes, other assemblies must update their references to your assembly! Only update this version if it breaks backward compatibility. The AssemblyVersion is required. I use the format: major.minor (and major for very stable codebases). This would result in: [assembly: AssemblyVersion("1.3")] If you're following SemVer strictly then this means you only update when the major changes, so 1.0, 2.0, 3.0, etc. AssemblyFileVersion Used for deployment (like setup programs). You can increase this number for every deployment. Use it to mark assemblies that have the same AssemblyVersion but are generated from different builds and/or code. In Windows, it can be viewed in the file properties. The AssemblyFileVersion is optional. If not given, the AssemblyVersion is used. I use the format: major.minor.patch.build , where I follow SemVer for the first three parts and use the buildnumber of the buildserver for the last part (0 for local build).This would result in: [assembly: AssemblyFileVersion("1.3.2.42")] Be aware that System.Version names these parts as major.minor.build.revision ! AssemblyInformationalVersion The Product version of the assembly. This is the version you would use when talking to customers or for display on your website. This version can be a string, like ' 1.0 Release Candidate '. The AssemblyInformationalVersion is optional. If not given, the AssemblyFileVersion is used. I use the format: major.minor[.patch] [revision as string] . This would result in: [assembly: AssemblyInformationalVersion("1.3 RC1")] | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/64602",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2361/"
]
} |
64,605 | Is it possible to use both JScript and VBScript in the same HTA? Can I call VBScript functions from JScript and vice-versa? Are there any "gotchas," like the JScript running first and the VBScript running second (classic ASP pages have this issue). | Yeah, just separate them into different script tags: <script language="javascript"> // javascript code</script><script language="vbscript"> ' vbscript code</script> Edit: And, yeah, you can cross call between Javascript and VBScript with no extra work. Edit: This is also true of ANY Windows Scripting technology. It works in WSF files and can include scripts written in any supported ActiveScript language such as Perl as long as the engine is installed. Edit: The specific "gotcha" of all JScript being executed first, then VBScript is related to how ASP processes scripts. The MSHTA host (which uses IE's engine) does not have this problem. I'm not much into HTAs though, so I can't address any other possible "gotchas". | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5616/"
]
} |
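A small illustration of the cross-calling mentioned above: in an HTA both engines share one script namespace, so the JScript block can call the VBScript function directly (the function name here is made up).

<script language="vbscript">
    Function AddOne(x)
        AddOne = x + 1
    End Function
</script>

<script language="javascript">
    // Calls straight into the VBScript-defined function:
    var answer = AddOne(41);   // 42
    alert(answer);
</script>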
64,631 | I know next to nothing when it comes to the how and why of https connections. Obviously, when I'm transmitting secure data like passwords or especially credit card information, https is a critical tool. What do I need to know about it, though? What are the most common mistakes you see developers making when they implement it in their projects? Are there times when https is just a bad idea? Thanks! | An HTTPS, or Secure Sockets Layer (SSL) certificate is served for a site, and is typically signed by a Certificate Authority (CA), which is effectively a trusted 3rd party that verifies some basic details about your site, and certifies it for use in browsers. If your browser trusts the CA, then it trusts any certificates signed by that CA (this is known as the trust chain). Each HTTP (or HTTPS) request consists of two parts: a request, and a response. When you request something through HTTPS, there are actually a few things happening in the background: The client (browser) does a "handshake", where it requests the server's public key and identification. At this point, the browser can check for validity (does the site name match? is the date range current? is it signed by a CA it trusts?). It can even contact the CA and make sure the certificate is valid. The client creates a new pre-master secret, which is encrypted using the servers's public key (so only the server can decrypt it) and sent to the server The server and client both use this pre-master secret to generate the master secret, which is then used to create a symmetric session key for the actual data exchange Both sides send a message saying they're done the handshake The server then processes the request normally, and then encrypts the response using the session key If the connection is kept open, the same symmetric key will be used for each. If a new connection is established, and both sides still have the master secret, new session keys can be generated in an 'abbreviated handshake'. Typically a browser will store a master secret until it's closed, while a server will store it for a few minutes or several hours (depending on configuration). For more on the length of sessions see How long does an HTTPS symmetric key last? Certificates and Hostnames Certificates are assigned a Common Name (CN), which for HTTPS is the domain name. The CN has to match exactly, eg, a certificate with a CN of "example.com" will NOT match the domain "www.example.com", and users will get a warning in their browser. Before SNI , it was not possible to host multiple domain names on one IP. Because the certificate is fetched before the client even sends the actual HTTP request, and the HTTP request contains the Host: header line that tells the server what URL to use, there is no way for the server to know what certificate to serve for a given request. SNI adds the hostname to part of the TLS handshake, and so as long as it's supported on both client and server (and in 2015, it is widely supported) then the server can choose the correct certificate. Even without SNI, one way to serve multiple hostnames is with certificates that include Subject Alternative Names (SANs), which are essentially additional domains the certificate is valid for. Google uses a single certificate to secure many of it's sites, for example. Another way is to use wildcard certificates. It is possible to get a certificate like " .example.com" in which case "www.example.com" and "foo.example.com" will both be valid for that certificate. 
However, note that "example.com" does not match " .example.com", and neither does "foo.bar.example.com". If you use "www.example.com" for your certificate, you should redirect anyone at "example.com" to the "www." site. If they request https://example.com , unless you host it on a separate IP and have two certificates, the will get a certificate error. Of course, you can mix both wildcard and SANs (as long as your CA lets you do this) and get a certificate for both "example.com" and with SANs " .example.com", "example.net", and " .example.net", for example. Forms Strictly speaking, if you are submitting a form, it doesn't matter if the form page itself is not encrypted, as long as the submit URL goes to an https:// URL. In reality, users have been trained (at least in theory) not to submit pages unless they see the little "lock icon", so even the form itself should be served via HTTPS to get this. Traffic and Server Load HTTPS traffic is much bigger than its equivalent HTTP traffic (due to encryption and certificate overhead), and it also puts a bigger strain on the server (encrypting and decrypting). If you have a heavily-loaded server, it may be desirable to be very selective about what content is served using HTTPS. Best Practices If you're not just using HTTPS for the entire site, it should automatically redirect to HTTPS as required. Whenever a user is logged in, they should be using HTTPS, and if you're using session cookies, the cookie should have the secure flag set . This prevents interception of the session cookie, which is especially important given the popularity of open (unencrypted) wifi networks. Any resources on the page should come from the same scheme being used for the page. If you try to fetch images from http:// when the page is loaded with HTTPS, the user will get security warnings. You should either use fully-qualified URLs, or another easy way is to use absolute URLs that do not include the hostname (eg, src="/images/foo.png") because they work for both. This includes external resources (eg, Google Analytics) Don't do POSTs (form submits) when changing from HTTPS to HTTP. Most browsers will flag this as a security warning. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
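As a concrete example of the "secure flag" advice for an ASP.NET site (cookie name and value are placeholders), a cookie can be marked so the browser only ever sends it over HTTPS; the same effect can also be configured site-wide via httpCookies requireSSL in web.config.

HttpCookie cookie = new HttpCookie("SessionToken", tokenValue);
cookie.Secure = true;     // never sent over plain HTTP
cookie.HttpOnly = true;   // also hidden from client-side script
Response.Cookies.Add(cookie);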
64,639 | What's the proper way to convert from a scientific notation string such as "1.234567E-06" to a floating point variable using C#? | Double.Parse("1.234567E-06", System.Globalization.NumberStyles.Float); | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64639",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2488/"
]
} |
64,640 | Someone please correct me if I'm wrong, but parsing a yyyy/MM/dd (or other specific formats) dates in C# should be as easy as DateTime.ParseExact(theDate, "yyyy/MM/dd"); but no, C# forces you to create an IFormatProvider. Is there an app.config friendly way of setting this so I don't need to do this each time? DateTime.ParseExact(theDate, "yyyy/MM/dd", new CultureInfo("en-CA", true)); | The IFormatProvider argument can be null. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7311/"
]
} |
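To spell the shortcut out: null falls back to the current culture's formatting information. Note that "/" in a custom format string stands for the culture's date separator, so if you want the pattern to mean a literal slash regardless of the thread culture, the invariant culture is the simplest explicit choice.

using System;
using System.Globalization;

DateTime d1 = DateTime.ParseExact(theDate, "yyyy/MM/dd", null);
DateTime d2 = DateTime.ParseExact(theDate, "yyyy/MM/dd", CultureInfo.InvariantCulture);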