source_id | question | response | metadata
---|---|---|---|
64,649 | If I issue the find command as follows: find . -name *.ear It prints out: ./dir1/dir2/earFile1.ear ./dir1/dir2/earFile2.ear ./dir1/dir3/earFile1.ear I want to 'print' the name and the size to the command line: ./dir1/dir2/earFile1.ear 5000 KB ./dir1/dir2/earFile2.ear 5400 KB ./dir1/dir3/earFile1.ear 5400 KB | find . -name '*.ear' -exec ls -lh {} \; just the h extra from jer.drab.org's reply. saves time converting to MB mentally ;) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/700/"
]
} |
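The same name-plus-size listing can also be sketched outside of find. The following is a rough Python equivalent, not from the answer above; the `.ear` suffix and the integer-KB rounding are illustrative assumptions:

```python
import os

def find_ear_files(root="."):
    """Walk a tree like `find . -name '*.ear'` and collect (path, size-in-KB)."""
    results = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".ear"):
                path = os.path.join(dirpath, name)
                results.append((path, os.path.getsize(path) // 1024))
    return results

if __name__ == "__main__":
    for path, size_kb in find_ear_files():
        print(f"{path} {size_kb} KB")
```

Unlike `ls -lh`, this gives you the size as a number you can sort or total.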
64,653 | I am trying to delete several rows from a MySQL 5.0.45 database: delete from bundle_inclusions; The client works for a while and then returns the error: Lock wait timeout exceeded; try restarting transaction It's possible there is some uncommitted transaction out there that has a lock on this table, but I need this process to trump any such locks. How do I break the lock in MySQL? | I agree with Erik; TRUNCATE TABLE is the way to go. However, if you can't use that for some reason (for example, if you don't really want to delete every row in the table), you can try the following options: Delete the rows in smaller batches (e.g. DELETE FROM bundle_inclusions WHERE id BETWEEN ? AND ?) If it's a MyISAM table (actually, this may work with InnoDB too), try issuing LOCK TABLES before the DELETE. This should guarantee that you have exclusive access. If it's an InnoDB table, then after the timeout occurs, use SHOW ENGINE INNODB STATUS. This should give you some insight into why the lock acquisition failed. If you have the SUPER privilege you could try SHOW FULL PROCESSLIST to see what other connections (if any) are using the table, and then use KILL to get rid of the one(s) you're competing with. I'm sure there are many other possibilities; I hope one of these helps. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8344/"
]
} |
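The batched-delete suggestion can be illustrated with a small sketch. This uses Python with SQLite rather than MySQL, purely to show the "nibble in small transactions" pattern; the table name comes from the question and the batch size of 100 is an arbitrary choice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bundle_inclusions (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO bundle_inclusions (id) VALUES (?)",
                 [(i,) for i in range(1, 1001)])
conn.commit()

# Nibble the table away 100 rows at a time, committing between batches so
# no single transaction holds its locks for long.
while True:
    cur = conn.execute(
        "DELETE FROM bundle_inclusions WHERE id IN "
        "(SELECT id FROM bundle_inclusions LIMIT 100)")
    conn.commit()
    if cur.rowcount == 0:
        break
```

On a real MySQL server each short transaction gives competing sessions a chance to acquire their locks between batches, which is the point of the technique.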
64,689 | Is this function declaration in C#: void foo(string mystring) the same as this one in C: void foo(char *) i.e. In C#, does the called function receive a pointer behind the scenes? | In this specific instance, it is more like: void foo(const char *); .Net strings are immutable and passed by reference. However, in general C# receives a pointer or reference to an object behind the scenes. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
64,749 | When I run a particular SQL script in Unix environments, I see a '^M' character at the end of each line of the SQL script as it is echoed to the command line. I don't know on which OS the SQL script was initially created. What is causing this and how do I fix it? | It's caused by the DOS/Windows line-ending characters. Like Andy Whitfield said, the Unix command dos2unix will help fix the problem. If you want more information, you can read the man pages for that command. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/64749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7648/"
]
} |
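If dos2unix isn't installed, the same fix takes only a few lines of Python. This is a minimal sketch; the function names are mine, not from the answer:

```python
def dos2unix(data: bytes) -> bytes:
    """Replace DOS/Windows CR-LF line endings with Unix LF."""
    return data.replace(b"\r\n", b"\n")

def convert_file(path: str) -> None:
    """Rewrite a file in place with Unix line endings."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(dos2unix(data))
```

Working on bytes (not text) avoids any accidental re-encoding of the file's contents.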
64,760 | In a web application, is it acceptable to use HTML in your code (non-scripted languages, Java, .NET)? There are two major sub questions: Should you use code to print HTML, or otherwise directly create HTML that is displayed? Should you mix code within your HTML pages? | Generally, it's better to keep presentation (HTML) separate from logic ("back-end" code). Your code is decoupled and easier to maintain this way. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8509/"
]
} |
64,782 | int i = 4; string text = "Player "; cout << (text + i); I'd like it to print Player 4. The above is obviously wrong but it shows what I'm trying to do here. Is there an easy way to do this or do I have to start adding new includes? | Well, if you use cout you can just write the integer directly to it, as in std::cout << text << i; The C++ way of converting all kinds of objects to strings is through string streams. If you don't have one handy, just create one. #include <sstream> std::ostringstream oss; oss << text << i; std::cout << oss.str(); Alternatively, you can just convert the integer and append it to the string. oss << i; text += oss.str(); Finally, the Boost libraries provide boost::lexical_cast, which wraps around the stringstream conversion with a syntax like the built-in type casts. #include <boost/lexical_cast.hpp> text += boost::lexical_cast<std::string>(i); This also works the other way around, i.e. to parse strings. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
64,786 | What is your favorite method to handle errors in Bash? The best example of handling errors I have found on the web was written by William Shotts, Jr at http://www.linuxcommand.org . He suggests using the following function for error handling in Bash:

#!/bin/bash
# A slicker error handling routine
# I put a variable in my scripts named PROGNAME which
# holds the name of the program being run. You can get this
# value from the first item on the command line ($0).
# Reference: This was copied from <http://www.linuxcommand.org/wss0150.php>
PROGNAME=$(basename $0)

function error_exit
{
# ----------------------------------------------------------------
# Function for exit due to fatal program error
# Accepts 1 argument:
#   string containing descriptive error message
# ----------------------------------------------------------------
    echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2
    exit 1
}

# Example call of the error_exit function. Note the inclusion
# of the LINENO environment variable. It contains the current
# line number.
echo "Example of error with line number and message"
error_exit "$LINENO: An error has occurred."

Do you have a better error handling routine that you use in Bash scripts? | Use a trap!

tempfiles=( )
cleanup() {
  rm -f "${tempfiles[@]}"
}
trap cleanup 0

error() {
  local parent_lineno="$1"
  local message="$2"
  local code="${3:-1}"
  if [[ -n "$message" ]] ; then
    echo "Error on or near line ${parent_lineno}: ${message}; exiting with status ${code}"
  else
    echo "Error on or near line ${parent_lineno}; exiting with status ${code}"
  fi
  exit "${code}"
}
trap 'error ${LINENO}' ERR

...then, whenever you create a temporary file:

temp_foo="$(mktemp -t foobar.XXXXXX)"
tempfiles+=( "$temp_foo" )

and $temp_foo will be deleted on exit, and the current line number will be printed. (set -e will likewise give you exit-on-error behavior, though it comes with serious caveats and weakens code's predictability and portability). You can either let the trap call error for you (in which case it uses the default exit code of 1 and no message) or call it yourself and provide explicit values; for instance:

error ${LINENO} "the foobar failed" 2

will exit with status 2, and give an explicit message. Alternatively shopt -s extdebug and give the first lines of the trap a little modification to trap all non-zero exit codes across the board (mind set -e non-error non-zero exit codes):

error() {
  local last_exit_status="$?"
  local parent_lineno="$1"
  local message="${2:-(no message ($last_exit_status))}"
  local code="${3:-$last_exit_status}"
  # ... continue as above
}
trap 'error ${LINENO}' ERR
shopt -s extdebug

This then is also "compatible" with set -eu. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
64,790 | I have breakpoints set but Xcode appears to ignore them. | First of all, I agree 100% with the earlier folks that said turn OFF Load Symbols Lazily . I have two more things to add. (My first suggestion sounds obvious, but the first time someone suggested it to me, my reaction went along these lines: "come on, please, you really think I wouldn't know better...... oh.") Make sure you haven't accidentally set "Active Build Configuration" to "Release." Under "Targets" in the graphical tree display of your project, right click on your Target and do "Get Info." Look for a property named "Generate Debug Symbols" (or similar) and make sure this is CHECKED (aka ON). Also, you might try finding (also in Target >> Get Info) a property called "Debug Information Format" and setting it to "Dwarf with dsym file." There are a number of other properties under Target >> Get Info that might affect you. Look for things like optimizing or compressing code and turn that stuff OFF (I assume you are working in a debug mode, so that this is not bad advice). Also, look for things like stripping symbols and make sure that is also OFF. For example, "Strip Linked Product" should be set to "No" for the Debug target. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8761/"
]
} |
64,851 | How would you write (in C/C++) a macro which tests if an integer type (given as a parameter) is signed or unsigned? #define is_this_type_signed (my_type) ... | If what you want is a simple macro, this should do the trick: #define is_type_signed(my_type) (((my_type)-1) < 0) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4528/"
]
} |
64,860 | What is the fastest, easiest tool or method to convert text files between character sets? Specifically, I need to convert from UTF-8 to ISO-8859-15 and vice versa. Everything goes: one-liners in your favorite scripting language, command-line tools or other utilities for OS, web sites, etc. Best solutions so far: On Linux/UNIX/OS X/cygwin: Gnu iconv suggested by Troels Arvin is best used as a filter . It seems to be universally available. Example: $ iconv -f UTF-8 -t ISO-8859-15 in.txt > out.txt As pointed out by Ben , there is an online converter using iconv . recode ( manual ) suggested by Cheekysoft will convert one or several files in-place . Example: $ recode UTF8..ISO-8859-15 in.txt This one uses shorter aliases: $ recode utf8..l9 in.txt Recode also supports surfaces which can be used to convert between different line ending types and encodings: Convert newlines from LF (Unix) to CR-LF (DOS): $ recode ../CR-LF in.txt Base64 encode file: $ recode ../Base64 in.txt You can also combine them. Convert a Base64 encoded UTF8 file with Unix line endings to Base64 encoded Latin 1 file with Dos line endings: $ recode utf8/Base64..l1/CR-LF/Base64 file.txt On Windows with Powershell ( Jay Bazuzi ): PS C:\> gc -en utf8 in.txt | Out-File -en ascii out.txt (No ISO-8859-15 support though; it says that supported charsets are unicode, utf7, utf8, utf32, ascii, bigendianunicode, default, and oem.) Edit Do you mean iso-8859-1 support? Using "String" does this e.g. for vice versa gc -en string in.txt | Out-File -en utf8 out.txt Note: The possible enumeration values are "Unknown, String, Unicode, Byte, BigEndianUnicode, UTF8, UTF7, Ascii". CsCvt - Kalytta's Character Set Converter is another great command line based conversion tool for Windows. | Stand-alone utility approach iconv -f ISO-8859-1 -t UTF-8 in.txt > out.txt -f ENCODING the encoding of the input-t ENCODING the encoding of the output You don't have to specify either of these arguments. 
They will default to your current locale, which is usually UTF-8. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/64860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2948/"
]
} |
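The iconv/recode behavior above can also be reproduced in a scripting language if neither tool is at hand. Here is a minimal Python sketch (the function name and defaults are mine; UTF-8 and ISO-8859-15 are the pair from the question):

```python
def convert_encoding(data: bytes, src: str = "utf-8",
                     dst: str = "iso-8859-15") -> bytes:
    """Decode bytes from one character set and re-encode in another,
    mirroring `iconv -f UTF-8 -t ISO-8859-15`."""
    return data.decode(src).encode(dst)
```

Characters that don't exist in the target set raise a UnicodeEncodeError, much as iconv aborts by default, so lossy conversions fail loudly rather than silently.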
64,894 | Is it possible to select from show tables in MySQL? SELECT * FROM (SHOW TABLES) AS `my_tables` Something along these lines, though the above does not work (on 5.0.51a, at least). | I think you want SELECT * FROM INFORMATION_SCHEMA.TABLES See http://dev.mysql.com/doc/refman/5.0/en/tables-table.html | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/64894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
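The idea that table metadata is itself queryable carries over to other engines too. As an illustrative aside (not from the answer), SQLite exposes its catalogue as the sqlite_master table instead of INFORMATION_SCHEMA:

```python
import sqlite3

# SQLite's equivalent of SELECT * FROM INFORMATION_SCHEMA.TABLES:
# the catalogue lives in the sqlite_master table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER)")
conn.execute("CREATE TABLE gadgets (id INTEGER)")
table_names = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
```

Either way, the point of the answer holds: you query a system table rather than trying to SELECT from the output of a SHOW command.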
64,904 | I need to support exact phrases (enclosed in quotes) in an otherwise space-separated list of terms. Thus splitting the respective string by the space-character is not sufficient anymore. Example: input: 'foo bar "lorem ipsum" baz' output: ['foo', 'bar', 'lorem ipsum', 'baz'] I wonder whether this could be achieved with a single RegEx, rather than performing complex parsing or split-and-rejoin operations. Any help would be greatly appreciated! | var str = 'foo bar "lorem ipsum" baz'; var results = str.match(/("[^"]+"|[^"\s]+)/g); ... returns the array you're looking for. Note, however: Bounding quotes are included, so can be removed with replace(/^"([^"]+)"$/,"$1") on the results. Spaces between the quotes will stay intact. So, if there are three spaces between lorem and ipsum , they'll be in the result. You can fix this by running replace(/\s+/," ") on the results. If there's no closing " after ipsum (i.e. an incorrectly-quoted phrase) you'll end up with: ['foo', 'bar', 'lorem', 'ipsum', 'baz'] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64904",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
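The same alternation pattern works in other regex engines. A Python sketch of the answer's approach, including the quote-stripping step it recommends (the helper name is mine):

```python
import re

def split_terms(query: str) -> list:
    """Split a query into terms, keeping double-quoted phrases together."""
    parts = re.findall(r'"[^"]+"|[^"\s]+', query)
    # Drop the bounding quotes from quoted phrases, per the answer's note.
    return [p.strip('"') for p in parts]
```

As the answer warns, an unterminated quote degrades gracefully into individual words rather than raising an error.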
64,905 | I want to create templates to base new reports on, so they have common designs. How do you do it? | The need to produce reports with a common starting design and format is key to any project involving clients and their reports. I have been working on reports for over 10 years now. This has not been the largest portion of my jobs through the years, but it has been a very important one. The key to any report project is not to recreate the mundane aspects of the reports each time but to use templates. The use of templates is not widely known for Microsoft's SQL Server Reporting Services. Knowing how to save report templates so that you and your team can use these shortcuts when creating a new report in Visual Studio 2005 will help save time and keep all reports using the same layout and design. Create a set of reports with the following suggestions: Page size -- 8.5 by 11 (letter) and 8.5 by 14 (legal) Orientation -- portrait and landscape for all paper sizes Header -- Text Box for report name, Text Box for report subtitle, client or brand logo Footer -- page number/total pages, date and time report printed Take all the rdl files for the reports created from the suggestions and copy the files to the following directory: C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies\ProjectItems\ReportProject When creating a new report in your Visual Studio 2005 report project through Add|New Item alt text http://www.cloudsocket.com/images/image-thumb14.png The new report dialog will present the list of items from the directory where the new templates were placed. alt text http://www.cloudsocket.com/images/image-thumb15.png Select the report that fits the requirement needed and proceed to develop your reports without needing to recreate the basics. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7001/"
]
} |
64,977 | How do you create SQL Server 2005 stored procedure templates in SQL Server 2005 Management Studio? | Another little nugget that I think will help people develop and be more productive in their database development. I am a fan of stored procedures and functions when I develop software solutions. I like my actual CRUD methods to be implemented at the database level. It allows me to balance out my work between the application software (business logic and data access) and the database itself. Not wanting to start a religious war, but I want to allow people to develop stored procedures more quickly and with best practices through templates. Let's start with making your own templates in SQL Server 2005 Management Studio. First, you need to show the Template Explorer in the Studio. alt text http://www.cloudsocket.com/images/image-thumb10.png This will show the following: alt text http://www.cloudsocket.com/images/image-thumb11.png alt text http://www.cloudsocket.com/images/image-thumb12.png alt text http://www.cloudsocket.com/images/image-thumb13.png The IDE will create a blank template. To edit the template, right click on the template and select Edit. You will get a blank Query window in the IDE. You can now insert your template implementation. Here is a template for a new stored procedure that includes a TRY CATCH. I like to include error handling in my stored procedures. With the new TRY CATCH addition to TSQL in SQL Server 2005, we should use this powerful exception handling mechanism throughout our code, including database code. Save the template and you are all ready to use your new template for stored procedure creation.
-- ======================================================
-- Create basic stored procedure template with TRY CATCH
-- ======================================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
CREATE PROCEDURE <Procedure_Name, sysname, ProcedureName>
    -- Add the parameters for the stored procedure here
    <@Param1, sysname, @p1> <Datatype_For_Param1, , int> = <Default_Value_For_Param1, , 0>,
    <@Param2, sysname, @p2> <Datatype_For_Param2, , int> = <Default_Value_For_Param2, , 0>
AS
BEGIN TRY
    BEGIN TRANSACTION -- Start the transaction
    SELECT @p1, @p2
    -- If we reach here, success!
    COMMIT
END TRY
BEGIN CATCH
    -- there was an error
    IF @@TRANCOUNT > 0
        ROLLBACK
    -- Raise an error with the details of the exception
    DECLARE @ErrMsg nvarchar(4000), @ErrSeverity int
    SELECT @ErrMsg = ERROR_MESSAGE(), @ErrSeverity = ERROR_SEVERITY()
    RAISERROR(@ErrMsg, @ErrSeverity, 1)
END CATCH
GO | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/64977",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7001/"
]
} |
64,981 | How do I create a unique constraint on an existing table in SQL Server 2005? I am looking for both the TSQL and how to do it in the Database Diagram. | The SQL command is: ALTER TABLE <tablename> ADD CONSTRAINT <constraintname> UNIQUE NONCLUSTERED ( <columnname> ) See the full syntax here . If you want to do it from a Database Diagram: right-click on the table and select 'Indexes/Keys' click the Add button to add a new index enter the necessary info in the Properties on the right hand side: the columns you want (click the ellipsis button to select) set Is Unique to Yes give it an appropriate name | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/64981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2469/"
]
} |
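The effect of such a constraint is easy to demonstrate end to end. The sketch below uses Python's sqlite3 (where a unique constraint on an existing table is spelled CREATE UNIQUE INDEX) rather than SQL Server; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
# SQLite's spelling of "ALTER TABLE ... ADD CONSTRAINT ... UNIQUE":
conn.execute("CREATE UNIQUE INDEX ux_users_email ON users (email)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
try:
    conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```

The second insert is rejected by the database itself, which is the guarantee the constraint buys you over application-level checks.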
65,034 | How would I remove the border from an iframe embedded in my web app? An example of the iframe is: <iframe src="myURL" width="300" height="300">Browser not compatible.</iframe> I would like the transition from the content on my page to the contents of the iframe to be seamless, assuming the background colors are consistent. The target browser is IE6 only and unfortunately solutions for others will not help. | Add the frameBorder attribute (note the capital ‘B’ ). So it would look like: <iframe src="myURL" width="300" height="300" frameBorder="0">Browser not compatible.</iframe> | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/65034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2650/"
]
} |
65,035 | Considering this code, can I be absolutely sure that the finally block always executes, no matter what something() is? try { something(); return success; } catch (Exception e) { return failure; } finally { System.out.println("I don't know if this will get printed out");} | Yes, finally will be called after the execution of the try or catch code blocks. The only times finally won't be called are: If you invoke System.exit() If you invoke Runtime.getRuntime().halt(exitStatus) If the JVM crashes first If the JVM reaches an infinite loop (or some other non-interruptable, non-terminating statement) in the try or catch block If the OS forcibly terminates the JVM process; e.g., kill -9 <pid> on UNIX If the host system dies; e.g., power failure, hardware error, OS panic, et cetera If the finally block is going to be executed by a daemon thread and all other non-daemon threads exit before finally is called | {
"score": 13,
"source": [
"https://Stackoverflow.com/questions/65035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/885027/"
]
} |
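The core guarantee — that finally runs even when try and catch both return early — is the same in Python, and easy to check directly. A small sketch (names are mine):

```python
events = []

def attempt(should_fail: bool) -> str:
    try:
        if should_fail:
            raise ValueError("boom")
        return "success"
    except ValueError:
        return "failure"
    finally:
        # Runs even though both branches above have already returned.
        events.append("finally ran")

first = attempt(False)
second = attempt(True)
```

Both calls record the finally-block side effect, whichever return path was taken.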
65,037 | As far as I know, in gcc you can write something like: #define DBGPRINT(fmt...) printf(fmt); Is there a way to do that in VC++? | Yes but only since VC++ 2005. The syntax for your example would be: #define DBGPRINT(fmt, ...) printf(fmt, __VA_ARGS__) A full reference is here . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65037",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9102/"
]
} |
65,071 | Is there a performant equivalent to the isnull function for DB2? Imagine some of our products are internal, so they don't have names: Select product.id, isnull(product.name, "Internal") From product Might return: 1 Socks 2 Shoes 3 Internal 4 Pants | For what it's worth, COALESCE is similar but IFNULL(expr1, default) is the exact match you're looking for in DB2. COALESCE allows multiple arguments, returning the first NON NULL expression, whereas IFNULL only permits the expression and the default. Thus SELECT product.ID, IFNULL(product.Name, "Internal") AS ProductName FROM Product Gives you what you're looking for as well as the previous answers, just adding for completeness. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9056/"
]
} |
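IFNULL behaves the same way in other engines that support it, so the example from the question can be run verbatim. Here it is against SQLite via Python, purely as a stand-in for DB2, using the product data from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO product VALUES (?, ?)",
                 [(1, "Socks"), (2, "Shoes"), (3, None), (4, "Pants")])
# IFNULL substitutes the default only where name IS NULL.
rows = conn.execute(
    "SELECT id, IFNULL(name, 'Internal') FROM product ORDER BY id").fetchall()
```

Swapping IFNULL for COALESCE here gives identical results; COALESCE only differs once you need more than two arguments.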
65,076 | I've trouble setting up Vim (7.1.xxx) for editing Python files (*.py).Indenting seems to be broken (optimal 4 spaces).I've followed some tutorials I found via Google. Still no effect :/ Please help. | I use this on my macbook: " configure expanding of tabs for various file typesau BufRead,BufNewFile *.py set expandtabau BufRead,BufNewFile *.c set expandtabau BufRead,BufNewFile *.h set expandtabau BufRead,BufNewFile Makefile* set noexpandtab" --------------------------------------------------------------------------------" configure editor with tabs and nice stuff..." --------------------------------------------------------------------------------set expandtab " enter spaces when tab is pressedset textwidth=120 " break lines when line length increasesset tabstop=4 " use 4 spaces to represent tabset softtabstop=4set shiftwidth=4 " number of spaces to use for auto indentset autoindent " copy indent from current line when starting a new line" make backspaces more powerfullset backspace=indent,eol,startset ruler " show line and column numbersyntax on " syntax highlightingset showcmd " show (partial) command in status line (edited to only show stuff related to indent / tabs) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65076",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9099/"
]
} |
65,091 | I'd like to be able to write a PHP class that behaves like an array and uses normal array syntax for getting & setting. For example (where Foo is a PHP class of my making): $foo = new Foo(); $foo['fooKey'] = 'foo value'; echo $foo['fooKey']; I know that PHP has the __get and __set magic methods but those don't let you use array notation to access items. Python handles it by overloading __getitem__ and __setitem__. Is there a way to do this in PHP? | If you extend ArrayObject or implement ArrayAccess then you can do what you want. ArrayObject ArrayAccess | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65091",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/305/"
]
} |
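Since the question cites Python's __getitem__/__setitem__ as the model, here is that side of the comparison spelled out — a minimal sketch of the behavior the asker wants from PHP's ArrayAccess:

```python
class Foo:
    """Container with array-style access via __getitem__/__setitem__,
    the Python counterpart of PHP's ArrayAccess interface."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

foo = Foo()
foo['fooKey'] = 'foo value'
```

PHP's ArrayAccess works analogously, with offsetGet/offsetSet playing the roles of these two dunder methods.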
65,093 | We have a live MySQL database that is 99% INSERTs, around 100 per second. We want to archive the data each day so that we can run queries on it without affecting the main, live database. In addition, once the archive is completed, we want to clear the live database. What is the best way to do this without (if possible) locking INSERTs? We use INSERT DELAYED for the queries. | http://www.maatkit.org/ has mk-archiver, which archives or purges rows from a table to another table and/or a file. It is designed to efficiently “nibble” data in very small chunks without interfering with critical online transaction processing (OLTP) queries. It accomplishes this with a non-backtracking query plan that keeps its place in the table from query to query, so each subsequent query does very little work to find more archivable rows. Another alternative is to simply create a new database table each day. MyISAM does have some advantages for this, since INSERTs to the end of the table don't generally block anyway, and there is a merge table type to bring them all back together. A number of websites log the httpd traffic to tables like that. With MySQL 5.1, there are also partition tables that can do much the same. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2183/"
]
} |
65,135 | We have a rather large SVN repository. Doing SVN updates is taking longer and longer the more we add code. We added svn:externals to folders that were repeated in some projects like the FCKeditor on various websites. This helped, but not that much. What is the best way to reduce update time and boost SVN speed? | If it's an older SVN repository (or even quite new, but wasn't set up optimally), it may be using the older BDB style of repository database. http://svn.apache.org/repos/asf/subversion/trunk/notes/fsfs has notes on the new one. To change from one to another isn't too hard - dump the entire history, re-initialise it with the new svn format of file system and re-import. It may also be useful at the same time to filter the repo-dump to remove entire checkins of useless information (I, for example, have removed 20MB+ tarball files that someone had checked in). As far as general speed goes - a quality (speedy) hard-drive and extra memory for OS-based caching would be hard to fault in terms of increasing the speed of how SVN will work. On the client side, if you have TortoiseSVN set up through PuttyAgent for SSH access to an external repository machine, you can also enable SSH compression, which can also help. Edit: SVN v1.5 also has the fsfs-reshard.py tool which can help split an FSFS-based svn repository into a number of directories - which can themselves be linked onto different drive spindles. If you have thousands of revisions, that can also help - if for no other reason than finding one file among thousands takes time (and you can tell if that's a problem by looking at the IO-wait times) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65135",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9140/"
]
} |
65,164 | Some 4 years back, I followed this MSDN article for DateTime usage best practices for building a .Net client on .Net 1.1 and ASMX web services (with SQL Server 2000 as the backend). I still remember the serialization issues I had with DateTime and the testing effort it took for servers in different time zones. My question is this: Is there a similar best practices document for some of the new technologies like WCF and SQL Server 2008, especially with the addition of new datetime types for storing time zone aware info? This is the environment: SQL Server 2008 on Pacific Time. Web Services layer on a different time zone. Clients could be using .Net 2.0 or .Net 3.5 on different time zones. If it makes it easy, we can force everyone to upgrade to .Net 3.5. :) Any good suggestions/best practices for the data types to be used in each layer? | I think the best way of doing this is to always pass the object as UTC, and convert to local time on the clients. By doing so, there is a common reference point for all clients. To convert to UTC, call ToUniversalTime on the DateTime object. Then, on the clients, call ToLocalTime to get it in their current time zone. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65164",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4337/"
]
} |
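The "normalize to UTC at the boundary, localize at the edge" advice is language-agnostic. A Python sketch of the same pattern (the fixed -8 offset stands in for Pacific Time and deliberately ignores DST, which a real system would handle with a proper tz database):

```python
from datetime import datetime, timedelta, timezone

def to_utc(dt: datetime) -> datetime:
    """Normalize an aware datetime to UTC before it crosses a service boundary,
    the analogue of .NET's DateTime.ToUniversalTime()."""
    return dt.astimezone(timezone.utc)

pacific = timezone(timedelta(hours=-8))  # illustrative fixed offset, no DST
local_noon = datetime(2008, 9, 15, 12, 0, tzinfo=pacific)
utc_value = to_utc(local_noon)
```

Each client then converts the stored UTC value back to its own zone for display, mirroring ToLocalTime on the .NET side.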
65,200 | I was reading a book on programming skills wherein the author asks the interviewee, "How do you crash a JVM?" I thought that you could do so by writing an infinite for-loop that would eventually use up all the memory. Anybody has any idea? | JNI . In fact, with JNI, crashing is the default mode of operation. You have to work extra hard to get it not to crash. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/65200",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9195/"
]
} |
65,205 | What's the best way to store a linked list in a MySQL database so that inserts are simple (i.e. you don't have to re-index a bunch of stuff every time) and such that the list can easily be pulled out in order? | Store an integer column in your table called 'position'. Record a 0 for the first item in your list, a 1 for the second item, etc. Index that column in your database, and when you want to pull your values out, sort by that column. alter table linked_list add column position integer not null default 0; alter table linked_list add index position_index (position); select * from linked_list order by position; To insert a value at index 3, modify the positions of rows 3 and above, and then insert: update linked_list set position = position + 1 where position >= 3; insert into linked_list (my_value, position) values ("new value", 3); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65205",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
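The shift-then-insert sequence from the answer can be verified end to end. The sketch below runs it against SQLite via Python as a stand-in for MySQL, using the answer's table layout and an insert at index 2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE linked_list (my_value TEXT, position INTEGER NOT NULL)")
conn.executemany("INSERT INTO linked_list VALUES (?, ?)",
                 [("a", 0), ("b", 1), ("c", 2), ("d", 3)])

# Insert "new" at index 2: shift everything at or after index 2 up by one,
# then insert into the freed slot, exactly as the answer describes.
conn.execute("UPDATE linked_list SET position = position + 1 WHERE position >= 2")
conn.execute("INSERT INTO linked_list VALUES (?, ?)", ("new", 2))

ordered = [row[0] for row in conn.execute(
    "SELECT my_value FROM linked_list ORDER BY position")]
```

The trade-off versus a true linked list is that each mid-list insert touches every later row, which is why the answer recommends indexing the position column.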
65,206 | Using jQuery , how can I dynamically set the size attribute of a select box? I would like to include it in this code: $("#mySelect").bind("click", function() { $("#myOtherSelect").children().remove(); var options = '' ; for (var i = 0; i < myArray[this.value].length; i++) { options += '<option value="' + myArray[this.value][i] + '">' + myArray[this.value][i] + '</option>'; } $("#myOtherSelect").html(options).attr [... use myArray[this.value].length here ...]; });}); | Oops, it's $('#mySelect').attr('size', value) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2755/"
]
} |
65,209 | I was recently asked to come up with a script that will allow the end user to upload a PSD (Photoshop) file, and split it up and create images from each of the layers. I would love to stay with PHP for this, but I am open to Python or Perl as well. Any ideas would be greatly appreciated. | Oops, it's $('#mySelect').attr('size', value) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9176/"
]
} |
65,250 | Convert a .doc or .pdf to an image and display a thumbnail in Ruby? Does anyone know how to generate document thumbnails in Ruby (or C, python...) | A simple RMagick example to convert a PDF to a PNG would be: require 'RMagick' pdf = Magick::ImageList.new("doc.pdf") thumb = pdf.scale(300, 300) thumb.write "doc.png" To convert a MS Word document, it won't be as easy. Your best option may be to first convert it to a PDF before generating the thumbnail. Your options for generating the PDF depend heavily on the OS you're running on. One might be to use OpenOffice and the Python Open Document Converter. There are also online conversion services you could try, including http://Zamzar.com . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
65,266 | Each time a python file is imported that contains a large quantity of static regular expressions, cpu cycles are spent compiling the strings into their representative state machines in memory.
a = re.compile("a.*b")
b = re.compile("c.*d")
...
Question: Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import? Pickling the object simply does the following, causing compilation to happen anyway:
>>> import pickle
>>> import re
>>> x = re.compile(".*")
>>> pickle.dumps(x)
"cre\n_compile\np0\n(S'.*'\np1\nI0\ntp2\nRp3\n."
And re objects are unmarshallable:
>>> import marshal
>>> import re
>>> x = re.compile(".*")
>>> marshal.dumps(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: unmarshallable object | Is it possible to store these regular expressions in a cache on disk in a pre-compiled manner to avoid having to execute the regex compilations on each import? Not easily. You'd have to write a custom serializer that hooks into the C sre implementation of the Python regex engine. Any performance benefits would be vastly outweighed by the time and effort required. First, have you actually profiled the code? I doubt that compiling regexes is a significant part of the application's run-time. Remember that they are only compiled the first time the module is imported in the current execution -- thereafter, the module and its attributes are cached in memory. If you have a program that basically spawns once, compiles a bunch of regexes, and then exits, you could try re-engineering it to perform multiple tests in one invocation. Then you could re-use the regexes, as above. Finally, you could compile the regexes into C-based state machines and then link them in with an extension module. While this would likely be more difficult to maintain, it would eliminate regex compilation entirely from your application. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65266",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9241/"
]
} |
65,268 | I have a sample held in a buffer from DirectX. It's a sample of a note played and captured from an instrument. How do I analyse the frequency of the sample (like a guitar tuner does)? I believe FFTs are involved, but I have no pointers to HOWTOs. | The FFT can help you figure out where the frequency is, but it can't tell you exactly what the frequency is. Each point in the FFT is a "bin" of frequencies, so if there's a peak in your FFT, all you know is that the frequency you want is somewhere within that bin, or range of frequencies. If you want it really accurate, you need a long FFT with a high resolution and lots of bins (= lots of memory and lots of computation). You can also guess the true peak from a low-resolution FFT using quadratic interpolation on the log-scaled spectrum, which works surprisingly well. If computational cost is most important, you can try to get the signal into a form in which you can count zero crossings, and then the more you count, the more accurate your measurement. None of these will work if the fundamental is missing , though. :) I've outlined a few different algorithms here , and the interpolated FFT is usually the most accurate (though this only works when the fundamental is the strongest harmonic - otherwise you need to be smarter about finding it), with zero-crossings a close second (though this only works for waveforms with one crossing per cycle ). Neither of these conditions is typical. Keep in mind that the partials above the fundamental frequency are not perfect harmonics in many instruments, like piano or guitar. Each partial is actually a little bit out of tune , or inharmonic . So the higher-frequency peaks in the FFT will not be exactly on the integer multiples of the fundamental, and the wave shape will change slightly from one cycle to the next, which throws off autocorrelation. 
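A minimal NumPy sketch of that quadratic (parabolic) peak interpolation — the 441 Hz test tone and all names here are illustrative, not from the original post:

```python
import numpy as np

def estimate_freq(samples, fs):
    """Locate the strongest FFT bin, then refine it by fitting a parabola
    through the log-magnitudes of the bin and its two neighbours."""
    windowed = samples * np.hanning(len(samples))
    mags = np.abs(np.fft.rfft(windowed))
    k = int(np.argmax(mags[1:])) + 1      # skip the DC bin
    a, b, c = np.log(mags[k - 1:k + 2])
    p = 0.5 * (a - c) / (a - 2 * b + c)   # peak offset from bin k, in bins
    return (k + p) * fs / len(samples)

fs, n = 44100, 4096
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 441.0 * t)      # hypothetical captured note
print("%.1f Hz" % estimate_freq(tone, fs))
```

For a real instrument note you would run this on a windowed frame of the captured buffer, and — as noted above — make sure the peak you interpolate is actually the fundamental.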
To get a really accurate frequency reading, I'd say to use the autocorrelation to guess the fundamental, then find the true peak using quadratic interpolation. (You can do the autocorrelation in the frequency domain to save CPU cycles.) There are a lot of gotchas, and the right method to use really depends on your application. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65268",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
65,310 | I am using Apache Axis to connect my Java app to a web server. I used wsdl2java to create the stubs for me, but when I try to use the stubs, I get the following exception: org.apache.axis.ConfigurationException: No service named <web service name> is available any idea? | According to the documentation linked to by @arnonym, this exception is somewhat misleading. In the first attempt to find the service a ConfigurationException is thrown and caught. It is logged at DEBUG level by the ConfigurationException class. Then another attempt is made using a different method to find the service that may then succeed. The workaround for this is to just change the log level on the ConfigurationException class to INFO in your log4j.properties: log4j.logger.org.apache.axis.ConfigurationException = INFO | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65310",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2328/"
]
} |
65,351 | I have a generic method defined like this:
public void MyMethod<T>(T myArgument)
The first thing I want to do is check if the value of myArgument is the default value for that type, something like this:
if (myArgument == default(T))
But this doesn't compile because I haven't guaranteed that T will implement the == operator. So I switched the code to this:
if (myArgument.Equals(default(T)))
Now this compiles, but will fail if myArgument is null, which is part of what I'm testing for. I can add an explicit null check like this:
if (myArgument == null || myArgument.Equals(default(T)))
Now this feels redundant to me. ReSharper is even suggesting that I change the myArgument == null part into myArgument == default(T) which is where I started. Is there a better way to solve this problem? I need to support both reference types and value types. | To avoid boxing, the best way to compare generics for equality is with EqualityComparer<T>.Default. This respects IEquatable<T> (without boxing) as well as object.Equals, and handles all the Nullable<T> "lifted" nuances. Hence:
if (EqualityComparer<T>.Default.Equals(obj, default(T))) { return obj; }
This will match:
- null for classes
- null (empty) for Nullable<T>
- zero/false/etc for other structs | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/65351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8739/"
]
} |
65,400 | How do I add an instance method to a class using a metaclass (yes I do need to use a metaclass)? The following kind of works, but the func_name will still be "foo":
def bar(self):
    print "bar"

class MetaFoo(type):
    def __new__(cls, name, bases, dict):
        dict["foobar"] = bar
        return type(name, bases, dict)

class Foo(object):
    __metaclass__ = MetaFoo

>>> f = Foo()
>>> f.foobar()
bar
>>> f.foobar.func_name
'bar'
My problem is that some library code actually uses the func_name and later fails to find the 'bar' method of the Foo instance. I could do:
dict["foobar"] = types.FunctionType(bar.func_code, {}, "foobar")
There is also types.MethodType, but I need an instance that doesn't exist yet to use that. Am I missing something here? | Try dynamically extending the bases; that way you can take advantage of the MRO and the methods are actual methods:
class Parent(object):
    def bar(self):
        print "bar"

class MetaFoo(type):
    def __new__(cls, name, bases, dict):
        return type(name, (Parent,) + bases, dict)

class Foo(object):
    __metaclass__ = MetaFoo

if __name__ == "__main__":
    f = Foo()
    f.bar()
    print f.bar.func_name | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5179/"
]
} |
65,427 | As I understand it, anything created with an alloc , new , or copy needs to be manually released. For example: int main(void) { NSString *string; string = [[NSString alloc] init]; /* use the string */ [string release];} My question, though, is wouldn't this be just as valid?: int main(void) { NSAutoreleasePool *pool; pool = [[NSAutoreleasePool alloc] init]; NSString *string; string = [[[NSString alloc] init] autorelease]; /* use the string */ [pool drain];} | Yes, your second code snippit is perfectly valid. Every time -autorelease is sent to an object, it is added to the inner-most autorelease pool. When the pool is drained, it simply sends -release to all the objects in the pool. Autorelease pools are simply a convenience that allows you to defer sending -release until "later". That "later" can happen in several places, but the most common in Cocoa GUI apps is at the end of the current run loop cycle. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/65427",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7979/"
]
} |
65,452 | This morning I ran into an issue with returning back a text string as result from a Web Service call. the Error I was getting is below ************** Exception Text **************System.ServiceModel.CommunicationException: Error in deserializing body of reply message for operation 'GetFilingTreeXML'. ---> System.InvalidOperationException: There is an error in XML document (1, 9201). ---> System.Xml.XmlException: The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader. Line 1, position 9201.at System.Xml.XmlExceptionHelper.ThrowXmlException(XmlDictionaryReader reader, String res, String arg1, String arg2, String arg3)at System.Xml.XmlExceptionHelper.ThrowMaxStringContentLengthExceeded(XmlDictionaryReader reader, Int32 maxStringContentLength)at System.Xml.XmlDictionaryReader.ReadString(Int32 maxStringContentLength)at System.Xml.XmlDictionaryReader.ReadString()at System.Xml.XmlBaseReader.ReadElementString()at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderImageServerClientInterfaceSoap.Read10_GetFilingTreeXMLResponse()at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer9.Deserialize(XmlSerializationReader reader)at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)--- End of inner exception stack trace ---at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)at System.ServiceModel.Dispatcher.XmlSerializerOperationFormatter.DeserializeBody(XmlDictionaryReader reader, MessageVersion version, XmlSerializer serializer, MessagePartDescription returnPart, MessagePartDescriptionCollection bodyParts, 
Object[] parameters, Boolean isRequest)--- End of inner exception stack trace --- I did a search and the results are below: Search Results Most of those are WCF related but were enough to point me in the right direction. I will post answer as reply. | Try this blog post here . You can modify the MaxStringContentLength property in the Binding configuration. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1889/"
]
} |
65,458 | There are many SCM systems out there. Some open, some closed, some free, some quite expensive. Which one (please choose only one) would you use for a 3000+ developer organization with several sites (some behind a very slow link)? Explain why you chose the one you chose. (Give some reasons, not just "because".) | For such a huge installation, there are at least the following major requirements: data safety, maturity, robustness, scalability, price (a per-seat licence vs. open source always makes a huge difference regardless of the price per seat), and ease of administration. I would think that subversion would be just fine. There is support available (from Collabnet, Clearvision, Wandisco and others). You could ask them if subversion would be able to handle your task. Subversion has a very mature database backend - FSFS. It is absolutely rock solid, and since 1.5 it can handle really many revisions without performance degradation. The revisions are written in a file system, so the reliability of your subversion repository depends on the quality of your file system, OS and storage system. This is why I would recommend Solaris 10 with ZFS as the file system. ZFS has really great file system features for production systems, but above all it provides data integrity checksumming. So with this amount of source code in the subversion repository you won't have to worry about repository corruption because of a silent hard drive bit error or a controller or cable bit error. By now ZFS is mature enough that it can be safely used as a UFS (or whatever) replacement. I don't know about the hardware requirements. Maybe Collabnet could give you advice. 
But a really good start (which could be used as NFS storage or backup storage if it turns out to be too slow - you will definitely be able to make good use of it anyway) would be a 2nd generation thumper, i.e. a Sun Fire X4540 Server. You can have, all within a nice 4U rack server for 80.000$ (list price - this will be likely negotiable): 48 TB disk space, 8 AMD Opteron CPU cores, 64 GB RAM, Solaris 10 preinstalled, and 3 years of Platinum software and hardware support from Sun. So the mere hardware and support price for this server would be 25$ per seat of your 3000 developers. To assure really great data safety, you could partition the 48 hard drives as follows: 3 drives for the operating system (3-way Raid-1 mirror), 3 hot spares (not used, on stand-by in the case of a failure of the other drives), and a ZFS pool of 14 3-way Raid-1 mirrors (14*3 = 42 drives) for the subversion repository. If you would like to fill the 14 TB ZFS raid space only to 80%, this would be approximately 10 tebibytes of real usable disk space for the repository, i.e. an average of 3 GB per developer. With this configuration: 
The developers get all checkouts from this slave. The checkins are forwarded transparently from the slave to the master. In this way, the master is always current. The vast majority of traffic is checkouts: Every developer gets every checkin any developer commits. So the checkout traffic should be 99.97% of the traffic with 3000 developers. If you have a local team with 50 developers, the checkout traffic would be reduced by 98%. The checkins shouldn't be a problem: how fast can anybody type new code? Obviously, for a small team you won't buy a thumper. You just need a box with enough hard drive space (i.e. if you intend to hold the hole repository 10TB). It can be a raid5 configuration as data loss isn't the end of the company. You won't need Solaris either. You could put linux on it if the local people would be more comfortable with it. Again: ask a consultant like collabnet if this is really a sound concept. With this many seats it shouldn't be a problem to pay for a one time consultation. They can set up the whole thing. Sun delivers the box with solaris pre-installed. You have sun support. So you won't need a solaris guru on site, as the configuration shouldn't change for the next years. This configuration means that the slow line from the team to the headquarter won't be clogged with redundant checkout data and the members of the local team can get their checkouts quickly it would dramatically reduce the load at the thumper - this means with that configuration you shouldn't have to worry at all whether the thumper is capable of handling the load it reduces the bandwidth costs Edit (after the release of the M3000): A much more extreme hardware configuration targeted even more towards insane data integrity would be the combination of a M3000 server and a J4500 array: the J4500 Storage Array is practically a thumper, but without the CPU-power and external storage interfaces which enables it to be connected to a server. 
The M3000 Server is a Sparc64 server at a midrange price with high end RAS features. Most data paths and even cpu registers are checksummed, etc. The RAM is not only ECC protected but has the equivalent of the IBM Chipkill feature: It's raid on memory: not only single bit errors are detected and corrected, but entire memory chips may fail completely while no data is lost - similar to failing hard drives in raid arrays. As the ZFS file system does CPU-based error checksumming on the data before it comes from, or after it goes to the CPU, the quality of the storage controller and cabling of the J4500 is not important. What matters are the bit error prevention and detection capabilities of the M3000 CPU, Memory, memory controller, etc. Unfortuntely, the high quality memory sticks sun is using to improve the quality even more are that much expensive that the combination of the four core (eight threads) 4GB Ram M3000 + 48 TB J4500 would be roughly equivalent to the thumper, but if you would like to increase the server memory from 4GB to 8, 16 or 32 GB for in-memory caching purposes, the price goes up steeply. But maybe a 4GB configuration would even be enough if the master-slave configuration for distributed teams is used. This hardware combination would be worth a thought if the source code and data integrity of this 3000 developer repository is valued extremely highly by the management. Then it would also make sense to add two or more thumpers as a rotating backup solution (not neccessary to protect against hardware failure, but to protect against administrator mistakes or for off-site backups in case of physical desasters). As this would be a Sparc and not a x86 solution, there are certified Collabnet Subversion binaries for this platform available freely. One of the advantages of subversion is also the excellent documentation: There is an excellent book from O'Reilly ( Version Control with Subversion ) also available for free as a PDF or HTML version. 
To sum it up: With the combination Subversion 1.6 + Solaris 10 + 3-way-raid-1 redundant and checksummed ZFS + thumper + master-slave server replication for local teams + sun support + collabnet/clearvision/orcaware/Karl Vogel consultation + excellent and free subversion manual for all developers you should have a solution which provides Extremely High Data Safety (very important for so much source code - you do not want to corrupt your repository, bit errors do happen, hard drives do fail!) You have one master data repository which holds all your versions/revisions really reliably: The main feature of source control systems. Maturity - Subversion has been used by many, many companies and open source projects. Scalability - With the master-slave replication you should not have a load problem on the master server: The load of the checkins are negligible. The checkouts are handled by the slaves. No High Latency for local teams behind slow connections (because of the replication) A low price: subversion is free (no per seat fee), excellent free documentation, over a three year period only 8$ per seat per year hardware and support costs for the master server, cheap linux boxes for slaves, one-time consultancy from collabnet et. al., low bandwidth costs because of master-slave-replication. Ease of administration: Essentially no administration of the master server: The subversion consultant can deploy everything. Sun staff will swap faulty hard drives, etc. Slaves can be linux boxes or whatever administration skills are available at the local sites. Excellent subversion documentation. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9362/"
]
} |
65,475 | What characters are valid in a Java class name? What other rules govern Java class names (for instance, Java class names cannot begin with a number)? | You can have almost any character, including most Unicode characters! The exact definition is in the Java Language Specification under section 3.8: Identifiers . An identifier is an unlimited-length sequence of Java letters and Java digits , the first of which must be a Java letter . ... Letters and digits may be drawn from the entire Unicode character set, ... This allows programmers to use identifiers in their programs that are written in their native languages. An identifier cannot have the same spelling (Unicode character sequence) as a keyword (§3.9), boolean literal (§3.10.3), or the null literal (§3.10.7), or a compile-time error occurs. However, see this question for whether or not you should do that. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/65475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8720/"
]
} |
65,491 | When working with large and/or many Javascript and CSS files, what's the best way to reduce the file sizes? | In addition to using server side compression, using intelligent coding is the best way to keep bandwidth costs low. You can always use tools like Dean Edward's Javascript Packer , but for CSS, take the time to learn CSS Shorthand . E.g. use: background: #fff url(image.gif) no-repeat top left; ...instead of: background-color: #fff;background-image: url(image.gif);background-repeat: no-repeat;background-position: top left; Also, use the cascading nature of CSS. For example, if you know that your site will use one font-family, define that for all elements that are in the body tag like this: body{font-family:arial;} One other thing that can help is including your CSS and JavaScript as files instead of inline or at the head of each page. That way your server only has to serve them once to the browser after that browser will go from cache. Including Javascript <script type="text/javascript" src="/scripts/loginChecker.js"></script> Including CSS <link rel="stylesheet" href="/css/myStyle.css" type="text/css" media="All" /> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9021/"
]
} |
65,512 | I've heard that SELECT * is generally bad practice to use when writing SQL commands because it is more efficient to SELECT columns you specifically need. If I need to SELECT every column in a table, should I use SELECT * FROM TABLE or SELECT column1, colum2, column3, etc. FROM TABLE Does the efficiency really matter in this case? I'd think SELECT * would be more optimal internally if you really need all of the data, but I'm saying this with no real understanding of database. I'm curious to know what the best practice is in this case. UPDATE: I probably should specify that the only situation where I would really want to do a SELECT * is when I'm selecting data from one table where I know all columns will always need to be retrieved, even when new columns are added. Given the responses I've seen however, this still seems like a bad idea and SELECT * should never be used for a lot more technical reasons that I ever though about. | One reason that selecting specific columns is better is that it raises the probability that SQL Server can access the data from indexes rather than querying the table data. Here's a post I wrote about it: The real reason select queries are bad index coverage It's also less fragile to change, since any code that consumes the data will be getting the same data structure regardless of changes you make to the table schema in the future. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/65512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392/"
]
} |
65,515 | What techniques or tools are recommended for finding broken links on a website? I have access to the logfiles, so could conceivably parse these looking for 404 errors, but would like something automated which will follow (or attempt to follow) all links on a site. | For Chrome Extension there is hexometer See LinkChecker for Firefox. For Mac OS there is a tool Integrity which can check URLs for broken links. For Windows there is Xenu's Link Sleuth . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65515",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2084/"
]
} |
65,536 | How would I get the here and and here to be on the right, on the same lines as the lorem ipsums? See the following: Lorem Ipsum etc........here blah....................... blah blah.................. blah....................... lorem ipsums.......and here | <div style="position: relative; width: 250px;"> <div style="position: absolute; top: 0; right: 0; width: 100px; text-align:right;"> here </div> <div style="position: absolute; bottom: 0; right: 0; width: 100px; text-align:right;"> and here </div> Lorem Ipsum etc <br /> blah <br /> blah blah <br /> blah <br /> lorem ipsums</div> Gets you pretty close, although you may need to tweak the "top" and "bottom" values. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65536",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9328/"
]
} |
65,585 | I want to delete foo() if foo() isn't called from anywhere. | Gendarme will detect private methods with no upstream callers. It is available cross platform, and the latest version handles " AvoidUncalledPrivateCodeRule ". FxCop will detect public/protected methods with no upstream callers. However, FxCop does not detect all methods without upstream callers, as it is meant to check in the case that your code is part of a Library, so public members are left out. You can use NDepend to do a search for public members with no upstream callers, which I detail here in this other StackOverflow answer . (edit: added information about Gendarme which actually does what the questioner asked) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9328/"
]
} |
65,607 | I've been attempting to write a Lisp macro that would perform the equivalent of ++ in other programming languages for semantic reasons. I've attempted to do this in several different ways, but none of them seem to work, and all are accepted by the interpreter, so I don't know if I have the correct syntax or not. My idea of how this would be defined would be (defmacro ++ (variable) (incf variable)) but this gives me a SIMPLE-TYPE-ERROR when trying to use it. What would make it work? | Remember that a macro returns an expression to be evaluated. In order to do this, you have to backquote: (defmacro ++ (variable) `(incf ,variable)) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1256/"
]
} |
65,651 | I'm a longtime Java programmer working on a PHP project, and I'm trying to get PHPUnit up and working. When unit testing in Java, it's common to put test case classes and regular classes into separate directories, like this -
/src
  MyClass.java
/test
  MyClassTest.java
and so on. When unit testing with PHPUnit, is it common to follow the same directory structure, or is there a better way to lay out test classes? So far, the only way I can get the "include("MyClass.php")" statement to work correctly is to include the test class in the same directory, but I don't want to include the test classes when I push to production. | I think it's a good idea to keep your files separate. I normally use a folder structure like this:
/myapp/src/ <- my classes
/myapp/tests/ <- my tests for the classes
/myapp/public/ <- document root
In your case, for including the class in your test file, why not just pass the whole path to the include method?
include('/path/to/myapp/src/MyClass.php');
or
include('../src/MyClass.php'); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8770/"
]
} |
65,668 | Someone told me it's more efficient to use StringBuffer to concatenate strings in Java than to use the + operator for Strings. What happens under the hood when you do that? What does StringBuffer do differently? | It's better to use StringBuilder (it's an unsynchronized version; when do you build strings in parallel?) these days, in almost every case, but here's what happens: When you use + with two strings, it compiles code like this:
String third = first + second;
To something like this:
StringBuilder builder = new StringBuilder( first );
builder.append( second );
third = builder.toString();
Therefore for just little examples, it usually doesn't make a difference. But when you're building a complex string, you've often got a lot more to deal with than this; for example, you might be using many different appending statements, or a loop like this:
for( String str : strings ) { out += str; }
In this case, a new StringBuilder instance, and a new String (the new value of out - Strings are immutable) is required in each iteration. This is very wasteful. Replacing this with a single StringBuilder means you can just produce a single String and not fill up the heap with Strings you don't care about. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65668",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
65,673 | I've been looking at ways to implement gmail-like messaging inside a browser, and arrived at the Comet concept. However, I haven't been able to find a good .NET implementation that allows me to do this within IIS (our application is written in ASP.NET 2.0). The solutions I found (or could think of, for that matter) require leaving a running thread per user - so that it could return a response to him once he gets a message. This doesn't scale at all, of course. So my question is - do you know of an ASP.NET implementation for Comet that works in a different way? Is that even possible with IIS? | Comet is challenging to scale with IIS because of comet's persistent connectivity, but there is a team looking at Comet scenarios now. Also look at Aaron Lerch's blog as I believe he's done some early Comet work in ASP.NET. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3389/"
]
} |
65,718 | Maybe this is a silly question, but I've always assumed each number delineated by a period represented a single component of the software. If that's true, do they ever represent something different? I'd like to start assigning versions to the different builds of my software, but I'm not really sure how it should be structured. My software has five distinct components. | In version 1.9.0.1 : 1 : Major revision (new UI, lots of new features, conceptual change, etc.) 9 : Minor revision (maybe a change to a search box, 1 feature added, collection of bug fixes) 0 : Bug fix release 1 : Build number (if used)—that's why you see the .NET framework using something like 2.0.4.2709 You won't find a lot of apps going down to four levels, 3 is usually sufficient. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/65718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191808/"
]
} |
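If you ever need to compare two such version strings in code, note that the components must be compared numerically, not as plain strings — a quick illustrative sketch (the function name is mine):

```python
def parse_version(version: str) -> tuple:
    """Split a dotted version like '1.9.0.1' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# Tuples compare component-wise, matching the major/minor/bugfix/build
# ordering described above -- and avoiding the string-comparison trap
# where "1.10.0.0" would sort before "1.9.0.1".
print(parse_version("1.9.0.1") < parse_version("1.10.0.0"))  # True
```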
65,800 | There are many different flavored HTML WYSIWYG editors from javascript to ASP.Net web controls, but all too often the features are the same. Does anyone have a favorite HTML editor they like to use in projects? Why? | I'm partial to TinyMCE WYSIWYG editor due to the following reasons: Javascript - so it is broadly usable regardless of the platform I'm working in. Easy to use - just a couple lines of code and a textarea and the control is up and running. Easily themed - so I can quickly make it look like the site in which it is being used Most importantly - easily customized to show/hide particular buttons depending on my application needs | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/65800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1115144/"
]
} |
65,820 | I worked on an embedded system this summer written in straight C. It was an existing project that the company I work for had taken over. I have become quite accustomed to writing unit tests in Java using JUnit but was at a loss as to the best way to write unit tests for existing code (which needed refactoring) as well as new code added to the system. Are there any projects out there that make unit testing plain C code as easy as unit testing Java code with JUnit? Any insight that would apply specifically to embedded development (cross-compiling to arm-linux platform) would be greatly appreciated. | One unit testing framework in C is Check ; a list of unit testing frameworks in C can be found here and is reproduced below. Depending on how many standard library functions your runtime has, you may or not be able to use one of those. AceUnit AceUnit (Advanced C and Embedded Unit) bills itself as a comfortable C code unit test framework. It tries to mimick JUnit 4.x and includes reflection-like capabilities. AceUnit can be used in resource constraint environments, e.g. embedded software development, and importantly it runs fine in environments where you cannot include a single standard header file and cannot invoke a single standard C function from the ANSI / ISO C libraries. It also has a Windows port. It does not use forks to trap signals, although the authors have expressed interest in adding such a feature. See the AceUnit homepage . GNU Autounit Much along the same lines as Check, including forking to run unit tests in a separate address space (in fact, the original author of Check borrowed the idea from GNU Autounit). GNU Autounit uses GLib extensively, which means that linking and such need special options, but this may not be a big problem to you, especially if you are already using GTK or GLib. See the GNU Autounit homepage . cUnit Also uses GLib, but does not fork to protect the address space of unit tests. 
CUnit Standard C, with plans for a Win32 GUI implementation. Does not currently fork or otherwise protect the address space of unit tests. In early development. See the CUnit homepage . CuTest A simple framework with just one .c and one .h file that you drop into your source tree. See the CuTest homepage . CppUnit The premier unit testing framework for C++; you can also use it to test C code. It is stable, actively developed, and has a GUI interface. The primary reasons not to use CppUnit for C are first that it is quite big, and second you have to write your tests in C++, which means you need a C++ compiler. If these don’t sound like concerns, it is definitely worth considering, along with other C++ unit testing frameworks. See the CppUnit homepage . embUnit embUnit (Embedded Unit) is another unit test framework for embedded systems. This one appears to be superseded by AceUnit. Embedded Unit homepage . MinUnit A minimal set of macros and that’s it! The point is to show how easy it is to unit test your code. See the MinUnit homepage . CUnit for Mr. Ando A CUnit implementation that is fairly new, and apparently still in early development. See the CUnit for Mr. Ando homepage . This list was last updated in March 2008. More frameworks: CMocka CMocka is a test framework for C with support for mock objects. It's easy to use and setup. See the CMocka homepage . Criterion Criterion is a cross-platform C unit testing framework supporting automatic test registration, parameterized tests, theories, and that can output to multiple formats, including TAP and JUnit XML. Each test is run in its own process, so signals and crashes can be reported or tested if needed. See the Criterion homepage for more information. HWUT HWUT is a general Unit Test tool with great support for C. It can help to create Makefiles, generate massive test cases coded in minimal 'iteration tables', walk along state machines, generate C-stubs and more. 
The general approach is pretty unique: Verdicts are based on 'good stdout/bad stdout'. The comparison function, though, is flexible. Thus, any type of script may be used for checking. It may be applied to any language that can produce standard output. See the HWUT homepage . CGreen A modern, portable, cross-language unit testing and mocking framework for C and C++. It offers an optional BDD notation, a mocking library, the ability to run it in a single process (to make debugging easier). A test runner which discover automatically the test functions is available. But you can create your own programmatically. All those features (and more) are explained in the CGreen manual . Wikipedia gives a detailed list of C unit testing frameworks under List of unit testing frameworks: C | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/65820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7049/"
]
} |
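To make the MinUnit entry concrete, here is a sketch in its spirit: the whole "framework" is two macros and a counter (the `square` function and the test names are mine, purely for illustration):

```c
/* MinUnit-style macros: a failing assertion returns its message,
   a passing one falls through to the next check. */
#define mu_assert(message, test) do { if (!(test)) return message; } while (0)
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

int tests_run = 0;

static int square(int x) { return x * x; }

static char *test_square_positive(void) {
    mu_assert("square(3) should be 9", square(3) == 9);
    return 0;
}

static char *test_square_negative(void) {
    mu_assert("square(-2) should be 4", square(-2) == 4);
    return 0;
}

/* Returns the first failure message, or 0 when every test passes. */
static char *all_tests(void) {
    mu_run_test(test_square_positive);
    mu_run_test(test_square_negative);
    return 0;
}
```

A real runner's main would print the returned message and tests_run; the point is that the lines above are the entire framework.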
65,849 | I'm writing a web service, and I want to return the data as XHTML. Because it's data, not markup, I want to keep it very clean - no extra <div> s or <span> s. However, as a convenience to developers, I'd also like to make the returned data reasonably readable in a browser. To do so, I'm thinking a good way to go about it would be to use CSS. The thing I specifically want to do is to insert linebreaks at certain places. I'm aware of display: block , but it doesn't really work in the situation I'm trying to handle now - a form with <input> fields. Something like this: <form> Thingy 1: <input class="a" type="text" name="one" /> Thingy 2: <input class="a" type="text" name="two" /> Thingy 3: <input class="b" type="checkbox" name="three" /> Thingy 4: <input class="b" type="checkbox" name="four" /></form> I'd like it to render so that each label displays on the same line as the corresponding input field. I've tried this: input.a:after { content: "\a" } But that didn't seem to do anything. | It'd be best to wrap all of your elements in label elements, then apply CSS to the labels. The :before and :after pseudo-elements are not completely supported in a consistent way. Label tags have a lot of advantages including increased accessibility (on multiple levels) and more. <label> Thingy one: <input type="text" name="one"></label> then use CSS on your label elements... label {display:block;clear:both;} | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/65849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
65,865 | I get this error: Can't locate Foo.pm in @INC Is there an easier way to install it than downloading, untarring, making, etc? | On Unix : usually you start cpan in your shell: $ cpan and type install Chocolate::Belgian or in short form: cpan Chocolate::Belgian On Windows : If you're using ActivePerl on Windows, the PPM (Perl Package Manager) has much of the same functionality as CPAN.pm. Example: $ ppm ppm> search net-smtp ppm> install Net-SMTP-Multipart see How do I install Perl modules? in the CPAN FAQ Many distributions ship a lot of perl modules as packages. Debian/Ubuntu: apt-cache search 'perl$' Arch Linux: pacman -Ss '^perl-' Gentoo: category dev-perl You should always prefer them as you benefit from automatic (security) updates and the ease of removal . This can be pretty tricky with the cpan tool itself. For Gentoo there's a nice tool called g-cpan which builds/installs the module from CPAN and creates a Gentoo package ( ebuild ) for you. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/65865",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4234/"
]
} |
65,879 | Is it recommended that I use an initialization vector to encrypt/decrypt my data? Will it make things more secure? Is it one of those things that need to be evaluated on a case by case basis? To put this into actual context, the Win32 Cryptography function, CryptSetKeyParam allows for the setting of an initialization vector on a key prior to encrypting/decrypting. Other API's also allow for this. What is generally recommended and why? | An IV is essential when the same key might ever be used to encrypt more than one message. The reason is because, under most encryption modes, two messages encrypted with the same key can be analyzed together. In a simple stream cipher, for instance, XORing two ciphertexts encrypted with the same key results in the XOR of the two messages, from which the plaintext can be easily extracted using traditional cryptanalysis techniques. A weak IV is part of what made WEP breakable. An IV basically mixes some unique, non-secret data into the key to prevent the same key ever being used twice. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/65879",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4916/"
]
} |
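The key-reuse failure described above is easy to demonstrate with the XOR structure alone — in this toy sketch a fixed byte pattern stands in for the keystream a real stream cipher would derive from the key (no real cryptography involved, just the algebra of the attack):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Same key, no IV => the same keystream for both messages.
keystream = bytes([0x5A] * 64)

m1 = b"attack at dawn, hold position"
m2 = b"attack at dusk, then retreat!"

c1 = xor_bytes(m1, keystream)
c2 = xor_bytes(m2, keystream)

# An eavesdropper XORs the two ciphertexts: the keystream cancels out
# entirely, leaving the XOR of the two plaintexts -- no key required.
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(m1, m2)
assert leaked[:10] == bytes(10)  # identical prefixes leak as zero bytes
```

A unique IV per message changes the keystream each time, so this cancellation never lines up.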
66,009 | I know you could make a helper pretty easily given the data. So, if possible, please only submit answers that also include getting the data. | We are using an action filter for this. ... public override void OnActionExecuting(ActionExecutingContext filterContext) { var controller = (Controller) filterContext.Controller; Breadcrumb[] breadcrumbs = _breadcrumbManager.PushBreadcrumb(_breadcrumbLinkText); controller.ViewData.Add(breadcrumbs); } before you mention it, I too have a distaste for service location in the filter attributes - but we are left with few options. IBreadcrumbManager looks like this: public interface IBreadcrumbManager{ Breadcrumb[] PushBreadcrumb(string linkText);} The implementation puts Breadcrumb objects into the Session. The Url is HttpContext.Current.Request.RawUrl | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1946/"
]
} |
66,066 | I've seen examples like this:
public class MaxSeconds {
    public static final int MAX_SECONDS = 25;
}
and supposed that I could have a Constants class to wrap constants in, declaring them static final. I know practically no Java at all and am wondering if this is the best way to create constants. | That is perfectly acceptable, probably even the standard. (public/private) static final TYPE NAME = VALUE; where TYPE is the type, NAME is the name in all caps with underscores for spaces, and VALUE is the constant value; I highly recommend NOT putting your constants in their own classes or interfaces. As a side note: Variables that are declared final and are mutable can still be changed; however, the variable can never point at a different object. For example:
public static final Point ORIGIN = new Point(0,0);
public static void main(String[] args){
    ORIGIN.x = 3;
}
That is legal and ORIGIN would then be a point at (3, 0). | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/66066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1797/"
]
} |
66,385 | What is a recommended architecture for providing storage for a dynamic logical database schema? To clarify: Where a system is required to provide storage for a model whose schema may be extended or altered by its users once in production, what are some good technologies, database models or storage engines that will allow this? A few possibilities to illustrate: Creating/altering database objects via dynamically generated DML Creating tables with large numbers of sparse physical columns and using only those required for the 'overlaid' logical schema Creating a 'long, narrow' table that stores dynamic column values as rows that then need to be pivoted to create a 'short, wide' rowset containing all the values for a specific entity Using a BigTable/SimpleDB PropertyBag type system Any answers based on real world experience would be greatly appreciated | What you are proposing is not new. Plenty of people have tried it... most have found that they chase "infinite" flexibility and instead end up with much, much less than that. It's the "roach motel" of database designs -- data goes in, but it's almost impossible to get it out. Try and conceptualize writing the code for ANY sort of constraint and you'll see what I mean. The end result typically is a system that is MUCH more difficult to debug, maintain, and full of data consistency problems. This is not always the case, but more often than not, that is how it ends up. Mostly because the programmer(s) don't see this train wreck coming and fail to defensively code against it. Also, often ends up the case that the "infinite" flexibility really isn't that necessary; it's a very bad "smell" when the dev team gets a spec that says "Gosh I have no clue what sort of data they are going to put here, so let 'em put WHATEVER"... 
and the end users are just fine having pre-defined attribute types that they can use (code up a generic phone #, and let them create any # of them -- this is trivial in a nicely normalized system and maintains flexibility and integrity!) If you have a very good development team and are intimately aware of the problems you'll have to overcome with this design, you can successfully code up a well designed, not terribly buggy system. Most of the time. Why start out with the odds stacked so much against you, though? Don't believe me? Google "One True Lookup Table" or "single table design". Some good results: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:10678084117056 http://thedailywtf.com/Comments/Tom_Kyte_on_The_Ultimate_Extensibility.aspx?pg=3 http://www.dbazine.com/ofinterest/oi-articles/celko22 http://thedailywtf.com/Comments/The_Inner-Platform_Effect.aspx?pg=2 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6199/"
]
} |
66,402 | I need to calculate Math.exp() from java very frequently, is it possible to get a native version to run faster than java 's Math.exp() ?? I tried just jni + C, but it's slower than just plain java . | +1 to writing your own exp() implementation. That is, if this is really a bottle-neck in your application. If you can deal with a little inaccuracy, there are a number of extremely efficient exponent estimation algorithms out there, some of them dating back centuries. As I understand it, Java's exp() implementation is fairly slow, even for algorithms which must return "exact" results. Oh, and don't be afraid to write that exp() implementation in pure-Java. JNI has a lot of overhead, and the JVM is able to optimize bytecode at runtime sometimes even beyond what C/C++ is able to achieve. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9774/"
]
} |
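One pure-Java trick along these lines is Schraudolph's exponential approximation, which assembles the IEEE-754 bit pattern of the result directly. This is a sketch, not a drop-in replacement — it carries a relative error of a few percent, which is acceptable for some workloads and useless for others:

```java
public class FastExp {

    // Schraudolph-style approximation: write x/ln(2), suitably scaled
    // and biased, straight into the exponent field of a double.
    // 1512775 ~= 2^20 / ln(2); 1072632447 ~= 1023 * 2^20 minus a
    // correction term that spreads the error evenly.
    public static double fastExp(double x) {
        long tmp = (long) (1512775 * x + 1072632447);
        return Double.longBitsToDouble(tmp << 32);
    }
}
```

Measure before adopting it — on a modern JIT, Math.exp may already be competitive for your workload.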
66,420 | When using Google Chrome, I want to debug some JavaScript code. How can I do that? | Try adding this to your source: debugger; It works in most, if not all browsers. Just place it somewhere in your code, and it will act like a breakpoint. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/66420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9587/"
]
} |
66,423 | I have a servlet that is used for many different actions, used in the Front Controller pattern . Does anyone know if it is possible to tell if the data posted back to it is enctype="multipart/form-data"? I can't read the request parameters until I decide this, so I can't dispatch the request to the proper controller. Any ideas? | Yes, the Content-type header in the user agent's request should include multipart/form-data as described in (at least) the HTML4 spec: http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4257/"
]
} |
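The check itself is a case-insensitive prefix match on the raw header value the servlet container hands you (the value normally carries a boundary parameter after the media type, so an equality test would fail) — a minimal sketch, with class and method names of my choosing:

```java
import java.util.Locale;

public class MultipartCheck {

    // HttpServletRequest.getContentType() returns this raw header value;
    // a multipart post looks like "multipart/form-data; boundary=----xyz".
    public static boolean isMultipart(String contentType) {
        return contentType != null
            && contentType.toLowerCase(Locale.ROOT).startsWith("multipart/form-data");
    }
}
```

In the front controller, this decides the dispatch before any request parameters are read.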
66,475 | I've got a multiline textBox that I would like to have a label on the form displaying the current line and column position of, as Visual Studio does. I know I can get the line # with GetLineFromCharIndex, but how can I get the column # on that line? (I really want the Cursor Position on that line, not 'column', per se) |
int line = textbox.GetLineFromCharIndex(textbox.SelectionStart);
int column = textbox.SelectionStart - textbox.GetFirstCharIndexFromLine(line);
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9857/"
]
} |
66,540 | I know that garbage collection is automated in Java. But I understood that if you call System.gc() in your code that the JVM may or may not decide to perform garbage collection at that point. How does this work precisely? On what basis/parameters exactly does the JVM decide to do (or not do) a GC when it sees System.gc() ? Are there any examples in which case it's a good idea to put this in your code? | In practice, it usually decides to do a garbage collection. The answer varies depending on lots of factors, like which JVM you're running on, which mode it's in, and which garbage collection algorithm it's using. I wouldn't depend on it in your code. If the JVM is about to throw an OutOfMemoryError, calling System.gc() won't stop it, because the garbage collector will attempt to free as much as it can before it goes to that extreme. The only time I've seen it used in practice is in IDEs where it's attached to a button that a user can click, but even there it's not terribly useful. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
66,606 | I'm trying to find ab - Apache HTTP server benchmarking tool for Ubuntu, I'm hoping there's a package I can install for it. I decided I need to do some simple load testing on my applications. | % sudo apt-get install apache2-utils The command-not-found package in Ubuntu provides some slick functionality where if you type a command that can't be resolved to an executable (or bash function or whatever) it will query your apt sources and find a package that contains the binary you tried to execute. So, in this case, I typed ab at the command prompt:
% ab
The program 'ab' is currently not installed. You can install it by typing:
sudo apt-get install apache2-utils
bash: ab: command not found | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/66606",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/339/"
]
} |
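Once installed, a typical smoke-test invocation looks like this — the URL and the request counts are placeholders to tune for your own application, and the trailing slash (or a full path) is required by ab:

```
# 100 requests total, 10 concurrent
ab -n 100 -c 10 http://localhost/
```

ab then reports requests per second, mean latency, and the percentage of requests served within given time thresholds.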
66,610 | For a particular project I have, no server side code is allowed. How can I create the web site in php (with includes, conditionals, etc) and then have that converted into a static html site that I can give to the client? Update: Thanks to everyone who suggested wget. That's what I used. I should have specified that I was on a PC, so I grabbed the windows version from here: http://gnuwin32.sourceforge.net/packages/wget.htm . | If you have a Linux system available to you use wget : wget -k -K -E -r -l 10 -p -N -F -nH http://website.com/ Options -k : convert links to relative -K : keep an original versions of files without the conversions made by wget -E : rename html files to .html (if they don’t already have an htm(l) extension) -r : recursive… of course we want to make a recursive copy -l 10 : the maximum level of recursion. if you have a really big website you may need to put a higher number, but 10 levels should be enough. -p : download all necessary files for each page (css, js, images) -N : Turn on time-stamping. -F : When input is read from a file, force it to be treated as an HTML file. -nH : By default, wget put files in a directory named after the site’s hostname. This will disabled creating of those hostname directories and put everything in the current directory. Source: Jean-Pascal Houde's weblog | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3741/"
]
} |
66,622 | I need to enumerate through a generic IList<> of objects. The contents of the list may change, as in being added or removed by other threads, and this will kill my enumeration with a "Collection was modified; enumeration operation may not execute." What is a good way of doing a threadsafe foreach on an IList<>? Preferably without cloning the entire list. It is not possible to clone the actual objects referenced by the list. | Cloning the list is the easiest and best way, because it ensures your list won't change out from under you. If the list is simply too large to clone, consider putting a lock around it that must be taken before reading/writing to it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66622",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3263/"
]
} |
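A sketch of the lock-plus-snapshot idea in C# — the field and method names are mine, and note that the same lock object must also guard every code path that mutates the list, or the snapshot guarantees nothing:

```csharp
private readonly object _sync = new object();
private readonly IList<MyItem> _items = new List<MyItem>();

public void ForEachSafe(Action<MyItem> action)
{
    List<MyItem> snapshot;
    lock (_sync)
    {
        // Copies the references only; the MyItem objects themselves
        // are not cloned.
        snapshot = new List<MyItem>(_items);
    }
    foreach (MyItem item in snapshot)
    {
        action(item);
    }
}
```

This copies only object references, so it sidesteps the "can't clone the actual objects" constraint while keeping the enumeration stable.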
66,643 | Is there a way to detect, from within the finally clause, that an exception is in the process of being thrown? See the example below:
try {
    // code that may or may not throw an exception
} finally {
    SomeCleanupFunctionThatThrows();
    // if currently executing an exception, exit the program,
    // otherwise just let the exception thrown by the function
    // above propagate
}
or is ignoring one of the exceptions the only thing you can do? In C++ it doesn't even let you ignore one of the exceptions and just calls terminate(). Most other languages use the same rules as Java. | Set a flag variable, then check for it in the finally clause, like so:
boolean exceptionThrown = true;
try {
    mightThrowAnException();
    exceptionThrown = false;
} finally {
    if (exceptionThrown) {
        // Whatever you want to do
    }
}
 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5963/"
]
} |
66,730 | I've created a python object, but I want to send signals on it. I made it inherit from gobject.GObject, but there doesn't seem to be any way to create a new signal on my object. | You can also define signals inside the class definition:
class MyGObjectClass(gobject.GObject):
    __gsignals__ = {
        "some-signal": (gobject.SIGNAL_RUN_FIRST, gobject.TYPE_NONE, (object, )),
    }
The contents of the tuple are the same as the three last arguments to gobject.signal_new . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8453/"
]
} |
66,750 | Here is a quick test program: public static void main( String[] args ){ Date date = Calendar.getInstance().getTime(); System.out.println("Months:"); printDate( "MMMM", "en", date ); printDate( "MMMM", "es", date ); printDate( "MMMM", "fr", date ); printDate( "MMMM", "de", date ); System.out.println("Days:"); printDate( "EEEE", "en", date ); printDate( "EEEE", "es", date ); printDate( "EEEE", "fr", date ); printDate( "EEEE", "de", date );}public static void printDate( String format, String locale, Date date ){ System.out.println( locale + ": " + (new SimpleDateFormat( format, new Locale( locale ) )).format( date ) );} The output is: Months:en: Septemberes: septiembrefr: septembrede: SeptemberDays:en: Mondayes: lunesfr: lundide: Montag How can I control the capitalization of the names. For some reason the Spanish and French always seem to return names that start with a lowercase letter. | Not all languages share english capitalization rules. I guess you'd need to alter the data used by the API, but your non-english clients might not appreciate it... about.com on french capitalization | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66750",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9661/"
]
} |
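If you need the capitalized form anyway, the usual fix is to upper-case the first letter yourself, passing the same locale so characters outside ASCII are handled correctly — a small sketch (the helper's name is mine):

```java
import java.util.Locale;

public class MonthNames {

    // Upper-cases only the first character, in the given locale.
    public static String capitalizeFirst(String s, Locale locale) {
        if (s == null || s.isEmpty()) {
            return s;
        }
        return s.substring(0, 1).toUpperCase(locale) + s.substring(1);
    }
}
```

Apply it to the output of SimpleDateFormat when display requirements trump the locale's own capitalization rules.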
66,770 | Whenever I run rspec tests for my Rails application it takes forever and a day of overhead before it actually starts running tests. Why is rspec so slow? Is there a way to speed up Rails' initial load or single out the part of my Rails app I need (e.g. ActiveRecord stuff only) so it doesn't load absolutely everything to run a few tests? | You should be able to to speed up your script/spec calls by running script/spec_server in a separate terminal window, then adding the additional -X parameter to your spec calls. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66770",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8344/"
]
} |
66,773 | How can I add a line break to the text area in an HTML page? I use VB.NET for server-side coding. | If it's not VB you can use 
 
 (ascii codes for cr,lf) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/66773",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/747/"
]
} |
66,800 | I've been using make and makefiles for many many years, and although the concept is sound, the implementation leaves something to be desired. Has anyone found any good alternatives to make that don't overcomplicate the problem? | Check out SCons . For example Doom 3 and Blender make use of it. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66800",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9593/"
]
} |
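For a sense of how little boilerplate SCons needs: a complete build description is a Python file named SConstruct, and this one-liner (assuming a hello.c sits next to it) is the whole thing. Note that Program is injected by the scons runtime, so the file is not standalone Python:

```
# SConstruct -- builds ./hello from hello.c; scons works out the rest
Program('hello.c')
```

Running `scons` in that directory compiles and links the program, with dependency tracking handled automatically.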
66,837 | Are CDATA tags ever necessary in script tags and if so when? In other words, when and where is this: <script type="text/javascript">//<![CDATA[...code...//]]></script> preferable to this: <script type="text/javascript">...code...</script> | A CDATA section is required if you need your document to parse as XML (e.g. when an XHTML page is interpreted as XML) and you want to be able to write literal i<10 and a && b instead of i<10 and a && b , as XHTML will parse the JavaScript code as parsed character data as opposed to character data by default. This is not an issue with scripts that are stored in external source files, but for any inline JavaScript in XHTML you will probably want to use a CDATA section. Note that many XHTML pages were never intended to be parsed as XML in which case this will not be an issue. For a good writeup on the subject, see https://web.archive.org/web/20140304083226/http://javascript.about.com/library/blxhtml.htm | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/66837",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/208/"
]
} |
66,870 | I want a user-privileged (not root) process to launch new processes as user nobody . I've tried a straight call to setuid that fails with -1 EPERM on Ubuntu 8.04 :
#include <sys/types.h>
#include <unistd.h>

int main() {
    setuid(65534);
    while (1);
    return 0;
}
How should I do this instead? | You will require assistance and a lot of trust from your system administrator. Ordinary users are not able to run the executable of their choice on behalf of other users, period. She may add your application to /etc/sudoers with proper settings and you'll be able to run it with sudo -u nobody . This will work for both scripts and binary executables. Another option is that she will do chown nobody and chmod +s on your binary executable and you'll be able to execute it directly. This task must be repeated each time your executable changes. This could also work for scripts if you'll create a tiny helper executable which simply does exec("/home/you/bin/your-application") . This executable can be made suid-nobody (see above) and you may freely modify your-application . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9947/"
]
} |
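The sudoers route from the first option looks roughly like this — the user name and path are hypothetical placeholders, and the file must only ever be edited with visudo:

```
# /etc/sudoers fragment (hypothetical user and path)
youruser ALL=(nobody) NOPASSWD: /home/youruser/bin/your-application
```

After that, `sudo -u nobody /home/youruser/bin/your-application` runs the program as nobody without a password prompt.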
66,880 | After reading this answer , I wonder if there's a way to get a "testing" credit card number. One that you can experiment with but that doesn't actually charge anything. | MasterCard: 5431111111111111
Amex: 341111111111111
Discover: 6011601160116611
American Express (15 digits) 378282246310005
American Express (15 digits) 371449635398431
American Express Corporate (15 digits) 378734493671000
Diners Club (14 digits) 30569309025904
Diners Club (14 digits) 38520000023237
Discover (16 digits) 6011111111111117
Discover (16 digits) 6011000990139424
JCB (16 digits) 3530111333300000
JCB (16 digits) 3566002020360505
MasterCard (16 digits) 5555555555554444
MasterCard (16 digits) 5105105105105100
Visa (16 digits) 4111111111111111
Visa (16 digits) 4012888888881881
Visa (13 digits) 4222222222222
Credit Card Prefix Numbers:
Visa: 13 or 16 numbers starting with 4
MasterCard: 16 numbers starting with 5
Discover: 16 numbers starting with 6011
AMEX: 15 numbers starting with 34 or 37 | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66880",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5314/"
]
} |
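All of the numbers above pass the Luhn check, which is what most client-side validators actually test — a quick sketch for verifying any of them (digits only, no spaces):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any result over 9, and the total must end in 0."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Passing Luhn says nothing about the card being real or chargeable — that's exactly why these test numbers are safe to use.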
66,882 | Which is the simplest way to check if two integers have same sign? Is there any short bitwise trick to do this? | What's wrong with return ((x<0) == (y<0)); ? | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/66882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
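The simple comparison above is already cheap, but the bitwise trick the question asks about tests the sign bit of x ^ y — sketched here next to the readable version, with the caveat that both variants treat 0 as non-negative:

```c
#include <stdbool.h>

/* Readable version: compare the signs directly. */
static bool same_sign(int x, int y) {
    return (x < 0) == (y < 0);
}

/* Bitwise version: x ^ y has its sign bit clear exactly when
   x and y agree in their sign bits. */
static bool same_sign_xor(int x, int y) {
    return (x ^ y) >= 0;
}
```

On any two's-complement machine the two functions agree for every input pair, so prefer whichever reads better in context.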
66,893 | I was looking for a tree or graph data structure in C#, but I guess there isn't one provided. An Extensive Examination of Data Structures Using C# 2.0 a bit about why. Is there a convenient library which is commonly used to provide this functionality? Perhaps through a strategy pattern to solve the issues presented in the article. I feel a bit silly implementing my own tree, just as I would implementing my own ArrayList. I just want a generic tree which can be unbalanced. Think of a directory tree. C5 looks nifty, but their tree structures seem to be implemented as balanced red-black trees better suited to search than representing a hierarchy of nodes. | My best advice would be that there is no standard tree data structure because there are so many ways you could implement it that it would be impossible to cover all bases with one solution. The more specific a solution, the less likely it is applicable to any given problem. I even get annoyed with LinkedList - what if I want a circular linked list? The basic structure you'll need to implement will be a collection of nodes, and here are some options to get you started. Let's assume that the class Node is the base class of the entire solution. If you need to only navigate down the tree, then a Node class needs a List of children. If you need to navigate up the tree, then the Node class needs a link to its parent node. Build an AddChild method that takes care of all the minutia of these two points and any other business logic that must be implemented (child limits, sorting the children, etc.) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/66893",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361/"
]
} |
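A minimal C# sketch of the node described in the answer's last paragraph — value, parent link, child list, and an AddChild that wires both directions (the names and the business-logic placeholder are mine):

```csharp
using System.Collections.Generic;

public class TreeNode<T>
{
    private readonly List<TreeNode<T>> _children = new List<TreeNode<T>>();

    public TreeNode(T value) { Value = value; }

    public T Value { get; private set; }
    public TreeNode<T> Parent { get; private set; }
    public IList<TreeNode<T>> Children { get { return _children.AsReadOnly(); } }

    // AddChild wires both directions; child limits, sorting and any
    // other business logic belong here.
    public TreeNode<T> AddChild(T value)
    {
        var child = new TreeNode<T>(value) { Parent = this };
        _children.Add(child);
        return child;
    }
}
```

For a directory-tree use case, T would be the entry name or metadata, and unbalanced shapes fall out naturally — there is no balancing to undo.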
66,922 | I'm looking to find alternatives to Solr from the Apache Software Foundation. For those that don't know, Solr is an enterprise search server. A client application uses a web-services like interface to submit documents for indexing and also to perform search queries. Solr has other features built in like caching and replication. I believe it was originally started by CNet and then open-sourced. I'm looking for other search servers out there that might be seen as the competition. | I wrote a long post about my experiences and features of all the engines I listed below but I scrapped it because formatting is a pita. But quite simply if you don't want to shell out money Solr/Lucene or Fast (now MSSE) is really about the best you can do.
Excluded because I have no experience of this product: Seamark
Price, High to Low: Endeca, FredHopper, Mercado, Google Mini, Microsoft Search Server, Autonomy, Microsoft Search Server Express, Solr/Lucene
Speed, Fast to Slow: Google Mini/Endeca, FredHopper, Autonomy, Solr/MSS/MSSE
Features, High to Low: Endeca, FredHopper, Mercado, Solr, Autonomy, Lucene, MSS/MSSE, Google Mini
Extensibility, High to Low: Solr/Lucene, Endeca, FredHopper, Mercado, Autonomy, MSS/MSSE, Google Mini
Java API: Endeca, FredHopper, Autonomy, Solr/Lucene
.NET API: Endeca, Solr/Lucene, MSS/MSSE, Autonomy
XML API: FredHopper, Mercado, Solr/Lucene, Autonomy, Google Mini (limited)
Faceted Search: Endeca, FredHopper, Seamark, Solr
Natural Language Search: Endeca, FredHopper, Solr, Mercado, MSS/MSSE, Autonomy, Google Mini
Document Crawling: Endeca, Mercado, MSS/MSSE, Autonomy, Google Mini
ITL: Endeca, FredHopper
Merchandizing/Content Spotlighting: Endeca, FredHopper, Mercado
Distributed Search: Endeca, FredHopper, Mercado, Solr/Lucene, Autonomy, Google Mini
Analytics: Endeca
Platform x86 Windows: Endeca, FredHopper, Mercado, MSS/MSSE, Solr/Lucene, Autonomy
Platform x64 Windows: Endeca, FredHopper, Solr/Lucene
Platform x86 Unix Variants: Endeca, FredHopper, Mercado, Solr/Lucene, Autonomy
Platform x64 Unix Variants: Endeca, Solr/Lucene
Other: Google Mini | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/543/"
]
} |
66,923 | So you have a String that is retrieved from an admin web UI (so it is definitely a String). How can you find out whether this string is an IP address or a hostname in Java? Update: I think I didn't make myself clear, I was more asking if there is anything in the Java SDK that I can use to distinguish between IPs and hostnames? Sorry for the confusion and thanks for everybody who took/will take the time to answer this. | You can use a regular expression with this pattern (written as a Java string literal, so the backslashes are doubled): \\b(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b That will tell you if it's an IPv4 address. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/66923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10059/"
]
} |
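To make the octet rule in the answer above concrete outside Java, here is a hedged sketch in Python; the `^`/`$` anchoring and the helper name are my own additions, not part of the answer. Anything that fails the check would then be treated as a hostname.

```python
import re

# Same octet rule as the quoted pattern: 250-255, 200-249, or 0-199.
_OCTET = r"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
IPV4_RE = re.compile(r"^" + _OCTET + r"(\." + _OCTET + r"){3}$")

def looks_like_ipv4(text: str) -> bool:
    """Return True if text is a dotted-quad IPv4 address; else assume hostname."""
    return IPV4_RE.match(text) is not None
```

Note this only distinguishes dotted-quad IPv4 strings; IPv6 literals would need a separate check.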
66,964 | For example: This is main body of my content. I have a footnote link for this line [1]. Then, I have some more content. Some of it is interesting and it has some footnotes as well [2]. [1] Here is my first footnote. [2] Another footnote. So, if I click on the "[1]" link it directs the web page to the first footnote reference and so on. How exactly do I accomplish this in HTML? | Give a container an id, then use # to refer to that Id. e.g. <p>This is main body of my content. I have a footnote link for this line <a href="#footnote-1">[1]</a>. Then, I have some more content. Some of it is interesting and it has some footnotes as well <a href="#footnote-2">[2]</a>.</p><p id="footnote-1">[1] Here is my first footnote.</p><p id="footnote-2">[2] Another footnote.</p> | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/66964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
66,986 | Is there an Eclipse command to surround the current selection with parentheses? Creating a template is a decent workaround; it doesn't work with the "Surround With" functionality, because I want to parenthesize an expression, not an entire line, and that requires ${word_selection} rather than ${line_selection} . Is there a way that I can bind a keyboard shortcut to this particular template? Ctrl - space Ctrl - space arrow arrow arrow isn't as slick as I'd hoped for. | Maybe not the correct answer, but at least a workaround: define a Java template with the name "parenthesis" (or "pa") with the following : (${word_selection})${cursor} once the word is selected, ctrl - space + p + use the arrow keys to select the template I used this technique for boxing primary types in JDK 1.4.2 and it saves quite a lot of typing. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/66986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3474/"
]
} |
67,045 | I am trying to convince those who set standards at my current organization that we should use jQuery rather than Prototype and/or YUI. What are some convincing advantages I can use to convince them? | The 3 main advantages of jQuery are:
its light weight when compared to other javascript frameworks
it has a wide range of plugins available for various specific needs
it is easier for a designer to learn jQuery as it uses familiar CSS syntax.
jQuery is Javascript for Designers | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
67,063 | It strikes me that Properties in C# should be use when trying to manipulate a field in the class. But when there's complex calculations or database involved, we should use a getter/setter. Is this correct? When do you use s/getter over properties? | The .NET design guidelines provide some answers to this question in the Properties vs. Methods section. Basically, properties have the same semantics as a field. You shouldn't let a property throw exceptions, properties shouldn't have side effects, order shouldn't matter, and properties should return relatively quickly. If any of those things could happen, it's better to use a method. The guidelines also recommend using methods for returning arrays. When deciding whether to use a property or method, it helps if I think of it like a field. I think about the behavior of the property and ask myself, "If this were a field on the class, would I be surprised if it behaved the way it does?" Consider, for example, the TcpClient.GetStream method . It can throw several exceptions based on if the connection is made, and it's important that the TcpClient is configured before you try to get the stream. Because of this, it is a Get method rather than a property. If you take a good look at the design guidelines, you'll see that it's usually not a matter of preference; there's good reasons to use methods instead of properties in certain cases. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10088/"
]
} |
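The "behaves like a field" test in the answer above carries over to other languages. A hedged Python sketch (the class and method names are invented for illustration) contrasting a cheap, side-effect-free property with an order-dependent, failure-prone operation that belongs in a method:

```python
class TcpLikeClient:
    """Toy model of the TcpClient.GetStream guideline discussed above."""

    def __init__(self, host: str):
        self._host = host
        self._connected = False

    @property
    def host(self) -> str:
        # Cheap, no side effects, cannot fail: safe to expose like a field.
        return self._host

    def connect(self) -> None:
        self._connected = True

    def get_stream(self) -> str:
        # Order matters and it can raise, so it is a method, not a property.
        if not self._connected:
            raise RuntimeError("connect() must be called before get_stream()")
        return f"stream to {self._host}"
```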
67,082 | What is Windows' best I/O event notification facility? By best I mean something that ... doesn't have a limit on number of input file descriptors works on all file descriptors (disk files, sockets, ...) provides various notification modes (edge triggered, limit triggered) | In Windows, async operations are done by file operation, not by descriptor. There are several ways to wait on file operations to complete asynchronously. For example, if you want to know when data is available on a network socket, issue an async read request on the socket and when it completes, the data was available and was retrieved. In Win32, async operations use the OVERLAPPED structure to contain state about an outstanding IO operation. Associate the files with an IO Completion Port and dispatch async IO requests. When an operation completes, it will put a completion message on the queue which your worker thread(s) can wait on and retrieve as they arrive. You can also put user defined messages into the queue. There is no limit to how many files or queued messages can be used with a completion port Dispatch each IO operation with an event. The event associated with an operation will become signaled (satisfy a wait) when it completes. Use WaitForMultipleObjects to wait on all the events at once. This has the disadvantage of only being able to wait on MAXIMUM_WAIT_OBJECTS objects at once (64). You can also wait on other types of events at the same time (process/thread termination, mutexes, events, semaphores) Use a thread pool . The thread pool can take an unlimited number of objects and file operations to wait on and execute a user defined function upon completion each. Use ReadFileEx and WriteFileEx to queue Asynchronous Procedure Calls (APCs) to the calling thread and SleepEx (or WaitFor{Single|Multiple}ObjectsEx ) with Alertable TRUE to receive a notification message for each operation when it completes. 
This method is similar to an IO completion port, but only works for one thread. The Windows NT kernel makes no distinction between socket, disk file, pipe, etc. file operations internally: all of these options will work with all the file types. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/67082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9107/"
]
} |
67,093 | The MySQL manual at MySQL covers this. Usually I just dump the database and reimport it with a new name. This is not an option for very big databases. Apparently RENAME {DATABASE | SCHEMA} db_name TO new_db_name; does bad things, exists only in a handful of versions, and is a bad idea overall . This needs to work with InnoDB , which stores things very differently than MyISAM . | For InnoDB , the following seems to work: create the new empty database, then rename each table in turn into the new database: RENAME TABLE old_db.table TO new_db.table; You will need to adjust the permissions after that. For scripting in a shell, you can use either of the following:
mysql -u username -ppassword old_db -sNe 'show tables' | while read table; \ do mysql -u username -ppassword -sNe "rename table old_db.$table to new_db.$table"; done
OR
for table in `mysql -u root -ppassword -s -N -e "use old_db;show tables from old_db;"`; do mysql -u root -ppassword -s -N -e "use old_db;rename table old_db.$table to new_db.$table;"; done;
Notes: There is no space between the option -p and the password. If your database has no password, remove the -u username -ppassword part. If some table has a trigger, it cannot be moved to another database using the above method (it will fail with a Trigger in wrong schema error). If that is the case, use a traditional way to clone a database and then drop the old one:
mysqldump old_db | mysql new_db
If you have stored procedures, you can copy them afterwards:
mysqldump -R old_db | mysql new_db | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/67093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/556/"
]
} |
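A scripted variant of the shell loops above, sketched in Python; it only builds the RENAME statements (the database and table names here are placeholders), leaving execution to whatever MySQL driver you use:

```python
def rename_statements(tables, old_db, new_db):
    """One RENAME TABLE per table, mirroring the shell loop above."""
    return [
        f"RENAME TABLE `{old_db}`.`{table}` TO `{new_db}`.`{table}`;"
        for table in tables
    ]
```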
67,103 | I have an application that uses NHibernate as its ORM and sometimes it experiences performance issues due to how the data is being accessed by it. What kind of things can be done to improve the performance of NHibernate? (Please limit to one recommendation per answer) | The first and most dramatic performance problem that you can run into with NHibernate is if you are creating a new session factory for every session you create. Only one session factory instance should be created for each application execution and all sessions should be created by that factory. Along those lines, you should continue using the same session as long as it makes sense. This will vary by application, but for most web applications, a single session per request is recommended. If you throw away your session frequently, you aren't gaining the benefits of its cache. Intelligently using the session cache can change a routine with a linear (or worse) number of queries to a constant number without much work. Equally important is that you want to make sure that you are lazy loading your object references. If you are not, entire object graphs could be loaded for even the most simple queries. There are only certain reasons not to do this, but it is always better to start with lazy loading and switch back as needed. That brings us to eager fetching, the opposite of lazy loading. While traversing object hierarchies or looping through collections, it can be easy to lose track of how many queries you are making and you end up with an exponential number of queries. Eager fetching can be done on a per query basis with a FETCH JOIN. In rare circumstances, such as if there is a particular pair of tables you always fetch join, consider turning off lazy loading for that relationship. As always, SQL Profiler is a great way to find queries that are running slow or being made repeatedly. At my last job we had a development feature that counted queries per page request as well. 
A high number of queries for a routine is the most obvious indicator that your routine is not working well with NHibernate. If the number of queries per routine or request looks good, you are probably down to database tuning; making sure you have enough memory to store execution plans and data in the cache, correctly indexing your data, etc. One tricky little problem we ran into was with SetParameterList(). The function allows you to easily pass a list of parameters to a query. NHibernate implemented this by creating one parameter for each item passed in. This results in a different query plan for every number of parameters. Our execution plans were almost always getting released from the cache. Also, numerous parameters can significantly slow down a query. We did a custom hack of NHibernate to send the items as a delimited list in a single parameter. The list was separated in SQL Server by a table value function that our hack automatically inserted into the IN clause of the query. There could be other land mines like this depending on your application. SQL Profiler is the best way to find them. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/67103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4872/"
]
} |
67,117 | Are there any documented techniques for speeding up mySQL dumps and imports? This would include my.cnf settings, using ramdisks, etc. Looking only for documented techniques, preferably with benchmarks showing potential speed-up. | Get a copy of High Performance MySQL . Great book.
Extended inserts in dumps
Dump with --tab format so you can use mysqlimport, which is faster than mysql < dumpfile
Import with multiple threads, one for each table.
Use a different database engine if possible. Importing into a heavily transactional engine like innodb is awfully slow. Inserting into a non-transactional engine like MyISAM is much much faster.
Look at the table compare script in the Maakit toolkit and see if you can update your tables rather than dumping them and importing them.
But you're probably talking about backups/restores. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/556/"
]
} |
67,127 | I want to extend a WPF application with database functionality. Which database engine would you suggest and why? SQLite, SQL CE, other? | Depending on the application's use, I would recommend using SQLite because it doesn't require you to install any other software (SQL CE or Express, etc. usually would require a separate install). A list of the most important benefits for SQLite from the provider link at the bottom of this post: SQLite is a small C library that implements a self-contained, embeddable, zero-configuration SQL database engine. Features include:
Zero-configuration - no setup or administration needed.
Implements most of SQL92. (Features not supported)
A complete database is stored in a single disk file.
Database files can be freely shared between machines with different byte orders.
Supports databases up to 2 terabytes (2^41 bytes) in size.
Small code footprint: less than 30K lines of C code, less than 250KB code space (gcc on i486)
Faster than popular client/server database engines for most common operations.
Simple, easy to use API.
Self-contained: no external dependencies.
Sources are in the public domain. Use for any purpose.
Since you're using WPF I can assume you're using at least .NET 3.0. I would then recommend going to .NET 3.5 SP1 (same size as .NET 3.5 but includes a bunch of performance improvements) which includes LINQ. When using SQLite, however, you would want to use the following SQLite Provider which should provide LINQ support: An open source ADO.NET provider for the SQLite database engine | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67127",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1133/"
]
} |
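The zero-configuration point above is easy to see in code; a sketch with Python's bundled sqlite3 module (the .NET provider mentioned in the answer wraps the same engine, so the behaviour carries over):

```python
import sqlite3

# No server process and no setup: one call yields a working database.
conn = sqlite3.connect(":memory:")  # a file path would work the same way
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO books (title) VALUES (?)", ("WPF in Action",))
titles = [row[0] for row in conn.execute("SELECT title FROM books")]
conn.close()
```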
67,167 | We have a WinForms application written in C# that uses the AxAcroPDFLib.AxAcroPDF component to load and print a PDF file. Has been working without any problems in Windows XP. I have moved my development environment to Vista 64 bit and now the application will not run (on Vista 64) unless I remove the AxAcroPDF component. I get the following error when the application runs: "System.Runtime.InteropServices.COMException:Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))." I have been advised on the Adobe Forums that the reason for the error is that they do not have a 64 bit version of the AxAcroPDF ActiveX control. Is there some way around this problem? For example can I convert the 32bit ActiveX control to a 64bit control myself? | You can't convert Adobe's ActiveX control to 64bit yourself, but you can force your application to run in 32bit mode by setting the platform target to x86. For instructions for your version of Visual Studio, see section 1.44 of Issues When Using Microsoft Visual Studio 2005 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10135/"
]
} |
67,174 | Does anybody know a "technique" to discover memory leaks caused by smart pointers? I am currently working on a large project written in C++ that heavily uses smart pointers with reference counting. Obviously we have some memory leaks caused by smart pointers, that are still referenced somewhere in the code, so that their memory does not get free'd. It's very hard to find the line of code with the "needless" reference, that causes the corresponding object not to be free'd (although it's not of use any longer). I found some advice on the web, that proposed to collect call stacks of the increment/decrement operations of the reference counter. This gives me a good hint, which piece of code has caused the reference counter to get increased or decreased. But what I need is some kind of algorithm that groups the corresponding "increase/decrease call stacks" together. After removing these pairs of call stacks, I hopefully have (at least) one "increase call stack" left over, that shows me the piece of code with the "needless" reference, that caused the corresponding object not to be freed. Now it will be no big deal to fix the leak! But does anybody have an idea for an "algorithm" that does the grouping? Development takes place under Windows XP . (I hope someone understood what I tried to explain...) Edit: I am talking about leaks caused by circular references. | Note that one source of leaks with reference-counting smart pointers is pointers with circular dependencies . For example, A has a smart pointer to B, and B has a smart pointer to A. Neither A nor B will be destroyed. You will have to find, and then break the dependencies. If possible, use boost smart pointers, and use shared_ptr for pointers which are supposed to be owners of the data, and weak_ptr for pointers not supposed to call delete. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2012356/"
]
} |
67,207 | Does anyone know how to achieve the cover-flow effect using JavaScript to scroll through a bunch of images. I'm not talking about the 3D rotating itunes cover-art, but the effect that happens when you hit the space bar in a folder of documents, allowing you to preview them in a lightbox fashion. | http://www.jacksasylum.eu/ContentFlow/ is the best I ever found. a true 'CoverFlow', highly configurable, cross-browser, very smooth action, has relections and supports scroll wheel + keyboard control. - has to be what your looking for! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
67,209 | How do you customize the Copy/Paste behavior in Visual Studio 2008? For example I create a new <div id="MyDiv"></div> and then copy and paste it in the same file. VisualStudio pastes <div id="Div1"></div> instead of the original text I copied. It is even more frustrating when I'm trying to copy a group of related div's that I would like to copy/paste several times and only change one part of the id. Is there a setting I can tweak to change the copy/paste behavior? | Go into Tools > Options > Text Editor > HTML > Miscellaneous and uncheck "Auto ID elements on paste in Source view" | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3747/"
]
} |
67,273 | How do you iterate through every file/directory recursively in standard C++? | In standard C++, technically there is no way to do this since standard C++ has no conception of directories. If you want to expand your net a little bit, you might like to look at using Boost.FileSystem . This has been accepted for inclusion in TR2, so this gives you the best chance of keeping your implementation as close as possible to the standard. An example, taken straight from the website:
bool find_file( const path & dir_path,         // in this directory,
                const std::string & file_name, // search for this name,
                path & path_found )            // placing path here if found
{
  if ( !exists( dir_path ) ) return false;
  directory_iterator end_itr; // default construction yields past-the-end
  for ( directory_iterator itr( dir_path ); itr != end_itr; ++itr )
  {
    if ( is_directory(itr->status()) )
    {
      if ( find_file( itr->path(), file_name, path_found ) ) return true;
    }
    else if ( itr->leaf() == file_name ) // see below
    {
      path_found = itr->path();
      return true;
    }
  }
  return false;
} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/67273",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10184/"
]
} |
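For comparison across languages, the same traversal is one loop over Python's standard library; this is an analogue of the Boost example above, not a replacement for it:

```python
import os

def find_file(dir_path, file_name):
    """Recursively search dir_path; return the first matching path or None."""
    for root, _dirs, files in os.walk(dir_path):
        if file_name in files:
            return os.path.join(root, file_name)
    return None
```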
67,299 | I am working to integrate unit testing into the development process on the team I work on and there are some sceptics. What are some good ways to convince the sceptical developers on the team of the value of Unit Testing? In my specific case we would be adding Unit Tests as we add functionality or fixed bugs. Unfortunately our code base does not lend itself to easy testing. | Every day in our office there is an exchange which goes something like this: "Man, I just love unit tests, I've just been able to make a bunch of changes to the way something works, and then was able to confirm I hadn't broken anything by running the test over it again..." The details change daily, but the sentiment doesn't. Unit tests and test-driven development (TDD) have so many hidden and personal benefits as well as the obvious ones that you just can't really explain to somebody until they're doing it themselves. But, ignoring that, here's my attempt! Unit Tests allows you to make big changes to code quickly. You know it works now because you've run the tests, when you make the changes you need to make, you need to get the tests working again. This saves hours. TDD helps you to realise when to stop coding. Your tests give you confidence that you've done enough for now and can stop tweaking and move on to the next thing. The tests and the code work together to achieve better code. Your code could be bad / buggy. Your TEST could be bad / buggy. In TDD you are banking on the chances of both being bad / buggy being low. Often it's the test that needs fixing but that's still a good outcome. TDD helps with coding constipation. When faced with a large and daunting piece of work ahead writing the tests will get you moving quickly. Unit Tests help you really understand the design of the code you are working on. Instead of writing code to do something, you are starting by outlining all the conditions you are subjecting the code to and what outputs you'd expect from that. 
Unit Tests give you instant visual feedback, we all like the feeling of all those green lights when we've done. It's very satisfying. It's also much easier to pick up where you left off after an interruption because you can see where you got to - that next red light that needs fixing. Contrary to popular belief unit testing does not mean writing twice as much code, or coding slower. It's faster and more robust than coding without tests once you've got the hang of it. Test code itself is usually relatively trivial and doesn't add a big overhead to what you're doing. This is one you'll only believe when you're doing it :) I think it was Fowler who said: "Imperfect tests, run frequently, are much better than perfect tests that are never written at all". I interpret this as giving me permission to write tests where I think they'll be most useful even if the rest of my code coverage is woefully incomplete. Good unit tests can help document and define what something is supposed to do Unit tests help with code re-use. Migrate both your code and your tests to your new project. Tweak the code till the tests run again. A lot of work I'm involved with doesn't Unit Test well (web application user interactions etc.), but even so we're all test infected in this shop, and happiest when we've got our tests tied down. I can't recommend the approach highly enough. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/67299",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9431/"
]
} |
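For sceptics who have never seen the red/green loop described above, the entry cost is tiny; a minimal sketch with Python's built-in unittest (the function under test is a made-up stand-in, not from the answer):

```python
import unittest

def add_vat(net, rate=0.2):
    """Function under test: apply a VAT rate to a net price."""
    return round(net * (1 + rate), 2)

class AddVatTest(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(add_vat(50.0, 0.0), 50.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddVatTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Change the implementation of add_vat and re-run: the failing test pinpoints the break, which is exactly the "confirm I hadn't broken anything" loop the answer describes.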
67,354 | I have an iframe. The content is wider than the width I am setting so the iframe gets a horizontal scroll bar. I can't increase the width of the iframe so I want to just remove the scroll bar. I tried setting the scroll property to "no" but that kills both scroll bars and I want the vertical one. I tried setting overflow-x to "hidden" and that killed the horizontal scroll bar in ff but not in IE. sad for me. | scrolling="yes" horizontalscrolling="no" verticalscrolling="yes" Put that in your iFrame tag. You don't need to mess around with trying to format this in CSS. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67354",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5234/"
]
} |
67,368 | When you roll out changes to a live web site, how do you go about checking that the live system is working correctly? Which tools do you use? Who does it? Do you block access to the site for the testing period? What amount of downtime is acceptable? | I tend to do all of my testing in another environment (not the live one!). This allows me to push the updates to the live site knowing that the code should be working ok, and I just do sanity testing on the live data - make sure I didn't forget a file somewhere, or had something weird go wrong. So proper testing in a testing or staging environment, then just trivial sanity checking. No need for downtime. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/67368",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10046/"
]
} |
67,370 | I'm programming WCF using the ChannelFactory which expects a type in order to call the CreateChannel method. For example: IProxy proxy = ChannelFactory<IProxy>.CreateChannel(...); In my case I'm doing routing so I don't know what type my channel factory will be using. I can parse a message header to determine the type but I hit a brick wall there because even if I have an instance of Type I can't pass that where ChannelFactory expects a generic type. Another way of restating this problem in very simple terms would be that I'm attempting to do something like this:
string listtype = Console.ReadLine(); // say "System.Int32"
Type t = Type.GetType( listtype);
List<t> myIntegers = new List<>(); // does not compile, expects a "type"
List<typeof(t)> myIntegers = new List<typeof(t)>(); // interesting - type must resolve at compile time?
Is there an approach to this I can leverage within C#? | What you are looking for is MakeGenericType
string elementTypeName = Console.ReadLine();
Type elementType = Type.GetType(elementTypeName);
Type[] types = new Type[] { elementType };
Type listType = typeof(List<>);
Type genericType = listType.MakeGenericType(types);
IProxy proxy = (IProxy)Activator.CreateInstance(genericType);
So what you are doing is getting the type-definition of the generic "template" class, then building a specialization of the type using your runtime-driving types. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67370",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/64/"
]
} |
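As a cross-language aside to the reflection approach above: in Python the same runtime composition is trivial because generic aliases are ordinary objects (a sketch; the built-in list[...] syntax needs Python 3.9+):

```python
# Rough analogue of Type.MakeGenericType: compose the alias at runtime.
element_type = int
list_alias = list[element_type]  # like typeof(List<>).MakeGenericType(int)
instance = list_alias()          # like Activator.CreateInstance(genericType)
```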
67,407 | Why isn't there a Team Foundation Server Express Edition? | Almost 3 years and 16 answers later, TFS Express is now a fact. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67407",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7831/"
]
} |
67,410 | GNU sed version 4.1.5 seems to fail with International chars. Here is my input file: Gras Och Stenar Trad - From Moja to Minneapolis DVD [G2007DVD] 7812 | X<br>Gras Och Stenar Trad - From Möja to Minneapolis DVD [G2007DVD] 7812 | Y (Note the umlaut in the second line.) And when I do sed 's/.*| //' < in I would expect to see only the X and Y , as I've asked to remove ALL chars up to the '|' and space beyond it. Instead, I get: X<br>Gras Och Stenar Trad - From M? Y I know I can use tr to remove the International chars. first, but is there a way to just use sed? | I think the error occurs if the input encoding of the file is different from the preferred encoding of your environment. Example: in is UTF-8 $ LANG=de_DE.UTF-8 sed 's/.*| //' < inXY$ LANG=de_DE.iso88591 sed 's/.*| //' < inX Y UTF-8 can safely be interpreted as ISO-8859-1, you'll get strange characters but apart from that everything is fine. Example: in is ISO-8859-1 $ LANG=de_DE.UTF-8 sed 's/.*| //' < inXGras Och Stenar Trad - From MöY$ LANG=de_DE.iso88591 sed 's/.*| //' < inX Y ISO-8859-1 cannot be interpreted as UTF-8, decoding the input file fails. The strange match is probably due to the fact that sed tries to recover rather than fail completely. The answer is based on Debian Lenny/Sid and sed 4.1.5. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/67410",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10251/"
]
} |
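The encoding mismatch diagnosed above can be reproduced without sed at all; a Python sketch showing both directions on the question's sample word (this demonstrates the byte-level behaviour, not sed itself):

```python
word = "Möja"

# UTF-8 bytes mis-read as ISO-8859-1: decodable, but "strange characters".
misread = word.encode("utf-8").decode("iso-8859-1")

# ISO-8859-1 bytes mis-read as UTF-8: 0xF6 is not a valid UTF-8 sequence,
# so decoding fails outright - the case where sed mangles the line.
try:
    word.encode("iso-8859-1").decode("utf-8")
    utf8_decode_failed = False
except UnicodeDecodeError:
    utf8_decode_failed = True
```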
67,454 | How to serve users a dynamically generated ZIP archive in Django? I'm making a site, where users can choose any combination of available books and download them as ZIP archive. I'm worried that generating such archives for each request would slow my server down to a crawl. I have also heard that Django doesn't currently have a good solution for serving dynamically generated files. | The solution is as follows. Use Python module zipfile to create zip archive, but as the file specify StringIO object (ZipFile constructor requires file-like object). Add files you want to compress. Then in your Django application return the content of StringIO object in HttpResponse with mimetype set to application/x-zip-compressed (or at least application/octet-stream ). If you want, you can set content-disposition header, but this should not be really required. But beware, creating zip archives on each request is bad idea and this may kill your server (not counting timeouts if the archives are large). Performance-wise approach is to cache generated output somewhere in filesystem and regenerate it only if source files have changed. Even better idea is to prepare archives in advance (eg. by cron job) and have your web server serving them as usual statics. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/67454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9812/"
]
} |
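The StringIO-plus-zipfile idea above, sketched in modern Python where io.BytesIO is the binary successor to StringIO; only the archive-building step is shown, with the Django response wiring left as a comment (header details are assumed, not verified against a specific Django version):

```python
import io
import zipfile

def build_zip(files):
    """files: mapping of archive member name -> bytes; returns the zip as bytes."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
        for name, data in files.items():
            archive.writestr(name, data)
    return buffer.getvalue()

payload = build_zip({"book1.txt": b"chapter one", "book2.txt": b"chapter two"})
# In a Django view, roughly:
#   response = HttpResponse(payload, content_type="application/x-zip-compressed")
#   response["Content-Disposition"] = 'attachment; filename="books.zip"'
```

As the answer warns, cache the resulting bytes rather than rebuilding the archive on every request.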
67,457 | How can I show a web page in a transparent window and have the white part of the web page also transparent. | | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/67457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44972/"
]
} |
67,554 | I'm looking for a profiler in order to find the bottleneck in my C++ code. I'd like to find a free, non-intrusive, and good profiling tool. I'm a game developer, and I use PIX for Xbox 360 and found it very good, but it's not free. I know the Intel VTune , but it's not free either. | CodeXL has now superseded the end-of-life'd AMD CodeAnalyst and both are free, but not as advanced as VTune. There's also Sleepy , which is very simple, but does the job in many cases. Note: All three of the tools above have been unmaintained for several years. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/67554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10120/"
]
} |