Q: Checklist for Web Site Programming Vulnerabilities Watching SO come online has been quite an education for me. I'd like to make a checklist of various vulnerabilities and exploits used against web sites, and what programming techniques can be used to defend against them.
*
*What categories of vulnerabilities?
*
*crashing site
*breaking into server
*breaking into other people's logins
*spam
*sockpuppeting, meatpuppeting
*etc...
*What kind of defensive programming techniques?
*etc...
A: I second the OWASP info as being a valuable resource. The following may be of interest as well, notably the attack patterns:
*
*CERT Top 10 Secure Coding Practices
*Common Attack Pattern Enumeration and Classification
*Attack Patterns
*Secure Programming for Linux and Unix
*A Taxonomy of Coding Errors that Affect Security
*Secure Programming with Static Analysis Presentation
A: Obviously test every field for vulnerabilities:
*
*SQL - escape strings (e.g. mysql_real_escape_string)
*XSS
*HTML being printed from input fields (a good sign of XSS usually)
*Anything else that is not the specific purpose that field was created for
Search for infinite loops; they're the one indirect thing that could really kill a server (if a lot of people hit one accidentally).
A: Some prevention techniques:
XSS
*
*If you take any parameters/input from the user and ever plan on outputting it, whether in a log or a web page, sanitize it (strip/escape anything resembling HTML, quotes, javascript...) If you print the current URI of a page within itself, sanitize! Even printing PHP_SELF, for example, is unsafe. Sanitize! Reflective XSS comes mostly from unsanitized page parameters.
*If you take any input from the user and save it or print it, warn them if anything dangerous/invalid is detected and have them re-input. An IDS is good for detection (such as PHPIDS). Then sanitize before storage/printing. Then when you print something from storage/database, sanitize again!
Input -> IDS/sanitize -> store -> sanitize -> output
*use a code scanner during development to help spot potentially vulnerable code.
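As a minimal sketch of sanitize-on-output (Python's html.escape here; PHP's htmlspecialchars plays the same role; render_comment is a made-up helper):
from html import escape
def render_comment(user_input):
    # escape &, <, >, and quotes so user text can never become markup
    return "<p>" + escape(user_input, quote=True) + "</p>"
print(render_comment('<script>alert(1)</script>'))
# prints: <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>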
XSRF
*
*Never use GET requests for destructive functionality, i.e. deleting a post. Instead, only accept POST requests. GET makes it extra easy for hackery.
*Checking the referrer to make sure the request came from your site does not work. It's not hard to spoof the referrer.
*Use a random hash as a token that must be present and valid in every request, and that will expire after a while. Print the token in a hidden form field and check it on the server side when the form is posted. Bad guys would have to supply the correct token in order to forge a request, and if they managed to get the real token, it would need to be before it expired.
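A rough sketch of that token scheme (Python; issue_token/verify_token are hypothetical names, and a real implementation would also expire the token and bind it to the user's session):
import hmac
import secrets
def issue_token(session):
    token = secrets.token_hex(16)   # random, unguessable value
    session["csrf_token"] = token   # keep a server-side copy
    return token                    # print this into the hidden form field
def verify_token(session, submitted):
    expected = session.get("csrf_token", "")
    # constant-time comparison, so the token can't be probed byte by byte
    return hmac.compare_digest(expected, submitted)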
SQL injection
*
*Your ORM or DB abstraction class should have sanitizing methods - use them, always. If you're not using an ORM or DB abstraction class... you should be.
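Even better than escaping strings by hand is letting the driver bind parameters; a minimal sketch of the idea with Python's sqlite3 module:
import sqlite3
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
evil = "x'; DROP TABLE users; --"
# the ? placeholder ships the value separately from the SQL text,
# so it is stored literally instead of being parsed as SQL
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))
print(conn.execute("SELECT name FROM users").fetchall())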
A: From the Open Web Application Security Project:
*
*The OWASP Top Ten vulnerabilities (pdf)
*For a more painfully exhaustive list: Category:Vulnerability
The top ten are:
*
*Cross-site scripting (XSS)
*Injection flaws (SQL injection, script injection)
*Malicious file execution
*Insecure direct object reference
*Cross-site request forgery (XSRF)
*Information leakage and improper error handling
*Broken authentication and session management
*Insecure cryptographic storage
*Insecure communications
*Failure to restrict URL access
A: SQL injection
A: XSS (Cross Site Scripting) Attacks
A: Easy to overlook and easy to fix: the sanitizing of data received from the client side. Checking for things such as ';' can help in preventing malicious code being injected into your application.
A: G'day,
A good static analysis tool for security is FlawFinder, written by David Wheeler. It does a good job looking for various security exploits.
However, it doesn't replace having a knowledgeable someone read through your code. As David says on his web page, "A fool with a tool is still a fool!"
HTH.
cheers,
Rob
A: You can get good Firefox addons to test multiple flaws and vulnerabilities, like XSS and SQL injection, from Security Compass. Too bad they don't work on Firefox 3.0. I hope they will be updated soon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Anyone using CouchDB? I've followed the CouchDB project with interest over the last couple of years, and see it is now an Apache Incubator project. Prior to that, the CouchDB web site was full of do not use for production code type disclaimers, so I'd done no more than keep an eye on it. I'd be interested to know your experiences if you've been using CouchDB either for a live project, or a technology pilot.
A: I am using CouchDB in a few scenarios: as a document store for http://devk.it (under development) and, on a much larger scale, as a template store for a distributed email delivery system.
CouchDB is very slick for what it does, but I was not able to get it to run at as high a concurrency level as I had expected. Also note that the maximum document size is fairly limiting at 1MB due to the hardcoded max input buffer size in mochiweb. You can however alter a header file and recompile to get around this limit.
A: I'm using CouchDB to store (and serve) article ratings on my blog. It's not exactly heavy traffic but it's been rock solid so far.
Also planning on adding comments sometime which I'll most likely also store in CouchDB.
I've found it quite easy to get started with, on OSX you can just download CouchDBX to get started quickly. I use a Sinatra backend with RestClient to interact with 'the couch' through straight HTTP verbs and such.
Great fun.
A: After 18 months of prototypes, testing and waiting for CouchDB to get ready, we moved an internal application over to CouchDB in December 2008. So far I'm very happy with that move. It gets rid of a lot of filesystem objects for us (PDFs and JPEGs, now stored as attachments in CouchDB). This enables us to get rid of NFS and more easily cluster/replicate our frontend webservers.
To what degree CouchDB is ready for you depends very much on the culture of your organization. We have an in-house development team maintaining several internal Erlang applications. Since CouchDB is written in Erlang and the codebase is of quite decent quality, we felt confident that we could fix show-stopper issues in CouchDB should the need arise - or at least get our data back out. We also hired one of the CouchDB core team as a consultant - just in case.
But CouchDB for sure isn't 1.0 yet. There are crashes in the Web worker processes all the time (if you misuse them). Replication breaks for us and we don't get error messages about it. Documentation is still very lacking. Still I'm confident that it will not eat our data and development moves forward with reasonable pace.
To give you an idea about our application: currently our biggest database is about 512000 records taking 7.5 GB of diskspace.
A: At the moment I'm working with CouchDB for a computer science thesis. I'm writing about my progress and opinions on my blog, http://metalelf0dev.blogspot.com. I think the project is well done, but the existing documentation isn't organized as it should be. A quick tutorial about the Futon web interface could be really useful for starters IMHO :)
A: I use the CouchDB to power a Facebook application (over 35k monthly active users). For a while it was using MySQL but after porting the entire project over from Perl to Erlang, I decided to go for the gold and migrate all of the data into CouchDB and use that instead.
CouchDB has been a great data store to work with. I think that it is on track to becoming a major player in web-based services.
A: I got to know one of the people (Jan) working on it a while ago (like 6 months) and have been playing with it ever since. I found the community around CouchDB to be both very knowledgeable and helpful, so whenever I ran into an issue it was resolved in a matter of minutes or hours.
We just kicked off a project the other week which basically requires us to store data in the non-relational way and due to CouchDB's document oriented store we selected it as one of the technologies to use. So this is actually the first time that I will run it in production, but I'm still pretty confident about it. :)
Just an update here (2009-10-25):
Our first CouchDB install is 20 GB, it hosts 40 million records. It's been running in production since January 2009, and it's been great. Read (GET) speed is outstanding and we use it as a store for complex data, and then it's just pull.
Our second couchdb installment has two databases, one is 160,000,000+ documents (210 GB), and growing between 150,000-300,000 documents a day. The other is only 35,000,000 documents (7 GB). This setup has a lot more reads and writes and initial tests are performing very well.
View building on the 160,000,000 document database took roughly a week, but since then we upgraded to a larger Amazon EC2 instance and we are also getting ready to update to CouchDB 0.10.x (from 0.9.1) as this release includes a lot of performance improvements in view building.
A: I used CouchDB twice in production. The first was a wiki-like project, and I think CouchDB was a perfect candidate for that role. Saving the versions of all docs helps a lot.
The second project was quite query-heavy, and the idea was to dump social data first, then query it with various filters. The standard CouchDB query features looked a bit poor for our needs, but we added Lucene as a full-text indexer and after that did much of the querying through the Lucene part. That solution looks good enough.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: Simple explanation of MapReduce? Related to my CouchDB question.
Can anyone explain MapReduce in terms a numbnuts could understand?
A: MapReduce is a method to process vast amounts of data in parallel without requiring the developer to write any code other than the mapper and reduce functions.
The map function takes data in and churns out a result, which is held in a barrier. This function can run in parallel with a large number of the same map task. The dataset can then be reduced to a scalar value.
So if you think of it like a SQL statement
SELECT SUM(salary)
FROM employees
WHERE salary > 1000
GROUP by deptname
We can use map to get our subset of employees with salary > 1000, which map emits to the barrier into group-sized buckets.
Reduce will sum each of those groups, giving you your result set.
Just plucked this from my university study notes on the Google paper.
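A rough single-machine Python rendition of that query, assuming the data is a list of (deptname, salary) pairs, with the grouping dict standing in for the barrier:
from collections import defaultdict
employees = [("sales", 1200), ("sales", 900), ("dev", 3000), ("dev", 2500)]
# map: emit (dept, salary) only for employees over the threshold
mapped = [(dept, salary) for dept, salary in employees if salary > 1000]
# the "barrier": group emitted pairs into per-department buckets
buckets = defaultdict(list)
for dept, salary in mapped:
    buckets[dept].append(salary)
# reduce: sum each bucket
result = {dept: sum(salaries) for dept, salaries in buckets.items()}
print(result)  # {'sales': 1200, 'dev': 5500}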
A: If you are familiar with Python, the following is the simplest possible explanation of MapReduce:
In [1]: from functools import reduce  # needed in Python 3; reduce is a builtin in Python 2
In [2]: data = [1, 2, 3, 4, 5, 6]
In [3]: mapped_result = list(map(lambda x: x*2, data))  # list() because map is lazy in Python 3
In [4]: mapped_result
Out[4]: [2, 4, 6, 8, 10, 12]
In [10]: final_result = reduce(lambda x, y: x+y, mapped_result)
In [11]: final_result
Out[11]: 42
See how each segment of raw data was processed individually, in this case, multiplied by 2 (the map part of MapReduce). Based on the mapped_result, we concluded that the result would be 42 (the reduce part of MapReduce).
An important conclusion from this example is the fact that each chunk of processing doesn't depend on another chunk. For instance, if thread_1 maps [1, 2, 3], and thread_2 maps [4, 5, 6], the eventual result of both the threads would still be [2, 4, 6, 8, 10, 12] but we have halved the processing time for this. The same can be said for the reduce operation and is the essence of how MapReduce works in parallel computing.
A: I don't want to sound trite, but this helped me so much, and it's pretty simple:
cat input | map | reduce > output
A: *
*Take a bunch of data
*Perform some kind of transformation that converts every datum to another kind of datum
*Combine those new data into yet simpler data
Step 2 is Map. Step 3 is Reduce.
For example,
*
*Get time between two impulses on a pair of pressure meters on the road
*Map those times into speeds based upon the distance of the meters
*Reduce those speeds to an average speed
The reason MapReduce is split between Map and Reduce is because different parts can easily be done in parallel. (Especially if Reduce has certain mathematical properties.)
For a complex but good description of MapReduce, see: Google's MapReduce Programming Model -- Revisited (PDF).
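The mathematical property alluded to above is mainly associativity: if the reduce function is associative, each worker can reduce its own chunk and the partial results can be reduced again. A tiny Python sketch of that two-level reduction:
from functools import reduce
data = list(range(1, 101))
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]  # pretend: one chunk per worker
partials = [reduce(lambda x, y: x + y, chunk) for chunk in chunks]  # each worker reduces its chunk
total = reduce(lambda x, y: x + y, partials)  # combine the partial results
print(total)  # 5050, the same as reducing the whole list in one pass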
A: MAP and REDUCE are old Lisp functions from a time when man killed the last dinosaurs.
Imagine you have a list of cities with information about the name, the number of people living there and the size of the city:
(defparameter *cities*
'((a :people 100000 :size 200)
(b :people 200000 :size 300)
(c :people 150000 :size 210)))
Now you may want to find the city with the highest population density.
First we create a list of city names and population density using MAP:
(map 'list
(lambda (city)
(list (first city)
(/ (getf (rest city) :people)
(getf (rest city) :size))))
*cities*)
=> ((A 500) (B 2000/3) (C 5000/7))
Using REDUCE we can now find the city with the largest population density.
(reduce (lambda (a b)
(if (> (second a) (second b))
a
b))
'((A 500) (B 2000/3) (C 5000/7)))
=> (C 5000/7)
Combining both parts we get the following code:
(reduce (lambda (a b)
(if (> (second a) (second b))
a
b))
(map 'list
(lambda (city)
(list (first city)
(/ (getf (rest city) :people)
(getf (rest city) :size))))
*cities*))
Let's introduce functions:
(defun density (city)
(list (first city)
(/ (getf (rest city) :people)
(getf (rest city) :size))))
(defun max-density (a b)
(if (> (second a) (second b))
a
b))
Then we can write our MAP REDUCE code as:
(reduce 'max-density
(map 'list 'density *cities*))
=> (C 5000/7)
It calls MAP and REDUCE (evaluation is inside out), so it is called map reduce.
A: Going all the way down to the basics for Map and Reduce.
Map is a function which "transforms" items in some kind of list to another kind of item and puts them back in the same kind of list.
Suppose I have a list of numbers: [1,2,3] and I want to double every number. In this case, the function to "double every number" is function x = x * 2. And without mappings, I could write a simple loop, say
A = [1, 2, 3]
foreach (item in A) A[item] = A[item] * 2
and I'd have A = [2, 4, 6] but instead of writing loops, if I have a map function I could write
A = [1, 2, 3].Map(x => x * 2)
the x => x * 2 is a function to be executed against the elements in [1,2,3]. What happens is that the program takes each item, execute (x => x * 2) against it by making x equals to each item, and produce a list of the results.
1 : 1 => 1 * 2 : 2
2 : 2 => 2 * 2 : 4
3 : 3 => 3 * 2 : 6
so after executing the map function with (x => x * 2) you'd have [2, 4, 6].
Reduce is a function which "collects" the items in lists and perform some computation on all of them, thus reducing them to a single value.
Finding a sum or finding averages are all instances of a reduce function. For example, if you have a list of numbers, say [7, 8, 9], and you want them summed up, you'd write a loop like this
A = [7, 8, 9]
sum = 0
foreach (item in A) sum = sum + A[item]
But, if you have access to a reduce function, you could write it like this
A = [7, 8, 9]
sum = A.reduce( 0, (x, y) => x + y )
Now it's a little confusing why there are 2 arguments (0 and the function with x and y) passed. For a reduce function to be useful, it must be able to take 2 items, compute something and "reduce" that 2 items to just one single value, thus the program could reduce each pair until we have a single value.
The execution would go as follows:
result = 0
7 : result = result + 7 = 0 + 7 = 7
8 : result = result + 8 = 7 + 8 = 15
9 : result = result + 9 = 15 + 9 = 24
But you don't want to start with zeroes all the time, so the first argument is there to let you specify a seed value, specifically the value in the first result = line.
say you want to sum 2 lists, it might look like this:
A = [7, 8, 9]
B = [1, 2, 3]
sum = 0
sum = A.reduce( sum, (x, y) => x + y )
sum = B.reduce( sum, (x, y) => x + y )
or a version you'd more likely to find in the real world:
A = [7, 8, 9]
B = [1, 2, 3]
sum_func = (x, y) => x + y
sum = A.reduce( B.reduce( 0, sum_func ), sum_func )
It's a good thing in DB software because, with Map/Reduce support, you can work with the database without needing to know how the data is stored in it; that's what a DB engine is for.
You just need to be able to "tell" the engine what you want by supplying it with either a Map or a Reduce function, and then the DB engine can find its way around the data, apply your function, and come up with the results you want, all without you knowing how it loops over all the records.
There are indexes and keys and joins and views and a lot of stuff a single database could hold, so by shielding you against how the data is actually stored, your code is made easier to write and maintain.
Same goes for parallel programming, if you only specify what you want to do with the data instead of actually implementing the looping code, then the underlying infrastructure could "parallelize" and execute your function in a simultaneous parallel loop for you.
A: Let's take the example from the Google paper. The goal of MapReduce is to be able to efficiently use a large number of processing units working in parallel for some kinds of algorithms. The example is the following: you want to extract all the words and their counts in a set of documents.
Typical implementation:
for each document
for each word in the document
get the counter associated to the word for the document
increment that counter
end for
end for
MapReduce implementation:
Map phase (input: document key, document)
for each word in the document
emit an event with the word as the key and the value "1"
end for
Reduce phase (input: key (a word), an iterator going through the emitted values)
for each value in the iterator
sum up the value in a counter
end for
Around that, you'll have a master program which will partition the set of documents in "splits" which will be handled in parallel for the Map phase. The emitted values are written by the worker in a buffer specific to the worker. The master program then delegates other workers to perform the Reduce phase as soon as it is notified that the buffer is ready to be handled.
Every worker output (being a Map or a Reduce worker) is in fact a file stored on the distributed file system (GFS for Google) or in the distributed database for CouchDB.
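A single-process Python sketch of that word-count flow, with plain lists and dicts standing in for the workers and buffers:
from collections import defaultdict
documents = {"doc1": "the cat sat", "doc2": "the cat ran"}
# map phase: emit a (word, 1) event for every word, as in the pseudocode above
emitted = []
for name, contents in documents.items():
    for word in contents.split():
        emitted.append((word, 1))
# what the master/buffers arrange: group the emitted values by key
grouped = defaultdict(list)
for word, count in emitted:
    grouped[word].append(count)
# reduce phase: sum the values behind each key
counts = {word: sum(values) for word, values in grouped.items()}
print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}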
A: A really easy, quick and "for dummies" introduction to MapReduce is available at: http://www.marcolotz.com/?p=67
Posting some of its content:
First of all, why was MapReduce originally created?
Basically Google needed a solution for making large computation jobs easily parallelizable, allowing data to be distributed over a number of machines connected through a network. Aside from that, it had to handle machine failure in a transparent way and manage load balancing issues.
What are MapReduce true strengths?
One may say that MapReduce magic is based on the application of the Map and Reduce functions. I must confess, mate, that I strongly disagree. The main feature that made MapReduce so popular is its capability of automatic parallelization and distribution, combined with the simple interface. These factors, summed with transparent failure handling for most errors, made this framework so popular.
A little more depth on the paper:
MapReduce was originally mentioned in a Google paper (Dean & Ghemawat, 2004 – link here) as a solution to make computations in Big Data using a parallel approach and commodity-computer clusters. In contrast to Hadoop, which is written in Java, Google's framework is written in C++. The document describes how a parallel framework would behave using the Map and Reduce functions from functional programming over large data sets.
In this solution there would be two main steps – called Map and Reduce –, with an optional step between the first and the second – called Combine. The Map step runs first, does computations on the input key-value pair and generates a new output key-value pair. One must keep in mind that the format of the input key-value pairs does not need to match the output format. The Reduce step would assemble all values of the same key, performing other computations over them. As a result, this last step would output key-value pairs. One of the most trivial applications of MapReduce is to implement word counts.
The pseudo-code for this application is given below:
map(String key, String value):
// key: document name
// value: document contents
for each word w in value:
EmitIntermediate(w, “1”);
reduce(String key, Iterator values):
// key: a word
// values: a list of counts
int result = 0;
for each v in values:
result += ParseInt(v);
Emit(AsString(result));
As one can notice, the map reads all the words in a record (in this case a record can be a line) and emits the word as a key and the number 1 as a value.
Later on, the reduce will group all values of the same key. Let’s give an example: imagine that the word ‘house’ appears three times in the record. The input of the reducer would be [house,[1,1,1]]. In the reducer, it will sum all the values for the key house and give as an output the following key value: [house,[3]].
Here’s an image of how this would look like in a MapReduce framework:
As a few other classical examples of MapReduce applications, one can say:
•Count of URL access frequency
•Reverse Web-link Graph
•Distributed Grep
•Term Vector per host
In order to avoid too much network traffic, the paper describes how the framework should try to maintain data locality. This means that it should always try to make sure that a machine running Map jobs has the data in its memory/local storage, avoiding fetching it from the network. Aiming to reduce the network throughput of a mapper, the optional combiner step, described before, is used. The Combiner performs computations on the output of the mappers in a given machine before sending it to the Reducers - which may be in another machine.
The document also describes how the elements of the framework should behave in case of faults. These elements, in the paper, are called worker and master. They will be divided into more specific elements in open-source implementations.
Since Google only described the approach in the paper and did not release its proprietary software, many open-source frameworks were created in order to implement the model. As examples one may cite Hadoop or the limited MapReduce feature in MongoDB.
The run-time should take care of details that non-expert programmers shouldn't have to, like partitioning the input data, scheduling the program execution across the large set of machines, handling machine failures (in a transparent way, of course) and managing the inter-machine communication. An experienced user may tune these parameters, such as how the input data will be partitioned between workers.
Key Concepts:
•Fault Tolerance: It must tolerate machine failure gracefully. In order to do this, the master pings the workers periodically. If the master does not receive responses from a given worker within a definite time lapse, the master will mark the work as failed on that worker. In this case, all map tasks completed by the faulty worker are thrown away and given to another available worker. Something similar happens if the worker was still processing a map or a reduce task. Note that if the worker had already completed its reduce part, all computation was already finished by the time it failed and does not need to be reset. As a primary point of failure, if the master fails, the whole job fails. For this reason, one may define periodic checkpoints for the master, in order to save its data structures. All computations that happen between the last checkpoint and the master failure are lost.
•Locality: In order to avoid network traffic, the framework tries to make sure that all the input data is locally available to the machines that are going to perform computations on it. In the original description, it uses Google File System (GFS) with a replication factor set to 3 and block sizes of 64 MB. This means that the same block of 64 MB (which composes a file in the file system) will have identical copies in three different machines. The master knows where the blocks are and tries to schedule map jobs on those machines. If that fails, the master tries to allocate a machine near a replica of the task's input data (i.e. a worker machine in the same rack as the data machine).
•Task Granularity: Assuming that each map phase is divided into M pieces and that each Reduce phase is divided into R pieces, the ideal would be that M and R are a lot larger than the number of worker machines. This is due to the fact that a worker performing many different tasks improves dynamic load balancing. Aside from that, it increases the recovery speed in the case of worker failure (since the many map tasks it has completed can be spread out across all the other machines).
•Backup Tasks: Sometimes, a Map or Reduce worker may behave a lot more slowly than the others in the cluster. This may hold up the total processing time and make it equal to the processing time of that single slow machine. The original paper describes an alternative called Backup Tasks: when a MapReduce operation is close to completion, the master schedules backup copies of the in-progress tasks. Thus, the MapReduce operation completes when either the primary or the backup finishes.
•Counters: Sometimes one may desire to count event occurrences. For this reason, counters were created. The counter values in each worker are periodically propagated to the master. The master then aggregates (yep, looks like Pregel aggregators came from this place) the counter values of successful map and reduce tasks and returns them to the user code when the MapReduce operation is complete. There is also a current counter value available in the master status, so a human watching the process can keep track of how it is behaving.
Well, I guess with all the concepts above, Hadoop will be a piece of cake for you. If you have any question about the original MapReduce article or anything related, please let me know.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "176"
} |
Q: Parsing XML using unix terminal Sometimes I need to quickly extract some arbitrary data from XML files to put into a CSV format. What's your best practices for doing this in the Unix terminal? I would love some code examples, so for instance how can I get the following problem solved?
Example XML input:
<root>
<myel name="Foo" />
<myel name="Bar" />
</root>
My desired CSV output:
Foo,
Bar,
A: Use a command-line XSLT processor such as xsltproc, saxon or xalan to parse the XML and generate CSV. Here's an example; for your case, the stylesheet is:
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text"/>
<xsl:template match="root">
<xsl:apply-templates select="myel"/>
</xsl:template>
<xsl:template match="myel">
<xsl:for-each select="@*">
<xsl:value-of select="."/>
<xsl:value-of select="','"/>
</xsl:for-each>
<xsl:text>&#10;</xsl:text>
</xsl:template>
</xsl:stylesheet>
A: If you just want the name attributes of any element, here is a quick but incomplete solution.
(Your example text is in the file example)
grep "name" example | cut -d"\"" -f2,2
| xargs -I{} echo "{},"
A: XMLStarlet is a command line toolkit to query/edit/check/transform
XML documents (for more information, see XMLStarlet Command Line XML Toolkit)
No files to write, just pipe your file to xmlstarlet and apply an xpath filter.
cat file.xml | xml sel -t -m 'xpathExpression' -v 'elemName' 'literal' -v 'elname' -n
-m expression
-v value
'' included literal
-n newline
So for your case the XPath expression would be //myel/@name, which would provide the two attribute values.
Very handy tool.
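For the sample input above that works out to something like this (a sketch; -o emits a literal, but check xml sel --help on your version for the exact flags):
xml sel -t -m '//myel' -v '@name' -o ',' -n file.xml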
A: Here's a little ruby script that does exactly what your question asks (pull an attribute called 'name' out of elements called 'myel'). Should be easy to generalize
#!/usr/bin/ruby -w
require 'rexml/document'
xml = REXML::Document.new(File.open(ARGV[0].to_s))
xml.elements.each("//myel") { |el| puts "#{el.attributes['name']}," if el.attributes['name'] }
A: Peter's answer is correct, but it outputs a trailing line feed.
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="text"/>
<xsl:template match="root">
<xsl:for-each select="myel">
<xsl:value-of select="@name"/>
<xsl:text>,</xsl:text>
<xsl:if test="not(position() = last())">
<xsl:text>
</xsl:text>
</xsl:if>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
Just run e.g.
xsltproc stylesheet.xsl source.xml
to generate the CSV results into standard output.
A: Your test file is in test.xml.
sed -n 's/^\s*<myel\s*name="\([^"]*\)".*$/\1,/p' test.xml
It has its pitfalls; for example if it is not strictly given that each myel is on one line you have to "normalize" the XML file first (so each myel is on a separate line).
A: Answering the original question, assuming the xml file is "test.xml" and contains:
<root>
<myel name="Foo" />
<myel name="Bar" />
</root>
tr -s "\"" " " < text.xml | awk '{printf "%s,\n", $3}'
A: Using xidel:
xidel -s input.xml -e '//myel/concat(@name,",")'
A: yq can be used for XML parsing.
It is a lightweight and portable command-line YAML processor and can also deal with XML.
The syntax is similar to jq
Input
<root>
<myel name="Foo" />
<myel name="Bar">
<mysubel>stairway to heaven</mysubel>
</myel>
</root>
usage example 1
yq e '.root.myel.0.+name' $INPUT (version >= 4.30: yq e '.root.myel.0.+@name' $INPUT)
Foo
usage example 2
yq has a nice builtin feature to make XML easily grep-able
yq --input-format xml --output-format props $INPUT
root.myel.0.+name = Foo
root.myel.1.+name = Bar
root.myel.1.mysubel = stairway to heaven
usage example 3
yq can also convert an XML input into JSON or YAML
yq --input-format xml --output-format json $INPUT
{
"root": {
"myel": [
{
"+name": "Foo"
},
{
"+name": "Bar",
"mysubel": "stairway to heaven"
}
]
}
}
yq --input-format xml $FILE (YAML is the default format)
root:
myel:
- +name: Foo
- +name: Bar
mysubel: stairway to heaven
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Is there a way to combine named scopes into a new named scope? I have
class Foo < ActiveRecord::Base
named_scope :a, lambda { |a| { :conditions => { :a => a } } }
named_scope :b, lambda { |b| { :conditions => { :b => b } } }
end
I'd like
class Foo < ActiveRecord::Base
named_scope :ab, lambda { |a,b| { :conditions => { :a => a, :b => b } } }
end
but I'd prefer to do it in a DRY fashion. I can get the same effect by using
Foo.a(something).b(something_else)
but it's not particularly lovely.
A: Well I'm still new to rails and I'm not sure exactly what you're going for here, but if you're just going for code reuse why not use a regular class method?
def self.ab(a, b)
a(a).b(b)
end
You could make that more flexible by taking *args instead of a and b, and then possibly make one or the other optional. If you're stuck on named_scope, can't you extend it to do much the same thing?
Let me know if I'm totally off base with what you're wanting to do.
A: At least since 3.2 there is a clever solution:
scope :optional, ->() {where(option: true)}
scope :accepted, ->() {where(accepted: true)}
scope :optional_and_accepted, ->() { self.optional.merge(self.accepted) }
A: By making it a class method you won't be able to chain it to an association proxy, like:
@category.products.ab(x, y)
An alternative is applying this patch to enable a :through option for named_scope:
named_scope :a, :conditions => {}
named_scope :b, :conditions => {}
named_scope :ab, :through => [:a, :b]
A: Yes, see Reusing named_scope to define another named_scope.
I copy it here for your convenience:
You can use proxy_options to recycle one named_scope into another:
class Thing
#...
named_scope :billable_by, lambda{|user| {:conditions => {:billable_id => user.id } } }
named_scope :billable_by_tom, lambda{ self.billable_by(User.find_by_name('Tom')).proxy_options }
#...
end
This way it can be chained with other named_scopes.
I use this in my code and it works perfectly.
I hope it helps.
A: @PJ: you know, I had considered that, but dismissed it because I thought I wouldn't be able to later chain on a third named scope, like so:
Foo.ab(x, y).c(z)
But since ab(x, y) returns whatever b(y) would return, I think the chain would work. Way to make me rethink the obvious!
A: Check out:
http://github.com/binarylogic/searchlogic
Impressive!
To be specific:
class Foo < ActiveRecord::Base
#named_scope :ab, lambda { |a,b| :conditions => { :a => a, :b => b } }
# alias_scope, returns a Scope defined procedurally
alias_scope :ab, lambda {
Foo.a.b
}
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: SharePoint SPContext.List in a custom application page I have a custom SharePoint application page deployed to the _layouts folder. It's a custom "new form" for a custom content type. During my interactions with this page, I will need to add an item to my list. When the page first loads, I can use SPContext.Current.List to see the current list I'm working with. But after I fill in my form and the form posts back onto itself and IsPostBack is true, then SPContext.Current.List is null so I can't find the list that I need to add my stuff into.
Is this expected?
How should I retain some info about my context list across the postback? Should I just populate some asp:hidden control with my list's guid and then just pull it back from that on the postback? That seems safe, I guess.
FWIW, this is the MOSS 2007 Standard version.
A: Generally speaking I try and copy whatever approach the product group has taken when looking to add functionality of my own. In this case they add their own edit/view/add pages via the list definition itself.
I built a solution that also needed its own custom "New" form, not open source unfortunately, though if you are interested you can download it, its called "Tagged Links" (Social Bookmarking for SharePoint) and you can find some links on my blog.
To give you a few hints and tips, the following should set you off in the right direction:
*
*Created a new list definition.
*Created a new Content Type. In the content type you can define your own "FormTemplates" that reference a Rendering Template which determines what gets displayed in the "middle" bit of those forms.
*Copied the standard Rendering Template, but then made the changes to it that I needed.
*Wrapped it all up in a solution, and deployed.
My Rendering Template actually included an overridden "Save" Button where I did a lot of the extra work I needed to do during the save.
Anyway, it is a little too much work in my opinion but, I think, it most closely matches the standard approach taken by the product developers. Let me know if you need more detail and I will see if I can put together a step-by-step blog post, but hopefully this gets you off on the right direction.
A: I would be surprised if you could do something in a _Layouts file that you can't do in a forms template. You have pretty much the same technologies at your disposal.
Looking at the way SharePoint works with ListItems and Layouts pages (for example "Manage Permissions" on a list item), I can see that they pass some variables in via querystrings:
?obj={76113B3A-FABA-4389-BC85-4BB2CC5AB423},6,LISTITEM&List={76113B3A-FABA-4389-BC85-4BB2CC5AB423}
Perhaps they grab the context back each time programmatically using these values.
A: I'm not using a custom "new form", so this might not apply. I added an event receiver to my custom content type and then do my custom code in the ItemAdded or ItemAdding events. This code fires when the event is added to a list. You can use the event receiver properties to get to the parent List, Web, and Site.
A: I'd like to think my issue is "special" here, since I am using a custom form. I chose to use a custom form rather than a custom FormTemplate simply because I'm doing a lot of stuff that's not very SharePoint list-like (making ajax calls to get info from a third-party app then generating some dynamic form elements based on that ajax result, then subsequent processing of that data on postback). I thought it'd be a nightmare to try this within the usual custom rendering template mechanism.
I also don't think I can supply the custom form declarations in the list definition itself, because I have multiple content types associated with this list, and each content type has its own custom form (the other type is thankfully much simpler).
Actually, my simple way of keeping the list guid in my hidden field was a very low impact way to address this specific problem. My main concern is that I'm not sure why the SPContext just loses all its usefulness when I postback here, which makes me think I'm doing something wrong.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Linq To SQL: Can I eager load only one field in a joined table? I have one table "orders" with a foreign key "ProductID".
I want to show the orders in a grid with the product name, without LazyLoad for better performance, but if I use DataLoadOptions it retrieves all Product fields, which seems like overkill.
Is there a way to retrieve only the Product name in the first query?
Can I set some attribute in the DBML?
This table says that "Foreign-key values" are "Visible" in Linq To SQL, but I don't know what this means.
Edit: Changed the title, because I'm not really sure the there is no solution.
Can't believe no one has the same problem, it is a very common scenario.
A: What you are asking for is a level of optimisation that LINQ to SQL does not provide. I think your best bet is to create a query that returns exactly the data you want, possibly as an anonymous type:
from order in DB.GetTable<Orders>()
join product in DB.GetTable<Products>()
on order.ProductID equals product.ID
select new { ID = order.ID, Name = order.Name, ProductName = product.Name };
A: If you select only the columns you want in the linq query, and then call .ToList() on the query, it will be immediately executed, and will only bring back the columns you are interested in. For example if you do this:
var q = from p in dataContext.products select p.ProductName;
var results = q.ToList();
You will get back a list of product names stored in results, and when the query executes on the server it will only bring back the ProductName column.
A: I got the solution in this other question, Which .net ORM can deal with this scenario, which is related to liammclennan's answer but more clear (maybe the question was more clear too).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Good Free Alternative To MS Access Consider the need to develop a lightweight desktop DB application on the Microsoft platforms.
It could be done fairly easily with MS Access but I'd like to be able to distribute it to others and I don't want to pay for a runtime license.
Requirements:
*
*easy distribution to others
*no runtime licensing issues
Considerations and Candidates:
*
*Base from the OpenOffice suite. My concerns were around its stability.
*MySQL + writing custom DB code in C++ or Python or whatever seems like a rather heavy-handed solution.
Question: What are the low cost or free database alternatives to MS Access?
See Also: Open Source Reporting Engines
@Schnapple
Bruceatk kind of hit on what I'm thinking of; it's not so much the DB engine as I want the other niceties that Access brings to the party. The nice form designer, the nice reporting engine etc. But you do raise a very good point about the installation footprint. I had considered that but I've not made any firm decisions about which way I'm going with this yet anyway. It'll probably be something fairly lightweight anyway and a small installation footprint would definitely be a plus.
@Remou,
No I was unaware that the MS Access 2007 runtime is free; thanks for pointing that out. The last time I'd bothered to investigate it (I don't remember when it was) I think it was a fairly expensive license for the runtime because I think they were trying to sell it to Corporate IT departments.
And thanks to everyone else who responded as well; I was completely unaware of those other options you all pointed out.
A: Oracle XE With Application Express.
*
*Has a nice web based gui,
*Is a "Real" database
*Will scale beyond a single desktop
*Offers a clear scale path beyond a small team
*Applications as web based, easily accessible.
*Can convert Excel spread sheets into Applications
A: The issue is finding an alternative to MS Access that includes a visual, drag and drop development environment with a "reasonable" database where the whole kit and caboodle can be deployed free of charge.
My first suggestion would be to look at this very complete list of MS Access alternatives (many of which are free), followed by a gander at this list of open source database development tools on osalt.com.
My second suggestion would be to check out WaveMaker, which is sort of an open source PowerBuilder for the cloud (disclaimer: I work there so should not be considered to be an unbiased source of information ;-)
WaveMaker combines a drag and drop IDE with an open source Java back end. It is licensed under the Apache license and boasts a 15,000-strong developer community.
A: NuBuilder (www.nubuilder.net) might be right.
NuBuilder is a GPLv3-licensed PHP web application that requires MySQL as backend database. Users and programmers both use the web interface.
They promote it as a free, web based MS Access alternative.
I'm creating my second NuBuilder application these days. NuBuilder seems to be very actively developed, and I found it stable and well documented (provided you can stand video tutorials.)
A: Are you aware that the Access 2007 runtime can be downloaded for free?
Links for newer versions:
*
*2010 Runtime
*2013 Runtime
*2016 Runtime
A: Schnapple asks:
Are you referring to the concept of a free database to distribute with an application, or an Access-like "single file, no installation" database?
Er, nobody who has any competence with Access application development would ever distribute a single MDB/ACCDB as application/data store. Any non-trivial Access application needs to be split into a front end with the forms/queries/reports (i.e., UI objects) and a back end (data tables only).
It's clear that what is needed here is a database application development tool like Access. None of the database-only answers are in any way responsive to that.
Please learn about Access before answering Access questions:
*
*Access is a database application development tool that ships with a default database engine called Jet.
*But an Access application can be built to work with data in almost any back end database, as long as there's an ISAM, or an ODBC or OLEDB driver for that database engine.
Microsoft itself has done a good job of obfuscating the difference between Access (development tool) and Jet (database engine), so it's not surprising that many people don't recognize the difference. But developers ought to use precise language, and when you mean the database engine, use "Jet", and when you mean the front-end development platform, use "Access".
A: One thing to keep in mind here is the MS Access product is much more than just the raw database engine. It provides a full application development platform, including form and menu designer, client application language and environment (VBA), and report designer. When you take all those things together, MS Access really has no peer.
But for the scope of this question, we're concerned with the raw database engine. With that in mind:
SQLite,
Firebird,
VistaDB (not free),
SQL Server Compact Edition (not Express)
and now SQL Server LocalDB
all come to mind.
Another thought: while the original question does ask about desktop databases, its likely some people will land here looking for a database to use with a web site. It's important to remember that these are all in-process databases, and as such are rarely if ever appropriate for use on the web. If you want to build a web site, where it's common to need to support significant concurrent access, you generally want a database server engine, like MS SQL, Postgresql, MySQL, Oracle, or their brethren. At the same time, those server engines are rarely if ever appropriate for a single-user desktop application.
A: You may want to look into SQLite (http://sqlite.org/). All depends on your usage though. Concurrency for example is not its greatest virtue. But for example Firefox uses it to store settings etc..
A: In the context of a programming forum, we don't usually think of the programmer also needing the application portion of the database. Normally a programmer wants to use their own development environment for the business logic and front end, and just use the store, query, retrieval, and data processing capabilities of the database.
If you really want all those other things, then you're talking about a much larger and more complicated run time environment. You're not going to find anything that's 'lightweight' any more. Even MS Access itself no longer qualifies, because it's hardly light weight. It's just lucky in that a lot of users might already have it, making it appear to be light weight.
This doesn't mean you won't find anything. Just that it's not likely to have the same level of maturity or distribution as Access, especially since the underlying access engine is already baked into Windows.
A: The Access runtime license has never been all that expensive -- the cost for the developer tools/extensions has been around $300 as long as I can remember (which goes as far back as the Access 2 Developers Toolkit, or ADT), but that gives you the ability to distribute your app with the runtime to an unlimited number of users. As long as your runtime app was used by three or more users, you'd have been saving money (assuming a cost of $100/user to install a full copy of Access).
The runtime for Access 2007 is completely free, but really, the cost before that was not all that great.
Marc Gravell added (in what should have been a comment, in my opinion):
Being free, though, is certainly an encouragement for people to try it out which the $300 price really would have discouraged.
A: VistaDB has an express version which is free to use and is syntax and driver compatible with SQL Server. VistaDB is a single file and only requires their driver .dll to work in your asp.net or winforms project.
Since it is syntax and datasource compatible you can upgrade to SQL Server if needed.
from their site:
VistaDB is a fully managed and typesafe database engine for ASP.NET and WinForms applications using C#, VB.NET and other CLR-compliant languages.
VistaDB.net
A: You mentioned Python, have you considered Dabo?
http://dabodev.com/
That would avoid much of the grunt work in a custom app.
A: Are you referring to the concept of a free database to distribute with an application, or an Access-like "single file, no installation" database?
As in, things like SQL Server Express Edition require things like runtimes to be installed, databases to be created and mounted, entries on people's Start menus that they won't recognize (my wife asked why SQL Server was on her laptop the other day) whereas an Access database can be run in a single file.
I guess what I'm asking is do you want to think of the database as a document you write to or as an instance of something on someone else's machine?
A: What about r:Base? Way back in the day r:Base was a very robust DOS (then Windows) RDBMS, and this was pre-Access / pre-Paradox days. Its closest competitor was dBase, but that wasn't fully relational at the time. I developed some very nice r:Base applications AND, like Access today, it had a built-in report generator, forms facility, queries and table manipulation. To my surprise, it's still alive! http://www.rbase.com/ It's got all that Access offers, it seems. Might be something for you to consider.
A: Kexi 2007.1.1 may be what you are looking for.
Its express version is free but limited in DB size. The full version costs $72.
The description from its home page:
Kexi is an easy to use application for visual database design for Linux and MS Windows. Kexi competes with MS Access, FoxPro, Oracle Forms and FileMaker.
Visit http://www.kexi-project.org/about.html for details.
A: Apache Derby is a nice db alternative.
A: Gambas
A: Much in line with Aurelio's answer, I now work in Ruby on Rails on some applications that I might formerly have done in MS Access. The back end database for a Rails App. is usually, MySql (works well enough and is available on most shared Web hosting) or PostgreSQL (the better choice when possible).
A: What about Microsoft's Visual Studio Express?
http://www.microsoft.com/express/default.aspx
SQL Server Express is also at that link...
A: I had the same problem as you. I had an MS Access application, but I wanted to move to a web application accessible to everybody and without paying money to MS. So I decided to use MySQL and WaveMaker (open source) to get there. I'm very happy with this decision, and this is the result: http://www.mara-database.org/
A: Also check out http://www.sagekey.com/installation_access.aspx for great installation scripts for Ms Access. Also if you need to integrate images into your application check out DBPix at ammara.com
A: What you appear to be looking for is not just a database program, but a database with forms, reports, etc (basically an IDE of sorts). I would recommend trying OpenOffice.org Base, which comes with the office suite. It's free and open source. It's nowhere near as polished as access, but it does pretty much the same things.
Plus, if you know access, it will be at least somewhat familiar.
http://www.openoffice.org/
EDIT: Sorry, failed to read that you are considering OpenOffice.org. With regard to stability, I've had it crash and do some "odd" things when I played with it, but Access has done the same thing. The best way to find out is to play with it a bit and see if it suits you.
A: To be honest - there aren't any free alternatives to MS Access. At least if you mean database development tool (forms, reports, queries, VBA support etc.). If you think about MS Access as a database engine (you mean MS Jet or ACE in fact) then yes - you have a lot of possibilities. There are a lot of free database engines - the most popular are MySQL and PostgreSQL. I can recommend both - it depends what you want to do.
For writing database frontends C++ is one of the worst choices. You should consider MS Visual C#, MS Visual Basic .NET or... Even Java/Swing (if we are talking about desktop application). If you think about the web-enabled frontend - consider PHP (with MySQL or PostgreSQL on the backend) or ASP.NET (with MSSQL Server at the backend).
I strongly recommend you not to use C++ for such job. This language is very efficient and flexible, but advanced database frontend development with C++ is not the best idea. C++ is great in system programming, games development, maths and physics simulations, everywhere where efficiency is the key - like real-time applications etc. Frontends don't have to be daemons of speed - they should look nice and have advanced end-user features (like sorting, coloring etc.). If you are looking for free tools - maybe C# Express or Visual Basic.NET Express 2008 would be the proper choice? Or maybe Java/Swing (check the NetBeans IDE)? Maybe SharpDevelop? But not C++... Leave C++ for the things it suits the best.
A: Check out suneido.
I made a fairly complicated GIS app as an experiment with it some years ago (database, complex gui, reports, client/server). It was a pleasant experience (apart from some documentation issues...) and I became productive with it very fast.
I don't use it anymore mainly because:
*
*it's not really general purpose
*it's not cross platform (windows only)
*I decided to stop exploring exotic technologies and specialize in something more mainstream.
A: When people ask about a replacement for Access, a lot of them only think about the database, but what they are really asking about are all of the other features in Access. They usually don't care what database Access is using.
Some of the functionality provided by Access are: Forms, Query Building, Reports, Macros, Database Management, and some kind of language when you need to go beyond what the wizards provide.
SQLite, MySQL, and FireBird are free database back ends. They do not have those additional Access functions built into them. Any free alternative to Access requires combining something like SQLite with a development language.
Probably the best free option would be SQLite and Visual Basic 2008 or C# 2008 Express Edition. This would have a heavy runtime dependency, so installing on a bare client could take quite the installer.
There really isn't a non-Access option for free with minimum runtime requirements. I wish there was.
I'll be interested in hearing if anybody knows any good alternatives.
A: Of the Free Software alternatives these haven't been mentioned yet:
*
*Bond
*Rekall (not sure about the status of the Windows version currently though)
*Glom (Windows version under development)
I'd also keep an eye on what DB RAD tools the Flex/Air community is coming up with, since with those tools it's possible to get unified desktop and web interfaces.
A: I think the database included with OpenOffice.org has the form designer in it. I've never tried writing code for it though. A forum post I saw had a link to a tutorial they said had some code in it.
I started to set up a database for my wife and the interface was coming out pretty good as far as I could tell.
oooForum.org tutorial
A: For SQLite, check out the Firefox extension. It offers a serviceable GUI.
A: VistaDB is the only alternative if you're going to run your website on shared hosting (almost all of them won't let you run your websites under Full Trust mode) and also if you need a simple x-copy-deployable website.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105"
} |
Q: Javascript Browser Quirks - array.Length Code:
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Unusual Array Lengths!</title>
<script type="text/javascript">
var arrayList = new Array();
arrayList = [1, 2, 3, 4, 5, ];
alert(arrayList.length);
</script>
</head>
<body>
</body>
</html>
Notice the extra comma in the array declaration.
The code above gives different outputs for various browsers:
Safari: 5
Firefox: 5
IE: 6
The extra comma in the array is being ignored by Safari and FF while IE treats it as another object in the array.
On some search, I have found mixed opinions about which answer is correct. Most people say that IE is correct but then Safari is also doing the same thing as Firefox. I haven't tested this on other browsers like Opera but I assume that there are discrepancies.
My questions:
i. Which one of these is correct?
Edit: By general consensus (and ECMAScript guidelines) we assume that IE is again at fault.
ii. Are there any other such Javascript browser quirks that I should be wary of?
Edit: Yes, there are loads of Javascript quirks. www.quirksmode.org is a good resource for the same.
iii. How do I avoid errors such as these?
Edit: Use JSLint to validate your javascript. Or, use some external libraries. Or, sanitize your code.
Thanks to DamienB, JasonBunting, John and Konrad Rudolph for their inputs.
A: It seems to me that the Firefox behavior is correct. What is the 6th value in IE (sorry, I don't have it handy to test)? Since there is no actual value provided, I imagine it's filling it with something like 'null', which certainly doesn't seem to be what you intended when you created the array.
At the end of the day though, it doesn't really matter which is "correct" since the reality is that either you are targeting only one browser, in which case you can ignore what the others do, or you are targeting multiple browsers in which case your code needs to work on all of them. In this case the obvious solution is to never include the dangling comma in an array initializer.
If you have problems avoiding it (e.g. for some reason you have developed a (bad, imho) habit of including it) and other problems like this, then something like JSLint might help.
A: I was intrigued so I looked it up in the definition of ECMAScript 262 ed. 3 which is the basis of JavaScript 1.8. The relevant definition is found in section 11.1.4 and unfortunately is not very clear. The section explicitly states that elisions (= omissions) at the beginning or in the middle don't define an element but do contribute to the overall length.
There are no explicit statements about redundant commas at the end of the initializer, but by omission I conclude that the above statement implies they do not contribute to the overall length, and therefore that MSIE is wrong.
The relevant paragraph reads as follows:
Array elements may be elided at the beginning, middle or end of the element list. Whenever a comma in the element list is not preceded by an Assignment Expression (i.e., a comma at the beginning or after another comma), the missing array element contributes to the length of the Array and increases the index of subsequent elements. Elided array elements are not defined.
A: "3" for those cases, I usually put in my scripts
if(!arrayList[arrayList.length -1]) arrayList.pop();
You could make a utility function out of that.
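For example, a hypothetical helper along these lines. Note that the truthiness test above would also strip a legitimate falsy trailing value (0, "", false), so this version checks specifically for an undefined trailing element:
function trimDanglingElement(arr) {
    // Only IE's phantom trailing element is undefined; real values survive.
    if (arr.length > 0 && typeof arr[arr.length - 1] === "undefined") {
        arr.pop();
    }
    return arr;
}

var arrayList = [1, 2, 3, 4, 5, ];
alert(trimDanglingElement(arrayList).length); // 5 in IE, Firefox and Safari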
A: First off, Konrad is right to quote the spec, as that is what defines the language and answers your first question.
To answer your other questions:
Are there any other such Javascript browser quirks that I should be wary of?
Oh, too many to list here! Try the QuirksMode website for a good place to find nearly everything known.
How do I avoid errors such as these?
The best way is to use a library that abstracts these problems away for you so that you can get down to worrying about the logic of the application. Although a bit esoteric, I prefer and recommend MochiKit.
A:
Which one of these is correct?
Opera also returns 5. That means IE is outnumbered and majority rules as far as what you should expect.
A: ECMA-262 edition 5.1, section 11.1.4 (array initializer), states that a comma at the end of the array does not contribute to the length of the array: "if an element is elided at the end of the array it does not contribute to the length of the array"
That means [ "x", ] is perfectly legal javascript and should return an array of length 1
A: @John: The value of arrayList[5] comes out to be 'undefined'.
Yes, there should never be a dangling comma in declarations. Actually, I was just going through someone else's long, long javascript code which somehow was not working correctly in different browsers. It turned out that the dangling comma was the culprit: it had accidentally been typed in! :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How do you use ssh in a shell script? When I try to use an ssh command in a shell script, the command just sits there. Do you have an example of how to use ssh in a shell script?
A: Depends on what you want to do, and how you use it. If you just want to execute a command remotely and safely on another machine, just use
ssh user@host command
for example
ssh user@host ls
In order to do this safely you need to either ask the user for the password during runtime, or set up keys on the remote host.
A: First, you need to make sure you've set up password-less (public key) login. There are at least two flavors of ssh with slightly different configuration file formats. Check the ssh manpage on your system, consult your local sysadmin or head over to How do I setup Public-Key Authentication?.
To run ssh in batch mode (such as within a shell script), you need to pass a command you want to be run. The syntax is:
ssh host command
If you want to run more than one command at the same time, use quotes and semicolons:
ssh host "command1; command2"
The quotes are needed to protect the semicolons from the shell interpreter. If you left them out, only the first command would be run remotely and all the rest would be run on the local machine.
A: You need to put your SSH public key into the ~/.ssh/authorized_keys file on the remote host. Then you'll be able to SSH to that host password-less.
Alternatively you can use ssh-agent. I would recommend against storing the password in the script.
A: You can use the expect command to supply the username/password info.
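For example, a minimal hedged expect sketch (user, host and the password are placeholders; key-based login, as described in the other answers, is generally preferable to embedding a password):
#!/usr/bin/expect -f
# Run a single remote command, answering the password prompt automatically.
spawn ssh user@host "uptime"
expect "password:"
send "secret\r"
expect eof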
A: The easiest way is using a certificate for the user that runs the script.
A more complex approach involves supplying the password on stdin when the shell command asks for it. Expect, Perl libraries, or showing the user a prompt asking for the password (if it's interactive, at least): there are a lot of choices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Creating a UserControl Programmatically within a repeater? I have a repeater that is bound to some data.
I bind to the ItemDataBound event, and I am attempting to programmatically create a UserControl:
In a nutshell:
void rptrTaskList_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
CCTask task = (CCTask)e.Item.DataItem;
if (task is ExecTask)
{
ExecTaskControl foo = new ExecTaskControl();
e.Item.Controls.Add(foo);
}
}
The problem is that while the binding works, the user control is not rendered to the main page.
A: Eh, figured out one way to do it:
ExecTaskControl foo = (ExecTaskControl)LoadControl("tasks\\ExecTaskControl.ascx");
It seems silly to have a file dependency like that, but maybe that's how UserControls must be done.
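For reference, a sketch of the original handler with that fix folded in (the type names are the ones from the question; the .ascx path is illustrative):
void rptrTaskList_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    CCTask task = (CCTask)e.Item.DataItem;
    if (task is ExecTask)
    {
        // LoadControl runs the .ascx through the page lifecycle,
        // which is why it renders where a plain 'new' does not.
        ExecTaskControl foo =
            (ExecTaskControl)LoadControl("tasks\\ExecTaskControl.ascx");
        e.Item.Controls.Add(foo);
    }
}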
A: You could consider inverting the problem. That is, add the control to the repeater's definition and then remove it if it is not needed. Not knowing the details of your app, this might be a tremendous waste of time, but it might just work out in the end.
A: I think that @Craig is on the right track depending on the details of the problem you are solving. Add it to the repeater and remove it or set Visible="false" to hide it where needed. Viewstate gets tricky with dynamically created controls/user controls, so google that or check here if you must add dynamically. The article referenced also shows an alternative way to load dynamically:
Control ctrl=this.LoadControl(Request.ApplicationPath +"/Controls/" +ControlName);
A: If you are going to do it from a place where you don't have an instance of a page then you need to go one step further (e.g. from a webservice to return html or from a task rendering emails)
var myPage = new System.Web.UI.Page();
var myControl = (Controls.MemberRating)myPage.LoadControl("~/Controls/MemberRating.ascx");
I found this technique on Scott Guthrie's site so I assume it's the legit way to do it in .NET
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the difference between a group and match in .NET's RegEx? What is the difference between a Group and a Match in .NET's RegEx?
A: A Match is an object that indicates a particular regular expression matched (a portion of) the target text. A Group indicates a portion of a match, if the original regular expression contained group markers (basically a pattern in parentheses). For example, with the following code:
string text = "One car red car blue car";
string pat = @"(\w+)\s+(car)";
Match m = r.Match(text);
m would be match object that contains two groups - group 1, from (\w+), and that captured "One", and group 2 (from (car)) that matched, well, "car".
A: A Match is a part of a string that matches the regular expression, and there could therefore be multiple matches within a string.
Inside a Match you can define groups, either anonymous or named, to make it easier to split up a match. A simple example is to create a regex to search for URLs, and then use groups inside to find the protocol (http), domain (www.web.com), path (/lol/cats.html) and arguments and what not.
// Example I made up on the spot, probably doesn't work very well
"(?<protocol>\w+)://(?<domain>[^/]+)(?<path>/[^?])"
A single pattern can be found multiple times inside a string, as I said, so if you use Regex.Matches(string text) you will get back multiple matches, each consisting of zero, one or more groups.
Those named groups can be found by either indexing by number, or with a string. The example above can be used like this:
Match match = pattern.Match(urls);
if (!match.Success)
continue;
string protocol = match.Groups["protocol"].Value;
string domain = match.Groups[1].Value;
To make things even more interesting, one group can be matched multiple times, but at that point I recommend starting with the documentation.
You can also use groups to generate back references, and to do partial search and replace, but read more of that on MSDN.
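As a small illustration of replacement with group references (reusing the URL example above; the pattern is simplified and only meant as a sketch):
using System.Text.RegularExpressions;

string url = "http://www.web.com/lol/cats.html";
string pattern = @"(?<protocol>\w+)://(?<domain>[^/]+)";
// ${name} in the replacement string refers back to a named group.
string flipped = Regex.Replace(url, pattern, "${domain} (via ${protocol})");
// flipped == "www.web.com (via http)/lol/cats.html"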
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What makes a language Object-Oriented? Since debate without meaningful terms is meaningless, I figured I would point at the elephant in the room and ask: What exactly makes a language "object-oriented"? I'm not looking for a textbook answer here, but one based on your experiences with OO languages that work well in your domain, whatever it may be.
A related question that might help to answer first is: What is the archetype of object-oriented languages and why?
A: According to Booch, the following elements:
Major:
*
*Abstraction
*Encapsulation
*Modularity
*Hierarchy (Inheritance)
Minor:
*
*Typing
*Concurrency
*Persistence
A: Basically Object Oriented really boils down to "message passing"
In a procedural language, I call a function like this :
f(x)
And the name f is probably bound to a particular block of code at compile time. (Unless this is a procedural language with higher-order functions or pointers to functions, but let's ignore that possibility for a second.) So this line of code can only mean one unambiguous thing.
In an object oriented language I pass a message to an object, perhaps like this :
o.m(x)
In this case, m is not the name of a block of code, but a "method selector", and which block of code gets called actually depends on the object o in some way. This line of code is more ambiguous or general because it can mean different things in different situations, depending on o.
In the majority of OO languages, the object o has a "class", and the class determines which block of code is called. In a couple of OO languages (most famously, Javascript) o doesn't have a class, but has methods directly attached to it at runtime, or has inherited them from a prototype.
My demarcation is that neither classes nor inheritance are necessary for a language to be OO. But this polymorphic handling of messages is essential.
Although you can fake this with function pointers in say C, that's not sufficient for C to be called an OO language, because you're going to have to implement your own infrastructure. You can do that, and a OO style is possible, but the language hasn't given it to you.
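To make that last point concrete, here's a hedged C sketch of faking the dispatch with a function pointer; the struct and names are invented for illustration, and this is exactly the infrastructure the language doesn't give you:
#include <stdio.h>

typedef struct Shape {
    /* the "method selector": which code runs depends on the object */
    double (*area)(const struct Shape *self);
    double w, h;
} Shape;

static double rect_area(const Shape *s)     { return s->w * s->h; }
static double triangle_area(const Shape *s) { return s->w * s->h / 2.0; }

int main(void) {
    Shape shapes[2] = {
        { rect_area, 3.0, 4.0 },
        { triangle_area, 3.0, 4.0 },
    };
    int i;
    for (i = 0; i < 2; i++)
        printf("%f\n", shapes[i].area(&shapes[i])); /* o.m() in spirit */
    return 0;
}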
A: It's not really the languages that are OO, it's the code.
It is possible to write object-oriented C code (with structs and even function pointer members, if you wish) and I have seen some pretty good examples of it. (Quake 2/3 SDK comes to mind.) It is also definitely possible to write procedural (i.e. non-OO) code in C++.
Given that, I'd say it's the language's support for writing good OO code that makes it an "Object Oriented Language." I would never bother with using function pointer members in structs in C, for example, for what would be ordinary member functions; therefore I will say that C is not an OO language.
(Expanding on this, one could say that Python is not object-oriented, either, with the mandatory "self" reference on every step and constructors called __init__, and whatnot; but that's a Religious Discussion.)
A: Definitions for Object-Orientation are of course a huge can of worms, but here are my 2 cents:
To me, Object-Orientation is all about objects that collaborate by sending messages. That is, to me, the single most important trait of an object-oriented language.
If I had to put up an ordered list of all the features that an object-oriented language must have, it would look like this:
*
*Objects sending messages to other objects
*Everything is an Object
*Late Binding
*Subtype Polymorphism
*Inheritance or something similarly expressive, like Delegation
*Encapsulation
*Information Hiding
*Abstraction
Obviously, this list is very controversial, since it excludes a great variety of languages that are widely regarded as object-oriented, such as Java, C# and C++, all of which violate points 1, 2 and 3. However, there is no doubt that those languages allow for object-oriented programming (but so does C) and even facilitate it (which C doesn't). So, I have come to call languages that satisfy those requirements "purely object-oriented".
As archetypical object-oriented languages I would name Self and Newspeak.
Both satisfy the above-mentioned requirements. Both are inspired by and successors to Smalltalk, and both actually manage to be "more OO" in some sense. The things that I like about Self and Newspeak are that both take the message sending paradigm to the extreme (Newspeak even more so than Self).
In Newspeak, everything is a message send. There are no instance variables, no fields, no attributes, no constants, no class names. They are all emulated by using getters and setters.
In Self, there are no classes, only objects. This emphasizes, what OO is really about: objects, not classes.
A: Smalltalk is usually considered the archetypal OO language, although Simula is often cited as the first OO language.
Current OO languages can be loosely categorized by which language they borrow the most concepts from:
*
*Smalltalk-like: Ruby, Objective-C
*Simula-like: C++, Object Pascal, Java, C#
A: I am happy to share this with you guys, it was quite interesting and helpful to me. This is an extract from a 1994 Rolling Stone interview where Steve (not a programmer) explains OOP in simple terms.
Jeff Goodell: Would you explain, in simple terms, exactly what object-oriented software is?
Steve Jobs: Objects are like people. They’re living, breathing things that have knowledge inside them about how to do things and have memory inside them so they can remember things. And rather than interacting with them at a very low level, you interact with them at a very high level of abstraction, like we’re doing right here.
Here’s an example: If I’m your laundry object, you can give me your dirty clothes and send me a message that says, “Can you get my clothes laundered, please.” I happen to know where the best laundry place in San Francisco is. And I speak English, and I have dollars in my pockets. So I go out and hail a taxicab and tell the driver to take me to this place in San Francisco. I go get your clothes laundered, I jump back in the cab, I get back here. I give you your clean clothes and say, “Here are your clean clothes.”
You have no idea how I did that. You have no knowledge of the laundry place. Maybe you speak French, and you can’t even hail a taxi. You can’t pay for one, you don’t have dollars in your pocket. Yet, I knew how to do all of that. And you didn’t have to know any of it. All that complexity was hidden inside of me, and we were able to interact at a very high level of abstraction. That’s what objects are. They encapsulate complexity, and the interfaces to that complexity are high level.
A: As far as I can tell, the main view of what makes a language "Object Oriented" is supporting the idea of grouping data, and methods that work on that data, which is generally achieved through classes, modules, inheritance, polymorphism, etc.
See this discussion for an overview of what people think (thought?) Object-Orientation means.
As for the "archetypal" OO language - that is indeed Smalltalk, as Kristopher pointed out.
A: Supports classes, methods, attributes, encapsulation, data hiding, inheritance, polymorphism, abstraction...?
A: Disregarding the theoretical implications, it seems to be
"Any language that has a keyword called 'class'" :-P
A: To further what aib said, I would say that a language isn't really object oriented unless the standard libraries that are available are object oriented. The biggest example of this is PHP. Although it supports all the standard object oriented concepts, the fact that such a large percentage of the standard libraries aren't object oriented means that it's almost impossible to write your code in an object oriented way.
It doesn't matter that they are introducing namespaces if all the standard libraries still require you to prefix all your function calls with stuff like mysql_ and pgsql_, when in a language that supported namespaces in the actual API, you could get rid of functions with mysql_ and have just a simple "include system.db.mysql.*" at the top of your file so that it would know where those things came from.
A: when you can make classes, it is object-oriented
for example : java is object-oriented, javascript is not, and c++ looks like some kind of "object-curious" language
A: In my experience, languages are not object-oriented, code is.
A few years ago I was writing a suite of programs in AppleScript, which doesn't really enforce any object-oriented features, when I started to grok OO. It's clumsy to write Objects in AppleScript, although it is possible to create classes, constructors, and so forth if you take the time to figure out how.
The language was the correct language for the domain: getting different programs on the Macintosh to work together to accomplish some automatic tasks based on input files. Taking the trouble to self-enforce an object-oriented style was the correct programming choice because it resulted in code that was easier to trouble-shoot, test, and understand.
The feature that I noticed the most in changing that code over from procedural to OO was encapsulation: both of properties and method calls.
A: Simples! (as the insurance-comparison ad character would say)
1-Polymorphism
2-Inheritance
3-Encapsulation
4-Re-use.
:)
A: Object: An object is a repository of data. For example, if MyList is a ShoppingList object, MyList might record your shopping list.
Class: A class is a type of object. Many objects of the same class might exist; for instance, MyList and YourList may both be ShoppingList objects.
Method: A procedure or function that operates on an object or a class. A method is associated with a particular class. For instance, addItem might be a method that adds an item to any ShoppingList object. Sometimes a method is associated with a family of classes. For instance, addItem might operate on any List, of which a ShoppingList is just one type.
Inheritance: A class may inherit properties from a more general class. For example, the ShoppingList class inherits from the List class the property of storing a sequence of items.
Polymorphism: The ability to have one method call work on several different classes of objects, even if those classes need different implementations of the method call. For example, one line of code might be able to call the "addItem" method on every kind of List, even though adding an item to a ShoppingList is completely different from adding an item to a ShoppingCart.
Object-Oriented: Each object knows its own class and which methods manipulate objects in that class. Each ShoppingList and each ShoppingCart knows which implementation of addItem applies to it.
In this list, the one thing that truly distinguishes object-oriented languages from procedural languages (C, Fortran, Basic, Pascal) is polymorphism.
Source: https://www.youtube.com/watch?v=mFPmKGIrQs4&list=PL-XXv-cvA_iAlnI-BQr9hjqADPBtujFJd
A: If a language is designed with facilities specifically to support object-oriented programming (the four features below), then it is an object-oriented programming language.
*
*You can program in an object-oriented style in more or less any language. It's the code that is object-oriented, not the language.
*Examples of real object-oriented languages are Java, C#, Python, Ruby, and C++.
It's also possible to have extensions that provide object-oriented features, as in PHP, Perl, etc.
*You can write object-oriented code in C, but C is not an object-oriented programming language. It was not designed for that (that was the whole point of C++).
A: Archetype
The ability to express real-world scenarios in code.
foreach (House house in location.Houses)
{
    foreach (Deliverable mail in new Mailbag(new Deliverable[]
        {
            GetLetters(),
            GetPackages(),
            GetAdvertisingJunk()
        }))
    {
        if (mail.AddressedTo(house))
        {
            house.Deliver(mail);
        }
    }
}
-
foreach (Deliverable myMail in GetMail())
{
    IReadable readable = myMail as IReadable;
    if (readable != null)
    {
        Console.WriteLine(readable.Text);
    }
}
Why?
To help us understand this more easily. It makes better sense in our heads and if implemented correctly makes the code more efficient, re-usable and reduces repetition.
To achieve this you need:
*
*Pointers/References to ensure that this == this and this != that.
*Classes to point to (e.g. Arm) that store data (int hairyness) and operations (Throw(IThrowable))
*Polymorphism (Inheritance and/or Interfaces) to treat specific objects in a generic fashion so you can read books as well as graffiti on the wall (both implement IReadable)
*Encapsulation because an apple doesn't expose an Atoms[] property
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How are you generating tests from specifications? I came across a printed article by Bertrand Meyer where he states that tests can be generated from specifications. My development team does nothing like this, but it sounds like a good technique to consider. How are you generating tests from specifications? How would you describe the success your having in discovering program faults via this method?
A: This might be a reference to RSpec, which is a really clever way of developing tests as a series of requirements. I'm still getting used to it, but it's been very handy in both defining what I need to do and then ensuring I do it.
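For illustration, a minimal RSpec-style sketch (the ShoppingList class and its API are hypothetical; the should syntax is the classic RSpec style):
describe ShoppingList do
  it "starts out empty" do
    ShoppingList.new.items.should be_empty
  end

  it "remembers an added item" do
    list = ShoppingList.new
    list.add("milk")
    list.items.should include("milk")
  end
end
Each "it" block reads like a line from a specification, which is what makes the requirement-to-test mapping so direct.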
A: @Tim Sullivan: coming from Bertrand Meyer, it can only be related to Eiffel :)
I think he's talking about ESpec. Given the name RSpec from the Ruby folk, I think we can give them the label "heavily inspired".
A: I would say it depends on your specs. I have yet to work anywhere where the specs were good enough to create full unit tests from specifications - the level of detail just wasn't there. My managers always told us that if we specified to that level they could just ship the specs off to India and get it coded on the cheap ;)
A: There are all sorts of ways to do it, ranging from what I'd consider an 'art form' (and not necessarily good art) all the way to mathematically derived tests from formal specifications. At the end of the day, your development team needs to decide what it can do based on the schedule it is working with. That being said, being able to test software against specs is a Good Thing.
Only your team can gauge the 'depth' of your tests, and that will probably be a function of how good your specs are. If they say something like, 'the login UI needs to provide a cancel button and a login button, and they need to work', your tests are going to be pretty general. But keep in mind - even very general tests are a Good Thing. Testing is a Good Thing. Too many developers have a bad attitude when it comes to testing, but at the end of the day, you're shipping software which should work, and to me, that means a lot.
The effectiveness your tests will have in finding program faults will depend on the detail you put into them. What is especially nice about having test procedures written to specs is that you can test each build to the same level of detail as the previous build (typically referred to as a regression test).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Requirements Gathering How do you go about the requirements gathering phase? Does anyone have a good set of guidelines or tips to follow? What are some good questions to ask the stakeholders?
I am currently working on a new project and there are a lot of unknowns. I am in the process of coming up with a list of questions to ask the stakeholders. However I cant help but to feel that I am missing something or forgetting to ask a critical question.
A: Wow, where to start?
First, there is a set of knowledge someone should have to do analysis on some projects, but it really depends on what you are building for who. In other words, it makes a big difference if you are modifying an enterprise application for a Fortune 100 corporation, building an iPhone app, or adding functionality to a personal webpage.
Second, there are different kinds of requirements.
*
*Objectives: What does the user want to accomplish?
*Functional: What does the user need to do in order to reach their objective? (think steps to reach the objective/s)
*Non-functional: What are the constraints your program needs to perform within? (think 10 vs 10k simultaneous users, growth, back-up, etc.)
*Business rules: What dynamic constraints do you have to meet? (think calculations, definitions, legal concerns, etc.)
Third, the way to gather requirements most effectively, and then get feedback on them (which you will do, right?) is to use models. Use cases and user stories are a model of what the user needs to do. Process models are another version of what needs to happen. System diagrams are just another model of how different parts of the program(s) interact. Good data modeling will define business concepts and show you the inputs, outputs, and changes that happen within your program. Models (and there are more than I listed) are really the key to the concern you list. A few good models will capture the needs and from models you can determine your requirements.
Fourth, get feedback. I know I mentioned this already, but you will not get everything right the first time, so get responses to what your customer wants.
As much as I appreciate requirements, and the models that drive them, users typically do not understand the ramifications of all their requests. Constant communication with chances for review and feedback will give users a better understanding of what you are delivering. Further, they will refine their understanding based on what they see. Unless you're working for the government, iterations and / or prototypes are helpful.
A: First of all, gather the requirements before you start coding. You can begin the design while you are gathering them, depending on your project life cycle, but you shouldn't ever start coding without them.
Requirements are a set of well-written documents that protect both the client and yourself. Never forget that. If no requirement is present, then it was not paid for (and thus it requires a formal change request); if it's present, then it must be implemented and must work correctly.
Requirements must be testable. If a requirement cannot be tested, then it isn't a requirement.
Requirements must be concrete. That means stating "The system user interface shall be easy to use" is not a correct requirement.
In order to actually "gather" the requirements, you need to first make sure you understand the business model. The client will tell you what they want in their own words; it is your job to understand it and interpret it in the right context.
Hold meetings with the client while you're developing the requirements. Describe them to the client in your own words and make sure you and the client have the same concept of the requirements.
Requirements call for concise, testable examples, but also keep track of everything else that comes up in the meetings (diagrams, doubts) and try to maintain a record of every meeting.
If you can use an incremental life cycle, that will give you the ability to improve some badly gathered requirements.
A: You can never ask too many or "stupid" questions. The more questions you ask, the more answers you receive.
A: According to Steve Yegge, that's the wrong question to ask. If you're gathering requirements, it's already too late; your project is doomed.
A: In general, I try and get a feel for the business model my customer/client is trying to emulate with the application they want built. Are we building a glorified forms processor? Are we retrieving data from multiple sources in a single application to save time? Are we performing some kind of integration?
Once the general business model is established, I then move to the "musts" and "must nots" for the application to dictate what data I can retrieve, who can perform what functions, etc.
Usually if you can get the customer to explain their model or workflow, you can move from there and find additional key questions.
The one question I always make sure to ask in some form or another is "What is the trickiest/most annoying thing you have to do when doing X?" Typically the answer to that reveals the craziest business/data rule you'll have to implement.
Hope this helps!
A: You're almost certainly missing something. A lot of things, probably. Don't worry, it's ok. Even if you remembered everything and covered all the bases stakeholders aren't going to be able to give you very good, clear requirements without any point of reference. The best way to do this sort of thing is to get what you can from them now, then take that and give them something to react to. It can be a paper prototype, a mockup, version 0.1 of the software, whatever. Then they can start telling you what they really want.
A: *
*High-level discussions about purpose, scope, limitations of operating environment, size, etc
*Audition a single paragraph description of the system, hammer it out
*Mock up UI
*Formalize known requirements
*Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test.
A: Steve Yegge makes for a fun read, but there is money to be made in working out what other people's requirements are, so I'd take his article with a pinch of salt.
Requirements gathering is incredibly tough because of the manner in which communication works. It's a four-step process that is lossy at each step.
*
*I have an idea in my head
*I transform this into words and pictures
*You interpret the pictures and words
*You paint an image in your own mind of what my original idea was like
And humans fail miserably at this with worrying frequency through their adorable imperfections.
Agile gets this right by promoting iterative development. Getting early versions out to the client is important in identifying which features are most important (what ships in 0.1 - 0.5 ish), helps to keep you both on the right track in terms of how the application will work, and quickly identifies the hidden features that you will miss.
The two main problem scenarios are the two ends of the scales:
*
*Not having a freaking clue about what you are doing - get some domain experts
*Having too many requirements - feature pit. - Question, cull (prioritise ;) ) features and use iterative development
Yegge does well in pointing out that domain experts are essential to produce good requirements because they know the business and have worked in it. They can help identify the core desire of the client and will help explain how their staff will use the system and what is important to the staff.
Alternatives and additions include trying to do the job yourself to get into the mindset or having a client staff member occasionally on-site, although the latter is unlikely to happen.
The feature pit is the other side, mostly full of failed government IT projects. Too much, too soon, not enough thought or application of realism (but what do you expect they have only about four years to make themselves feel important?). The aim here is to work out what the customer really wants.
As long as you work on getting the core components correct, efficient and bug-free clients usually remain tolerant of missing features that arrive in later shipments, as long as they eventually arrive. This is where iterative development really helps.
Remember to separate the client's ideas of what the program will be like and what they want the program to achieve.
Some clients can create confusion by communicating their requirements in the form of application features which may be poorly thought out or made redundant by much simpler functionality then they think they require. While I'm not advocating calling the client an idiot or not listening to them I feel that it is worth forever asking why they want a particular feature to get to its underlying purpose.
Remember that in either scenario it is imperative to root out the quickest path to fulfilling the customer's core need and put yourself in a scenario where you are both profiting from the relationship.
A: There are some great ideas here already. Here are some requirements gathering principles that I always like to keep in mind:
Know the difference between the user and the customer.
The business owners that approve the shiny project are usually the customers. However, a devastating mistake is the tendency to confuse them with the users. The customer is usually the person that recognizes the need for your product, but the user is the person that will actually be using the solution (and will most likely complain later about a requirement your product did not meet).
Go to more than one person
Because we’re all human, and we tend to not remember every excruciating detail. You increase your likelihood of finding missed requirements as you talk to more people and cross-check.
Avoid specials
When a user asks for something very specific, be wary. Always question the biases and see if this will really make your product better.
Prototype
Don’t wait till launch to show what you have to the user. Do frequent prototypes (you can even call them beta versions) and get constant feedback throughout the development process. You’ll probably find more requirements as you do this.
A: Gathering Business Requirements Are Bullshit - Steve Yegge
A: I've been using mind mapping (like a work breakdown structure) to help gather requirements and define the unknowns (the #1 project killer). Start at a high level and work your way down. You need to work with the sponsors, users and development team to ensure you get all the angles and don't miss anything. You can't be expected to know the entire scope of what they want without their involvement...you - as a project manager/BA - need to get them involved (most important part of the job).
A: *
*read the agile manifesto - working software is the only measurement for the success of a software project
*get familiar with agile software practices - study Scrum, lean programming, XP, etc. - this will save you a tremendous amount of time, not only for the requirements gathering but also for the entire software development lifecycle
*keep regular discussions with Customers and especially the future users and key-users
*make sure you talk to the people who understand the problem domain - e.g. specialists in the field
*Take small notes during the talks
*After each conversation, write an official requirements list and present it for approval. Later on it will be difficult to argue against documentation everyone has agreed to
*make sure your Customers know approximately what the expenses in time and money are for implementing "nice to have" requirements
*make sure you label the requirements as "must have" , "should have" and "nice to have" from the very beginning, ensure Customers understand the differences between those types also
*integrate all documents into the latest and final requirements analysis (or the current one for the iteration or whatever agile programming cycle you are using ... )
*remember that requirements do change over the software life cycle , so gathering is one thing but managing and implementing another
*KISS - keep it as simple as possible
*study also the environment where the future system will reside - there are more and more technological constraints from legacy or surrounding systems, since companies prefer not to throw away the money they have invested over decades, even if to our modern minds 20-year-old code is garbage ...
A: Like most stages of the software development process, iteration works best.
First find out who your users are -- the XYZ dept,
Then find out where they fit into the organisation -- part of Z division,
Then find out what they do in general terms -- manage cash
Then in specific terms -- collect cash from tills, and check for till fraud.
Then you can start talking to them.
Ask what problem they want you to solve -- you will get an answer like "write a bamboozling system using OCR with shark technologies".
Ignore that answer and ask some more questions to find out what the real problem is -- they can't read the till slips to reconcile the cash.
Agree a real solution with the users -- get a better ink ribbon supplier - or connect the electronic tills to the network and upload the logs to a central server.
Then agree in detail how they will measure the success of the project.
Then and only then propose and agree a detailed set of requirements.
A: I would suggest you read Roger Pressman's Software Engineering: A Practitioner's Approach
A: Before you go talking to the stakeholders/users/anyone, make sure you will be able to put down the gathered information in a useful and lasting way.
*
*Use a sound-recorder if it is OK with the other person and the information is bulky.
*If you heard something important and you need some reasonable time to write it down, you have two choices: ask the other person to wait a second, or say goodbye to that precious information. You won't remember it right; ask any neuroscientist.
*If you detect that a point needs deeper review, or that you need some document you just heard of, make sure you make a commitment with the other person to send that document or to schedule another meeting with a more specific purpose. Never say "I'll remember to ask for that xls file" because in most cases you won't.
*Not too long after the meeting, summarize all your notes, recordings and fresh thoughts. Just summarize it right. Create effective reminders for the commitments.
*Again, just after the meeting is the perfect time to understand why the gathering you just did was not as good as you thought it was at the end of the meeting. That's when you will be able to put down a lot of meaningful questions for another meeting.
I know the question was asked from a pre-meeting perspective, but please be aware that you can work on these matters before the meeting and end up with a much more useful, complete and higher-quality gathering.
A: I recently started using the concepts, standards and templates defined by the International Institute of Business Analysis (IIBA).
They have a pretty good BOK (Body of Knowledge) that can be downloaded from their website. They also offer a certification.
A: I wrote a blog article about the approach I use:
http://pm4web.blogspot.com/2008/10/needs-analysis-for-business-websites.html
basically: questions to ask your client before building their website.
I should add that this questionnaire sheet is only geared towards basic website builds - like a business web presence. It's a totally different story if you are talking about web-based software, although some of it is still relevant (e.g. questions relating to look and feel).
*
*LM
A: Requirements engineering is a bit of an art; there are lots of different ways to go about it, and you really have to tailor it to your project and the stakeholders involved. A good place to start is with Software Requirements by Karl Wiegers:
http://www.amazon.com/Software-Requirements-Second-Pro-Best-Practices/dp/0735618798/ref=pd_bbs_sr_2?ie=UTF8&s=books&qid=1234910330&sr=8-2
and a requirements engineering process which may consist of a number of steps e.g.:
*
*Elicitation - for the basis for discussion with the business
*Analysis and Description - a technical description for the purpose of the developers
*Elaboration, Clarification, Verification and Negotiation - further refinement of the requirements
Also, there are a number of ways of documenting the requirements (use cases, prototypes, specifications, modelling languages). Each has its advantages and disadvantages. For example, prototypes are very good for eliciting ideas from the business and discussing them.
I generally find that writing a set of use cases and including wireframe prototypes works well to identify an initial set of requirements. From that point it's a continual process of working with technical people and business people to further clarify and elaborate on the requirements. Keeping track of what was initially agreed and tracking additional requirements are essential to avoid scope creep. Negotiation plays a bit part here also between the various parties as per the Broken Iron Triangle (http://www.ambysoft.com/essays/brokenTriangle.html).
A: IMO the most important first step is to set up a dictionary of domain-specific words. When your client says "order", what does he mean? Something he receives from his customers or something he sends to his suppliers? Or maybe both?
Find the keywords in the stakeholders' business, and let them explain those words until you comprehend their meaning in the process. Without that, you will have a hard time trying to understand the requirements.
A: I prefer to keep my requirements gathering process as simple, direct and thorough as possible. You can download a sample document that I use as a template for my projects at this blog posting: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Generic IBindingListView Implementations Can anyone suggest a good implementation of a generic collection class that implements the IBindingListView & IBindingList interfaces and provides Filtering and Searching capabilities?
I see my current options as:
*
*Using a class that someone else has written and tested
*Inheriting from BindingList<T>, and implementing the IBindingListView interfaces
*Write a custom collection from scratch, implementing IBindingListView and IBindingList.
Obviously, the first option is my preferred choice.
A: Here is some help for your methods 2 and 3:
Behind the Scenes: Implementing Filtering for Windows Forms Data Binding
http://www.microsoft.com/downloads/details.aspx?FamilyID=4af0c96d-61d5-4645-8961-b423318541b4&displaylang=en
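If you go with method 2 (inheriting BindingList<T>), here is a bare-bones hedged sketch of the shape the class takes; the filtering and sorting bodies are the real work and are only stubbed here:
using System;
using System.ComponentModel;

public class ViewableBindingList<T> : BindingList<T>, IBindingListView
{
    private string filter;

    public string Filter
    {
        get { return filter; }
        set { filter = value; /* re-evaluate which items are visible */ }
    }

    public void RemoveFilter() { Filter = null; }

    public bool SupportsFiltering { get { return true; } }
    public bool SupportsAdvancedSorting { get { return false; } }

    public ListSortDescriptionCollection SortDescriptions
    {
        // populate when advanced (multi-column) sorting is implemented
        get { return null; }
    }

    public void ApplySort(ListSortDescriptionCollection sorts)
    {
        throw new NotSupportedException();
    }
}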
A: I used and built upon an implementation I found in an old MSDN forum post from a few years ago, but recently I searched around again and found a SourceForge project called BindingListView. It looks pretty nice; I just haven't pulled it in to replace my hacked version yet.
nuget package: Equin.ApplicationFramework.BindingListView
Example code:
var lst = new List<DemoClass>
{
new DemoClass { Prop1 = "a", Prop2 = "b", Prop3 = "c" },
new DemoClass { Prop1 = "a", Prop2 = "e", Prop3 = "f" },
new DemoClass { Prop1 = "b", Prop2 = "h", Prop3 = "i" },
new DemoClass { Prop1 = "b", Prop2 = "k", Prop3 = "l" }
};
dataGridView1.DataSource = new BindingListView<DemoClass>(lst);
// you can now sort by clicking the column headings
//
// to filter the view...
var view = (BindingListView<DemoClass>)dataGridView1.DataSource;
view.ApplyFilter(dc => dc.Prop1 == "a");
A: A couple of solutions I can think of:
*
*The SubSonic Project has a pretty nice implementation of BindingList<T> which is open source, although this might require using the entire SubSonic binary to use their implementation.
*I enjoy using the classes from the Power Collections project. It is fairly simple to inherit from one of the base collections there and implement IBindingListView.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Using Interop with C#, Excel Save changing original. How to negate this? The problem: Loading an excel spreadsheet template. Using the Save command with a different filename and then quitting the interop object. This ends up saving the original template file. Not the result that is liked.
public void saveAndExit(string filename)
{
excelApplication.Save(filename);
excelApplication.Quit();
}
Original file opened is c:\testing\template.xls
The file name that is passed in is c:\testing\7777 (date).xls
Does anyone have an answer?
(The answer I chose was the most correct and thorough though the wbk.Close() requires parameters passed to it. Thanks.)
A: Excel interop is pretty painful. I dug up an old project I had, did a little fiddling, and I think this is what you're looking for. The other commenters are right, but, at least in my experience, there's a lot more to calling SaveAs() than you'd expect if you've used the same objects (without the interop wrapper) in VBA.
Microsoft.Office.Interop.Excel.Workbook wbk = excelApplication.Workbooks[1]; // note: interop collections are 1-based; or some other way of obtaining this workbook reference, as Jason Z mentioned
wbk.SaveAs(filename, Type.Missing, Type.Missing, Type.Missing,
Type.Missing, Type.Missing, XlSaveAsAccessMode.xlNoChange,
Type.Missing, Type.Missing, Type.Missing, Type.Missing,
Type.Missing);
wbk.Close();
excelApplication.Quit();
Gotta love all those Type.Missings. But I think they're necessary.
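As a hedged follow-up, a variant of the save-and-quit with COM cleanup added (excelApplication is the field from the question; releasing the runtime callable wrappers helps the EXCEL.EXE process actually exit):
using System.Runtime.InteropServices;

public void SaveAsAndExit(string filename)
{
    // Excel interop collections are 1-based.
    Microsoft.Office.Interop.Excel.Workbook wbk = excelApplication.Workbooks[1];
    try
    {
        wbk.SaveAs(filename, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing,
            Microsoft.Office.Interop.Excel.XlSaveAsAccessMode.xlNoChange,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing);
        wbk.Close(false, Type.Missing, Type.Missing);
    }
    finally
    {
        Marshal.ReleaseComObject(wbk);
        excelApplication.Quit();
        Marshal.ReleaseComObject(excelApplication);
    }
}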
A: Rather than using an ExcelApplication, you can use the Workbook object and call the SaveAs() method. You can pass the updated file name in there.
A: Have you tried the SaveAs from the Worksheet?
A: *
*Ditto on the SaveAs
*Whenever I have to do Interop I create a separate VB.NET class library and write the logic in VB. It is just not worth the hassle doing it in C#
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Getting ssh to execute a command in the background on target machine This is a follow-on question to the How do you use ssh in a shell script? question. If I want to execute a command on the remote machine that runs in the background on that machine, how do I get the ssh command to return? When I try to just include the ampersand (&) at the end of the command it just hangs. The exact form of the command looks like this:
ssh user@target "cd /some/directory; program-to-execute &"
Any ideas? One thing to note is that logins to the target machine always produce a text banner and I have SSH keys set up so no password is required.
A: Quickest and easiest way is to use the 'at' command:
ssh user@target "at now -f /home/foo.sh"
A: I think you'll have to combine a couple of these answers to get what you want. If you use nohup in conjunction with the semicolon, and wrap the whole thing in quotes, then you get:
ssh user@target "cd /some/directory; nohup myprogram > foo.out 2> foo.err < /dev/null"
which seems to work for me. With nohup, you don't need to append the & to the command to be run. Also, if you don't need to read any of the output of the command, you can use
ssh user@target "cd /some/directory; nohup myprogram > /dev/null 2>&1"
to redirect all output to /dev/null.
A: This worked for me many times:
ssh -x remoteServer "cd yourRemoteDir; ./yourRemoteScript.sh </dev/null >/dev/null 2>&1 & "
A: Redirect fd's
Output needs to be redirected with &>/dev/null which redirects both stderr and stdout to /dev/null and is a synonym of >/dev/null 2>/dev/null or >/dev/null 2>&1.
Parentheses
The best way is to use sh -c '( ( command ) & )' where command is anything.
ssh askapache 'sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nohup Shell
You can also use nohup directly to launch the shell:
ssh askapache 'nohup sh -c "( ( chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nice Launch
Another trick is to use nice to launch the command/shell:
ssh askapache 'nice -n 19 sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
A: I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
A: This has been the cleanest way to do it for me:-
ssh -n -f user@host "sh -c 'cd /whereever; nohup ./whatever > /dev/null 2>&1 &'"
The only thing running after this is the actual command on the remote machine
A: If you don't/can't keep the connection open you could use screen, if you have the rights to install it.
user@localhost $ screen -t remote-command
user@localhost $ ssh user@target # now inside of a screen session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the screen session: ctrl-a d
To list screen sessions:
screen -ls
To reattach a session:
screen -d -r remote-command
Note that screen can also create multiple shells within each session. A similar effect can be achieved with tmux.
user@localhost $ tmux
user@localhost $ ssh user@target # now inside of a tmux session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the tmux session: ctrl-b d
To list screen sessions:
tmux list-sessions
To reattach a session:
tmux attach <session number>
The default tmux control key, 'ctrl-b', is somewhat difficult to use but there are several example tmux configs that ship with tmux that you can try.
A: I just wanted to show a working example that you can cut and paste:
ssh REMOTE "sh -c \"(nohup sleep 30; touch nohup-exit) > /dev/null &\""
A: You can do it like this...
sudo /home/script.sh -opt1 > /tmp/script.out &
A: It appeared quite convenient for me to have a remote tmux session using the tmux new -d <shell cmd> syntax like this:
ssh someone@elsewhere 'tmux new -d sleep 600'
This will launch new session on elsewhere host and ssh command on local machine will return to shell almost instantly. You can then ssh to the remote host and tmux attach to that session. Note that there's nothing about local tmux running, only remote!
Also, if you want your session to persist after the job is done, simply add a shell launcher after your command, but don't forget to enclose in quotes:
ssh someone@elsewhere 'tmux new -d "~/myscript.sh; bash"'
A: You can do this without nohup:
ssh user@host 'myprogram >out.log 2>err.log &'
A: Actually, whenever I need to run a command on a remote machine that's complicated, I like to put the command in a script on the destination machine, and just run that script using ssh.
For example:
# simple_script.sh (located on remote server)
#!/bin/bash
cat /var/log/messages | grep <some value> | awk -F " " '{print $8}'
And then I just run this command on the source machine:
ssh user@ip "/path/to/simple_script.sh"
A: *
*If you run a remote command without allocating a tty, redirecting stdout/stderr works and nohup is not necessary:
ssh user@host 'background-command &>/dev/null &'
*If you use -t to allocate a tty in order to run an interactive command along with a background command, and the background command is the last command, like this:
ssh -t user@host 'bash -c "interactive-command; nohup background-command &>/dev/null &"'
it's possible that the background command doesn't actually start. There's a race here:
*
*bash exits after nohup starts. As session leader, bash's exit results in a HUP signal being sent to the nohup process.
*nohup ignores the HUP signal.
If 1 completes before 2, the nohup process will exit and won't start the background command at all. We need to wait for nohup to start the background command. A simple workaround is to just add a sleep:
ssh -t user@host 'bash -c "interactive-command; nohup background-command &>/dev/null & sleep 1"'
The question was asked and answered years ago; I don't know if OpenSSH behavior has changed since then. I was testing on:
OpenSSH_8.6p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
A: I was trying to do the same thing, but with the added complexity that I was trying to do it from Java. So on one machine running java, I was trying to run a script on another machine, in the background (with nohup).
From the command line, here is what worked: (you may not need the "-i keyFile" if you don't need it to ssh to the host)
ssh -i keyFile user@host bash -c "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\""
Note that on my command line there is one argument after the "-c", which is all in quotes. But for it to work on the other end, it still needs the quotes, so I had to put escaped quotes within it.
From java, here is what worked:
ProcessBuilder b = new ProcessBuilder("ssh", "-i", "keyFile", "bash", "-c",
"\"nohup ./script arg1 arg2 > output.txt 2>&1 &\"");
Process process = b.start();
// then read from process.getInputStream() and close it.
It took a bit of trial & error to get this working, but it seems to work well now.
A: YOUR-COMMAND &> YOUR-LOG.log &
This should run the command and assign it a process id. You can simply tail -f YOUR-LOG.log to see the results written to it as they happen. You can log out anytime and the process will carry on.
A: If you are using zsh, then program-to-execute &! is a zsh-specific shortcut to both background and disown the process, such that exiting the shell will leave it running.
A: A follow-on to @cmcginty's concise working example which also shows how to alternatively wrap the outer command in double quotes. This is how the template would look if invoked from within a PowerShell script (which can only interpolate variables from within double-quotes and ignores any variable expansion when wrapped in single quotes):
ssh user@server "sh -c `"($cmd) &>/dev/null </dev/null &`""
Inner double-quotes are escaped with back-tick instead of backslash. This allows $cmd to be composed by the PowerShell script, e.g. for deployment scripts and automation and the like. $cmd can even contain a multi-line heredoc if composed with unix LF.
A: I think this is what you need:
First, you need to install sshpass on your machine.
then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
A: First follow this procedure:
Log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without password:
a@A:~> ssh b@B
then this will work without entering a password
ssh b@B "cd /some/directory; program-to-execute &"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29142",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "353"
} |
Q: What is the more efficient version control methodology: checkout or merge? I've always used Subversion or CVS for version control, which use a 'merge' methodology. One of my friends raves about Perforce and how great it is with its change lists and check-out methodology.
While I'm sure a lot of it comes down to experience & personal preference, I was wondering if any research had been done into which method of version control is more efficient to work in?
EDIT: To clarify, I know both Perforce & SVN allow locking & merging, but SVN 'encourages' a liberal edit & merge method, whereas, as I understand it, Perforce encourages a checkout-checkin method.
A: Merge is more efficient, for the simple reason that simultaneous changes to the same file tend to be common, and merge allows you to recover from that. In contrast, single checkout prevents that little bit of extra work, but it does so at the cost of huge inefficiencies in scheduling. Typically it takes a short amount of time to merge two changes to the same file (e.g. minutes), whereas it takes a significant amount of time to make changes to a file (e.g. many hours or days), so preventing access to editing a file is a huge inefficiency.
Note that Perforce does not force the checkout methodology, it allows concurrent checkouts (equivalent to merge).
A: Honestly I think it depends on the discipline of the developers.
I use Subversion for my personal work and I've used it at a few jobs. What I like about Subversion is I don't have to hunt someone down and ask them why they're working on something and if it would be OK for me to do some work. The problem comes when someone decides to start working on something and doesn't check it in for a while; this can make merging difficult as several changes get made between their check-out and check-in.
I use Perforce right now and for some reason I like SVN better. Perforce definitely gives me a better indication that there's going to be merge conflicts, and even has built-in tools to help me resolve the merges. It has the same problem where if someone makes tons of changes over a long time, the merge will be more difficult.
Basically both models require you to check in changes often. If you make numerous check-ins, then you reduce the likelihood that you'll require a merge. I'm guilty of keeping stuff checked out for too long way too often. Personally I feel like SVN's price tag makes up for anything it lacks compared to Perforce; I haven't found a difference between them yet.
A: In our last evaluation, Perforce beat subversion in its support for branching and integrating changes between branches. Work was underway on Subversion to remedy this short-coming, but we haven't been back to check it out.
In Perforce, when you branch a file, Perforce "remembers" where it came from and which revisions have been "integrated" into the two versions. It also has some storage optimizations in the repository so that a branch copy doesn't really materialize until someone makes a change in the branch, and then (if I understand correctly), it uses diffs against the base copy, just like revisions within a branch.
Perforce's tracking of the relationships between branches is a huge asset. If Subversion has implemented this now, please give me a heads up.
A: Perhaps you meant Source Safe rather than Perforce? Perforce supports merging, and in fact had better merge support than SVN until SVN 1.5, where merge tracking was added (as well as change lists, which Perforce has always had and which I miss very much after moving to a shop that uses SVN; we won't upgrade until 1.5 has been a bit more time-tested.)
It's worth noting that SVN and Perforce both allow you to do a locked checkout, so you can do the "unmerged" model if you want, but aside perhaps from managing binaries with version control, I don't see much use for this.
Anyway, the simple answer to your question is "merge models are far better any time more than one developer is involved."
A: If I understand correctly, Perforce makes all files that are not checked out read-only. This is similar to the behavior under Microsoft TFS and VSS. Subversion on the other hand does not set read-only attributes. IMO, the Subversion method is easier because you don't have to bother with a source control client in order to modify files -- you go ahead and modify with reckless abandon and then compare what has changed on disk with the server when you are ready to check in.
When all files are read-only, I find myself constantly changing a file, attempting to save, discovering it is read-only, then having to hop over to the source control client to check it out. It's not so bad if the source control client is integrated into your editor, but if you are storing things that are not source code under version control this often isn't an option.
A:
If I understand correctly, Perforce makes all files that are not checked out read-only.
This is only the default behaviour. If required, frequently changing files can be set to be read-write instead. See a full list of file modifiers here.
Also, for my environment, I am using Eclipse with the Perforce Plugin. With this plugin, editing a file immediately opens the file for edit.
A: Not sure about the research, but here's one data point for you:
My team chose PVCS (checkout) mostly because of comfort. Doubts about merge and lack of awareness of tools like Subversion definitely contributed to that.
A: I'm not sure I understand the question here - I'm not aware of any modern source control system (other than Visual SourceSafe) that doesn't fully support merging.
A: I definitely prefer the merge methodology.
I've used Visual Sourcesafe (hopefully never again), CVS, subversion and bzr. Visual sourcesafe enforces the "checkout before editing" methodology and can be painful. CVS and subversion haven't been great at accepting merges historically, though I hear subversion 1.5 has improved that.
I would recommend using a VCS that has been designed with frequent merging in mind from the start. bzr is the one I've used that does this, but the other major distributed vcs systems (git and mercurial) also do.
But ultimately I don't know of any research on this specific area. In general there is very little research into efficiency of programming, Peopleware being one of the notable exceptions.
A: I don't really get the question, to be honest. But I can vouch for Perforce's efficiency and its ability to handle more than one person modifying a file asynchronously, and to handle the merging of edits.
In Perforce, if someone checks in a file that you are also modifying, then when you next sync from the server (i.e. get latest files) you get informed that there are some changes that need resolving. The choice on when to do this is up to you. When you "resolve" a file, it does the merge into your local version - and the tools are good for this.
Having a choice on when you do it is important - you may be syncing so you can get some updates not directly related to your task (bugfix, say), and you don't at that stage want to deal with working out if someone else's change to the same files you are working on will affect you. So you carry on, do your build & test, then after that you resolve the files in your own time.
The other case is that you submit your edits without having first synced to the updated file. In this case, Perforce prevents the submission and flags the files to be resolved. Any sensible developer at this stage will do the merge, then recompile and/or test before submitting the change back into Perforce.
What I like about this is that it tries really hard to stop you submitting changes back to the central server that have not been explicitly processed, and hence minimises the chances of breaking the build. The resolve process is easy and very low overhead, so there is not an efficiency issue at all.
Perforce is very explicit in giving you choice and control on how changes are propagated, and backs this up with excellent tools to manage merging of edits. Personally I like choice and the power to exercise choices easily. Doubtless Subversion has its own alternatives too.
I guess it probably comes down to what you are used to - I don't think there is a significant or measurable efficiency issue.
Q: Generating database tables from object definitions I know that there are a few (automatic) ways to create a data access layer to manipulate an existing database (LINQ to SQL, Hibernate, etc...). But I'm getting kind of tired (and I believe that there should be a better way of doing things) of stuff like:
*
*Creating/altering tables in Visio
*Using Visio's "Update Database" to create/alter the database
*Importing the tables into a "LINQ to SQL classes" object
*Changing the code accordingly
*Compiling
What about a way to generate the database schema from the objects/entities definition? I can't seem to find good references for tools like this (and I would expect some kind of built-in support in at least some frameworks).
It would be perfect if I could just:
*
*Change the object definition
*Change the code that manipulates the object
*Compile (the database changes are done auto-magically)
A: Check out DataObjects.Net - it is designed to support exactly this case. Code only, and nothing else. Its schema upgrade layer is probably the most fully featured one you can find, and it really fully abstracts schema upgrade SQL.
Check out product video - you'll notice nothing additional is made to sync the schema. Schema upgrade sample shows the intended usage of this feature.
A: You may be looking for an Object Database.
A: I believe this is the problem that the Microsoft Entity Framework is trying to address. Whilst not specifically designed to "Compile (the database changes are done auto-magically)", it does address the issue of handling changes to the domain model without a huge dependence on the underlying data model.
A: As Jason suggested, object db might be a good choice. Take a look at db4objects.
A: What you described is GORM. It is part of the Grails framework and is built to work with Hibernate (maybe JPA in the future). When I was first using Grails it seemed backwards. I was more comfortable with a Rails style workflow of making the tables and letting the framework generate scaffolding from the database schema. GORM persists your domain objects for you so you create and change the objects, it manages database create/update. This makes more sense now that I have gotten used to it. Sorry to tease you if you aren't looking for a new framework but it is on the roadmap for release 1.1 to make GORM available standalone.
A: When we built the first version of our own framework (Inon Datamanager) I had it read pre-existing SQL tables and autogenerate Java objects from them.
When my colleagues who came from a Smalltalkish background built the second version, they started from the objects and then autogenerated the tables.
Actually, they forgot about the SQL part altogether until I came back in and added it. But nowadays we just run a trigger on application startup which iterates over the object model, checks if the tables and all the right columns exist, and creates them if not. Very convenient.
This turned out to be a lot easier than you might expect - if your favourite tool doesn't support a similar process, you could probably write it in a couple of hours - assuming the relational to object mapping is relatively simple.
But the point is, it seems to depend on whether you're culturally an object person or a database person - you can regard either one as the authoritative source.
A: Some of the really big dogs, such as ERwin Data Modeler, will go object to DB. You need to have the big bucks to afford the product though.
A: I kept digging around some of the "major" frameworks and it seems that Django does exactly what I was talking about. Or so it seems from this screencast.
Does anyone have any remark to make about this? Does it work well?
A: Yes, Django works well.
Yes, it will generate your SQL tables from your data model definitions (written in Python).
It won't always alter existing tables if you update your structure; you might have to run an ALTER TABLE manually.
Ruby on Rails has an even more advanced version of these features (Rails migrations), but I don't like the framework as much; I find Ruby and Rails pretty idiosyncratic.
A: Kind of a late answer, but here it goes:
I faced the exact same problem and ended up writing my own solution for it, working with .NET and SQL Server only, however. It basically implements the process you describe:
*
*All DB objects are kept as embedded CREATE scripts as part of the source code
*DB Objects are set up automatically (or on request) when using the data access functionality
*All non-table changes are also performed automatically (or on request) at the same time
*Table changes, which may require special attention to migrate data, are performed via (manually created) change scripts upon upgrading the database
*Even manual changes made to any database object can be detected, so that schema integrity can be verified and rectified
*An optional lightweight ORM can map stored procedures and objects as well as result sets (even multiple)
*A command-line application helps keeping the SQL source files in sync with a development database
The library, including the database, is free under an LGPL license.
http://code.google.com/p/bsn-modulestore/
Q: What are the differences between delegates and events? Don't both hold references to functions that can be executed?
A: NOTE: If you have access to C# 5.0 Unleashed, read the "Limitations on Plain Use of Delegates" in Chapter 18 titled "Events" to understand better the differences between the two.
It always helps me to have a simple, concrete example. So here's one for the community. First I show how you can use delegates alone to do what Events do for us. Then I show how the same solution would work with an instance of EventHandler. And then I explain why we DON'T want to do what I explain in the first example. This post was inspired by an article by Jon Skeet.
Example 1: Using public delegate
Suppose I have a WinForms app with a single drop-down box. The drop-down is bound to a List<Person>, where Person has properties of Id, Name, NickName, and HairColor. On the main form is a custom user control that shows the properties of that person. When someone selects a person in the drop-down, the labels in the user control update to show the properties of the person selected.
Here is how that works. We have three files that help us put this together:
*
*Mediator.cs -- static class holds the delegates
*Form1.cs -- main form
*DetailView.cs -- user control shows all details
Here is the relevant code for each of the classes:
class Mediator
{
public delegate void PersonChangedDelegate(Person p); //delegate type definition
public static PersonChangedDelegate PersonChangedDel; //delegate instance. Detail view will "subscribe" to this.
public static void OnPersonChanged(Person p) //Form1 will call this when the drop-down changes.
{
if (PersonChangedDel != null)
{
PersonChangedDel(p);
}
}
}
Here is our user control:
public partial class DetailView : UserControl
{
public DetailView()
{
InitializeComponent();
Mediator.PersonChangedDel += DetailView_PersonChanged;
}
void DetailView_PersonChanged(Person p)
{
BindData(p);
}
public void BindData(Person p)
{
lblPersonHairColor.Text = p.HairColor;
lblPersonId.Text = p.IdPerson.ToString();
lblPersonName.Text = p.Name;
lblPersonNickName.Text = p.NickName;
}
}
Finally we have the following code in our Form1.cs. Here we are Calling OnPersonChanged, which calls any code subscribed to the delegate.
private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
{
Mediator.OnPersonChanged((Person)comboBox1.SelectedItem); //Call the mediator's OnPersonChanged method. This will in turn call all the methods assigned (i.e. subscribed to) to the delegate -- in this case `DetailView_PersonChanged`.
}
Ok. So that's how you would get this working without using events and just using delegates. We just put a public delegate into a class -- you can make it static or a singleton, or whatever. Great.
BUT, BUT, BUT, we do not want to do what I just described above, because public fields are bad for many, many reasons. So what are our options? As Jon Skeet describes, here are our options:
*
*A public delegate variable (this is what we just did above. Don't do this; I just told you above why it's bad)
*Put the delegate into a property with a get/set (the problem here is that subscribers could override each other: we could subscribe a bunch of methods to the delegate and then accidentally say PersonChangedDel = null, wiping out all of the other subscriptions. The other remaining problem is that since users have access to the delegate, they can invoke the targets in the invocation list; we don't want external users having control over when to raise our events.)
*A delegate variable with AddXXXHandler and RemoveXXXHandler methods
This third option is essentially what an event gives us. When we declare an EventHandler, it gives us access to a delegate -- not publicly, not as a property, but as this thing we call an event that has just add/remove accessors.
Let's see what the same program looks like, but now using an Event instead of the public delegate (I've also changed our Mediator to a singleton):
Example 2: With EventHandler instead of a public delegate
Mediator:
class Mediator
{
private static readonly Mediator _Instance = new Mediator();
private Mediator() { }
public static Mediator GetInstance()
{
return _Instance;
}
public event EventHandler<PersonChangedEventArgs> PersonChanged; //this is just a property we expose to add items to the delegate.
public void OnPersonChanged(object sender, Person p)
{
var personChangedDelegate = PersonChanged as EventHandler<PersonChangedEventArgs>;
if (personChangedDelegate != null)
{
personChangedDelegate(sender, new PersonChangedEventArgs() { Person = p });
}
}
}
Notice that if you F12 on the EventHandler, it will show you the definition is just a generic-ified delegate with the extra "sender" object:
public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e);
The User Control:
public partial class DetailView : UserControl
{
public DetailView()
{
InitializeComponent();
Mediator.GetInstance().PersonChanged += DetailView_PersonChanged;
}
void DetailView_PersonChanged(object sender, PersonChangedEventArgs e)
{
BindData(e.Person);
}
public void BindData(Person p)
{
lblPersonHairColor.Text = p.HairColor;
lblPersonId.Text = p.IdPerson.ToString();
lblPersonName.Text = p.Name;
lblPersonNickName.Text = p.NickName;
}
}
Finally, here's the Form1.cs code:
private void comboBox1_SelectedIndexChanged(object sender, EventArgs e)
{
Mediator.GetInstance().OnPersonChanged(this, (Person)comboBox1.SelectedItem);
}
Because EventHandler<TEventArgs> expects an EventArgs-derived type as its parameter, I created this class with just a single property in it:
class PersonChangedEventArgs : EventArgs
{
public Person Person { get; set; }
}
Hopefully that shows you a bit about why we have events and how they are different -- but functionally the same -- as delegates.
A: You can also use events in interface declarations, not so for delegates.
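A minimal sketch of that point (type and member names are mine): an interface may declare an event, but it cannot declare a delegate-typed field:
using System;
public interface INotifyChanged
{
    event EventHandler Changed;        // legal: interfaces can declare events
    // EventHandler Changed;           // illegal: interfaces cannot declare fields
}
public class Widget : INotifyChanged
{
    public event EventHandler Changed; // the implementing class supplies the event
    protected void OnChanged()
    {
        var handler = Changed;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}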
A: Delegate is a type-safe function pointer. Event is an implementation of publisher-subscriber design pattern using delegate.
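A minimal illustration of that one-liner (all names are mine): the delegate is the type-safe function pointer type, and the event is the publisher/subscriber member built from it:
using System;
public delegate void PriceChangedHandler(decimal newPrice); // the "function pointer" type
public class Stock // the publisher
{
    public event PriceChangedHandler PriceChanged; // subscribers attach here with +=
    private decimal price;
    public decimal Price
    {
        get { return price; }
        set
        {
            price = value;
            var handler = PriceChanged;
            if (handler != null) handler(value); // publish to all subscribers
        }
    }
}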
A: An event in .net is a designated combination of an Add method and a Remove method, both of which expect some particular type of delegate. Both C# and vb.net can auto-generate code for the add and remove methods which will define a delegate to hold the event subscriptions, and add/remove the passed-in delegate to/from that subscription delegate. VB.net will also auto-generate code (with the RaiseEvent statement) to invoke the subscription list if and only if it is non-empty; for some reason, C# doesn't generate the latter.
Note that while it is common to manage event subscriptions using a multicast delegate, that is not the only means of doing so. From a public perspective, a would-be event subscriber needs to know how to let an object know it wants to receive events, but it does not need to know what mechanism the publisher will use to raise the events. Note also that while whoever defined the event data structure in .net apparently thought there should be a public means of raising them, neither C# nor vb.net makes use of that feature.
A: Here is another good link to refer to.
http://csharpindepth.com/Articles/Chapter2/Events.aspx
Briefly, the takeaway from the article: events are encapsulation over delegates.
Quote from article:
Suppose events didn't exist as a concept in C#/.NET. How would another class subscribe to an event? Three options:
*
*A public delegate variable
*A delegate variable backed by a property
*A delegate variable with AddXXXHandler and RemoveXXXHandler methods
Option 1 is clearly horrible, for all the normal reasons we abhor public variables.
Option 2 is slightly better, but allows subscribers to effectively override each other - it would be all too easy to write someInstance.MyEvent = eventHandler; which would replace any existing event handlers rather than adding a new one. In addition, you still need to write the properties.
Option 3 is basically what events give you, but with a guaranteed convention (generated by the compiler and backed by extra flags in the IL) and a "free" implementation if you're happy with the semantics that field-like events give you. Subscribing to and unsubscribing from events is encapsulated without allowing arbitrary access to the list of event handlers, and languages can make things simpler by providing syntax for both declaration and subscription.
A: An Event declaration adds a layer of abstraction and protection on the delegate instance. This protection prevents clients of the delegate from resetting the delegate and its invocation list and only allows adding or removing targets from the invocation list.
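A small sketch of what that protection looks like in practice (class and member names are mine):
using System;
public class Publisher
{
    public event EventHandler Tick;
    public void RaiseTick()
    {
        var handler = Tick;
        if (handler != null) handler(this, EventArgs.Empty); // only the declaring class can invoke
    }
}
class Client
{
    static void Main()
    {
        var p = new Publisher();
        EventHandler onTick = (s, e) => Console.WriteLine("tick");
        p.Tick += onTick;   // allowed: add a target to the invocation list
        p.RaiseTick();      // prints "tick"
        p.Tick -= onTick;   // allowed: remove that target again
        // p.Tick = null;              // compile error outside the declaring class
        // p.Tick(p, EventArgs.Empty); // compile error outside the declaring class
    }
}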
A: To define an event in a simple way:
An event is a REFERENCE to a delegate with two restrictions:
*
*Cannot be invoked directly
*Cannot be assigned values directly (e.g. eventObj = delegateMethod)
These two are the weak points of delegates, and they are addressed by events. A complete code sample showing the difference is in this fiddle: https://dotnetfiddle.net/5iR3fB .
Toggle the comments between the Event and Delegate versions, and the client code that invokes/assigns values to the delegate, to understand the difference.
Here is the inline code.
/*
This is a working program in Visual Studio. It does not run in fiddler because of the infinite loop in the code.
This code demonstrates the difference between an event and a delegate.
An event is a delegate reference with two restrictions for increased protection:
1. Cannot be invoked directly
2. Cannot assign a value to the delegate reference directly
Toggle between Event and Delegate in the code by commenting/uncommenting the relevant lines
*/
public class RoomTemperatureController
{
private int _roomTemperature = 25;//Default/Starting room Temperature
private bool _isAirConditionTurnedOn = false;//Default AC is Off
private bool _isHeatTurnedOn = false;//Default Heat is Off
private bool _tempSimulator = false;
public delegate void OnRoomTemperatureChange(int roomTemperature); //OnRoomTemperatureChange is a type of Delegate (Check next line for proof)
// public OnRoomTemperatureChange WhenRoomTemperatureChange; //A plain delegate field would expose the delegate directly to the outside world - unsafe
public event OnRoomTemperatureChange WhenRoomTemperatureChange; //The event exposes only subscribe/unsubscribe (+=/-=) to the outside world
public RoomTemperatureController()
{
WhenRoomTemperatureChange += InternalRoomTemperatuerHandler;
}
private void InternalRoomTemperatuerHandler(int roomTemp)
{
System.Console.WriteLine("Internal Room Temperature Handler - Mandatory to handle/ Should not be removed by external consumer of ths class: Note, if it is delegate this can be removed, if event cannot be removed");
}
//User cannot directly assign values to the delegate (e.g. roomTempControllerObj.OnRoomTemperatureChange = delegateMethod will not compile)
public bool TurnRoomTeperatureSimulator
{
set
{
_tempSimulator = value;
if (value)
{
SimulateRoomTemperature(); //Turn on Simulator
}
}
get { return _tempSimulator; }
}
public void TurnAirCondition(bool val)
{
_isAirConditionTurnedOn = val;
_isHeatTurnedOn = !val;//Binary switch: turning the AC on turns the heat off automatically (and vice versa)
System.Console.WriteLine("Aircondition :" + _isAirConditionTurnedOn);
System.Console.WriteLine("Heat :" + _isHeatTurnedOn);
}
public void TurnHeat(bool val)
{
_isHeatTurnedOn = val;
_isAirConditionTurnedOn = !val;//Binary switch: turning the heat on turns the AC off automatically (and vice versa)
System.Console.WriteLine("Aircondition :" + _isAirConditionTurnedOn);
System.Console.WriteLine("Heat :" + _isHeatTurnedOn);
}
public async void SimulateRoomTemperature()
{
while (_tempSimulator)
{
if (_isAirConditionTurnedOn)
_roomTemperature--;//Decrease Room Temperature if AC is turned On
if (_isHeatTurnedOn)
_roomTemperature++;//Increase room temperature if heat is turned on
System.Console.WriteLine("Temperature :" + _roomTemperature);
if (WhenRoomTemperatureChange != null)
WhenRoomTemperatureChange(_roomTemperature);
System.Threading.Thread.Sleep(500);//Every half second the temperature changes based on AC/heat status
}
}
}
public class MySweetHome
{
RoomTemperatureController roomController = null;
public MySweetHome()
{
roomController = new RoomTemperatureController();
roomController.WhenRoomTemperatureChange += TurnHeatOrACBasedOnTemp;
//roomController.WhenRoomTemperatureChange = null; //Setting NULL to a delegate reference is possible, whereas for an event it is not possible.
//roomController.WhenRoomTemperatureChange.DynamicInvoke();//Dynamic Invoke is possible for Delgate and not possible with Event
roomController.SimulateRoomTemperature();
System.Threading.Thread.Sleep(5000);
roomController.TurnAirCondition (true);
roomController.TurnRoomTeperatureSimulator = true;
}
public void TurnHeatOrACBasedOnTemp(int temp)
{
if (temp >= 30)
roomController.TurnAirCondition(true);
if (temp <= 15)
roomController.TurnHeat(true);
}
public static void Main(string []args)
{
MySweetHome home = new MySweetHome();
}
}
A: For people living in 2020 who want a clean answer...
Definitions:
*
*delegate: defines a function pointer.
*event: defines
*
*(1) protected interfaces, and
*(2) operations(+=, -=), and
*(3) advantage: you don't need to use new keyword anymore.
Regarding the adjective protected:
// eventTest.SomeoneSay = null; // Compile Error.
// eventTest.SomeoneSay = new Say(SayHello); // Compile Error.
Also notice this section from Microsoft: https://learn.microsoft.com/en-us/dotnet/standard/events/#raising-multiple-events
Code Example:
with delegate:
public class DelegateTest
{
public delegate void Say(); // Define a pointer type "void <- ()" named "Say".
private Say say;
public DelegateTest() {
say = new Say(SayHello); // Setup the field, Say say, first.
say += new Say(SayGoodBye);
say.Invoke();
}
public void SayHello() { /* display "Hello World!" to your GUI. */ }
public void SayGoodBye() { /* display "Good bye!" to your GUI. */ }
}
with event:
public class EventTest
{
public delegate void Say();
public event Say SomeoneSay; // Use the type "Say" to define event, an
// auto-setup-everything-good field for you.
public EventTest() {
SomeoneSay += SayHello;
SomeoneSay += SayGoodBye;
SomeoneSay();
}
public void SayHello() { /* display "Hello World!" to your GUI. */ }
public void SayGoodBye() { /* display "Good bye!" to your GUI. */ }
}
Reference:
Event vs. Delegate - Explaining the important differences between the Event and Delegate patterns in C# and why they're useful.: https://dzone.com/articles/event-vs-delegate
A: To understand the differences you can look at this 2 examples
Example with Delegates (in this case, an Action - that is a kind of delegate that doesn't return a value)
public class Animal
{
public Action Run {get; set;}
public void RaiseEvent()
{
if (Run != null)
{
Run();
}
}
}
To use the delegate, you should do something like this:
Animal animal= new Animal();
animal.Run += () => Console.WriteLine("I'm running");
animal.Run += () => Console.WriteLine("I'm still running") ;
animal.RaiseEvent();
This code works well, but it has some weak spots.
For example, if I write this:
animal.Run += () => Console.WriteLine("I'm running");
animal.Run += () => Console.WriteLine("I'm still running");
animal.Run = () => Console.WriteLine("I'm sleeping") ;
with the last line of code, I have overridden the previous behaviors just with one missing + (I have used = instead of +=)
Another weak spot is that every class which uses your Animal class can invoke the delegate directly. For example, animal.Run() or animal.Run.Invoke() are valid outside the Animal class.
To avoid these weak spots you can use events in c#.
Your Animal class will change in this way:
public class ArgsSpecial : EventArgs
{
public ArgsSpecial (string val)
{
Operation=val;
}
public string Operation {get; set;}
}
public class Animal
{
// Empty delegate. In this way you are sure that value is always != null
// because no one outside of the class can change it.
public event EventHandler<ArgsSpecial> Run = delegate {};
public void RaiseEvent()
{
Run(this, new ArgsSpecial("Run faster"));
}
}
to call events
Animal animal= new Animal();
animal.Run += (sender, e) => Console.WriteLine("I'm running. My value is {0}", e.Operation);
animal.RaiseEvent();
Differences:
*
*You aren't using a public property but a public field (using events, the compiler protects your fields from unwanted access)
*Events can't be assigned directly. In this case, it won't give rise to the previous problem that I showed with overriding the behavior.
*No one outside of your class can raise or invoke the event. For example, animal.Run() or animal.Run.Invoke() are invalid outside the Animal class and will produce compiler errors.
*Events can be included in an interface declaration, whereas a field cannot
Notes:
EventHandler is declared as the following delegate:
public delegate void EventHandler (object sender, EventArgs e);
it takes a sender (of Object type) and event arguments. The sender is null if it comes from static methods.
This example, which uses EventHandler<ArgsSpecial>, can also be written using EventHandler instead.
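For illustration (reusing the Animal and ArgsSpecial classes above), the non-generic form would look roughly like this; the subscriber then casts the EventArgs back:
// In the Animal class, with the non-generic EventHandler:
public event EventHandler Run = delegate { };
public void RaiseEvent()
{
    Run(this, new ArgsSpecial("Run faster")); // fine: ArgsSpecial derives from EventArgs
}
// Subscribing code:
animal.Run += (sender, e) =>
    Console.WriteLine("I'm running. My value is {0}", ((ArgsSpecial)e).Operation);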
Refer here for documentation about EventHandler
A: In addition to the syntactic and operational properties, there's also a semantical difference.
Delegates are, conceptually, function templates; that is, they express a contract a function must adhere to in order to be considered of the "type" of the delegate.
Events represent ... well, events. They are intended to alert someone when something happens and yes, they adhere to a delegate definition but they're not the same thing.
Even if they were exactly the same thing (syntactically and in the IL code) there will still remain the semantical difference. In general I prefer to have two different names for two different concepts, even if they are implemented in the same way (which doesn't mean I like to have the same code twice).
A: What a great misunderstanding between events and delegates!!! A delegate specifies a TYPE (such as a class, or an interface does), whereas an event is just a kind of MEMBER (such as fields, properties, etc). And, just like any other kind of member an event also has a type. Yet, in the case of an event, the type of the event must be specified by a delegate. For instance, you CANNOT declare an event of a type defined by an interface.
Concluding, we can make the following Observation: the type of an event MUST be defined by a delegate. This is the main relation between an event and a delegate and is described in the section II.18 Defining events of ECMA-335 (CLI) Partitions I to VI:
In typical usage, the TypeSpec (if present) identifies a delegate whose signature matches the arguments passed to the event’s fire method.
However, this fact does NOT imply that an event uses a backing delegate field. In truth, an event may use a backing field of any data structure type of your choice. If you implement an event explicitly in C#, you are free to choose the way you store the event handlers (note that event handlers are instances of the type of the event, which in turn is mandatorily a delegate type---from the previous Observation). You can store those event handlers (which are delegate instances) in a data structure such as a List or a Dictionary or anything else, or even in a backing delegate field. But don't forget that it is NOT mandatory that you use a delegate field.
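As a hedged sketch of that last point (the names here are mine, and thread safety is ignored for brevity), an explicitly implemented event can keep its handlers in a List instead of a multicast delegate field:
using System;
using System.Collections.Generic;
public class Sensor
{
    // The "backing field" is a List, not a multicast delegate.
    private readonly List<EventHandler> handlers = new List<EventHandler>();
    public event EventHandler ReadingTaken
    {
        add    { handlers.Add(value); }
        remove { handlers.Remove(value); }
    }
    protected void OnReadingTaken()
    {
        // Invoke each stored handler in subscription order.
        foreach (var handler in handlers.ToArray())
            handler(this, EventArgs.Empty);
    }
}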
Q: How do I make a PictureBox use Nearest Neighbor resampling? I am using StretchImage because the box is resizable with splitters. It looks like the default is some kind of smooth bilinear filtering, causing my image to be blurry and have moire patterns.
A: I suspect you're going to have to do the resizing manually through the Image class and DrawImage function, and respond to the resize events on the PictureBox.
A: I needed this functionality also. I made a class that inherits PictureBox, overrides OnPaint and adds a property to allow the interpolation mode to be set:
using System.Drawing.Drawing2D;
using System.Windows.Forms;
/// <summary>
/// Inherits from PictureBox; adds Interpolation Mode Setting
/// </summary>
public class PictureBoxWithInterpolationMode : PictureBox
{
public InterpolationMode InterpolationMode { get; set; }
protected override void OnPaint(PaintEventArgs paintEventArgs)
{
paintEventArgs.Graphics.InterpolationMode = InterpolationMode;
base.OnPaint(paintEventArgs);
}
}
A: I did an MSDN search, and it turns out there's an article on this, which is not very detailed but outlines that you should use the paint event.
http://msdn.microsoft.com/en-us/library/k0fsyd4e.aspx
I edited a commonly available image zooming example to use this feature, see below
Edited from: http://www.dotnetcurry.com/ShowArticle.aspx?ID=196&AspxAutoDetectCookieSupport=1
Hope this helps
private void Form1_Load(object sender, EventArgs e)
{
// set image location
imgOriginal = new Bitmap(Image.FromFile(@"C:\images\TestImage.bmp"));
picBox.Image = imgOriginal;
// set Picture Box Attributes
picBox.SizeMode = PictureBoxSizeMode.StretchImage;
// set Slider Attributes
zoomSlider.Minimum = 1;
zoomSlider.Maximum = 5;
zoomSlider.SmallChange = 1;
zoomSlider.LargeChange = 1;
zoomSlider.UseWaitCursor = false;
SetPictureBoxSize();
// reduce flickering
this.DoubleBuffered = true;
}
// picturebox size changed triggers paint event
private void SetPictureBoxSize()
{
Size s = new Size(Convert.ToInt32(imgOriginal.Width * zoomSlider.Value), Convert.ToInt32(imgOriginal.Height * zoomSlider.Value));
picBox.Size = s;
}
// looks for user trackbar changes
private void trackBar1_Scroll(object sender, EventArgs e)
{
if (zoomSlider.Value > 0)
{
SetPictureBoxSize();
}
}
// redraws image using nearest neighbour resampling
private void picBox_Paint_1(object sender, PaintEventArgs e)
{
e.Graphics.InterpolationMode = InterpolationMode.NearestNeighbor;
e.Graphics.DrawImage(
imgOriginal,
new Rectangle(0, 0, picBox.Width, picBox.Height),
// destination rectangle
0,
0, // upper-left corner of source rectangle
imgOriginal.Width, // width of source rectangle
imgOriginal.Height, // height of source rectangle
GraphicsUnit.Pixel);
}
A: When resizing an image in .NET, the System.Drawing.Drawing2D.InterpolationMode enumeration offers the following modes (a short usage sketch follows the list):
*
*Bicubic
*Bilinear
*High
*HighQualityBicubic
*HighQualityBilinear
*Low
*NearestNeighbor
*Default
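As a minimal sketch (the helper class is mine), any of these modes is applied by setting it on the Graphics object before calling DrawImage; NearestNeighbor is used here to match the question:
using System.Drawing;
using System.Drawing.Drawing2D;
static class ImageScaler
{
    // Scale an image with a chosen interpolation mode (NearestNeighbor here).
    public static Bitmap Scale(Image source, int width, int height)
    {
        var scaled = new Bitmap(width, height);
        using (var g = Graphics.FromImage(scaled))
        {
            g.InterpolationMode = InterpolationMode.NearestNeighbor;
            // Half-pixel offset keeps NearestNeighbor from clipping the top/left edge.
            g.PixelOffsetMode = PixelOffsetMode.Half;
            g.DrawImage(source, new Rectangle(0, 0, width, height));
        }
        return scaled;
    }
}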
Q: Deploying a Git subdirectory in Capistrano My master branch layout is like this:
/ <-- top level
/client <-- desktop client source files
/server <-- Rails app
What I'd like to do is only pull down the /server directory in my deploy.rb, but I can't seem to find any way to do that. The /client directory is huge, so setting up a hook to copy /server to / won't work very well, it needs to only pull down the Rails app.
A: Without any dirty forking action, but even dirtier!
In my config/deploy.rb :
set :deploy_subdir, "project/subdir"
Then I added this new strategy to my Capfile :
require 'capistrano/recipes/deploy/strategy/remote_cache'
class RemoteCacheSubdir < Capistrano::Deploy::Strategy::RemoteCache
private
def repository_cache_subdir
if configuration[:deploy_subdir] then
File.join(repository_cache, configuration[:deploy_subdir])
else
repository_cache
end
end
def copy_repository_cache
logger.trace "copying the cached version to #{configuration[:release_path]}"
if copy_exclude.empty?
run "cp -RPp #{repository_cache_subdir} #{configuration[:release_path]} && #{mark}"
else
exclusions = copy_exclude.map { |e| "--exclude=\"#{e}\"" }.join(' ')
run "rsync -lrpt #{exclusions} #{repository_cache_subdir}/* #{configuration[:release_path]} && #{mark}"
end
end
end
set :strategy, RemoteCacheSubdir.new(self)
A: For Capistrano 3.0, I use the following:
In my Capfile:
# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
def test
test! " [ -f #{repo_path}/HEAD ] "
end
def check
test! :git, :'ls-remote', repo_url
end
def clone
git :clone, '--mirror', repo_url, repo_path
end
def update
git :remote, :update
end
def release
git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
end
end
And in my deploy.rb:
# Set up a strategy to deploy only a project directory (not the whole repo)
set :git_strategy, RemoteCacheWithProjectRootStrategy
set :project_root, 'relative/path/from/your/repo'
All the important code is in the strategy release method, which uses git archive to archive only a subdirectory of the repo, then uses the --strip argument to tar to extract the archive at the right level.
UPDATE
As of Capistrano 3.3.3, you can now use the :repo_tree configuration variable, which makes this answer obsolete. For example:
set :repo_url, 'https://example.com/your_repo.git'
set :repo_tree, 'relative/path/from/your/repo' # relative path to project root in repo
See http://capistranorb.com/documentation/getting-started/configuration.
A: You can have two git repositories (client and server) and add them to a "super-project" (app). In this "super-project" you can add the two repositories as submodules (check this tutorial).
Another possible solution (a bit more dirty) is to have separate branches for client and server, and then you can pull from the 'server' branch.
A: There is a solution. Grab crdlo's patch for capistrano and the capistrano source from github. Remove your existing capistrano gem, apply the patch, run setup.rb install, and then you can use his very simple configuration line set :project, "mysubdirectory" to set a subdirectory.
The only gotcha is that apparently github doesn't "support the archive command" ... at least when he wrote it. I'm using my own private git repo over svn and it works fine, I haven't tried it with github but I imagine if enough people complain they'll add that feature.
Also see if you can get capistrano authors to add this feature into cap at the relevant bug.
A: For Capistrano 3, based on @Thomas Fankhauser answer:
set :repository, "[email protected]:name/project.git"
set :branch, "master"
set :subdir, "relative_path_to_my/subdir"
namespace :deploy do
desc "Checkout subdirectory and delete all the other stuff"
task :checkout_subdir do
subdir = fetch(:subdir)
subdir_last_folder = File.basename(subdir)
release_subdir_path = File.join(release_path, subdir)
tmp_base_folder = File.join("/tmp", "capistrano_subdir_hack")
tmp_destination = File.join(tmp_base_folder, subdir_last_folder)
cmd = []
# Settings for my-zsh
# cmd << "unsetopt nomatch && setopt rmstarsilent"
# create temporary folder
cmd << "mkdir -p #{tmp_base_folder}"
# delete previous temporary files
cmd << "rm -rf #{tmp_base_folder}/*"
# move subdir contents to tmp
cmd << "mv #{release_subdir_path}/ #{tmp_destination}"
# delete contents inside release
cmd << "rm -rf #{release_path}/*"
# move subdir contents to release
cmd << "mv #{tmp_destination}/* #{release_path}"
cmd = cmd.join(" && ")
on roles(:app) do
within release_path do
execute cmd
end
end
end
end
after "deploy:updating", "deploy:checkout_subdir"
A: We're also doing this with Capistrano by cloning down the full repository, deleting the unused files and folders and move the desired folder up the hierarchy.
deploy.rb
set :repository, "[email protected]:name/project.git"
set :branch, "master"
set :subdir, "server"
after "deploy:update_code", "deploy:checkout_subdir"
namespace :deploy do
desc "Checkout subdirectory and delete all the other stuff"
task :checkout_subdir do
run "mv #{current_release}/#{subdir}/ /tmp && rm -rf #{current_release}/* && mv /tmp/#{subdir}/* #{current_release}"
end
end
As long as the project doesn't get too big this works pretty good for us, but if you can, create an own repository for each component and group them together with git submodules.
A: Unfortunately, git provides no way to do this. Instead, the 'git way' is to have two repositories -- client and server, and clone the one(s) you need.
A: I created a snippet that works with Capistrano 3.x, based on @Thomas Fankhauser's answer and other information found on github:
# Usage:
# 1. Drop this file into lib/capistrano/remote_cache_with_project_root_strategy.rb
# 2. Add the following to your Capfile:
# require 'capistrano/git'
# require './lib/capistrano/remote_cache_with_project_root_strategy'
# 3. Add the following to your config/deploy.rb
# set :git_strategy, RemoteCacheWithProjectRootStrategy
# set :project_root, 'subdir/path'
# Define a new SCM strategy, so we can deploy only a subdirectory of our repo.
module RemoteCacheWithProjectRootStrategy
include Capistrano::Git::DefaultStrategy
def test
test! " [ -f #{repo_path}/HEAD ] "
end
def check
test! :git, :'ls-remote -h', repo_url
end
def clone
git :clone, '--mirror', repo_url, repo_path
end
def update
git :remote, :update
end
def release
git :archive, fetch(:branch), fetch(:project_root), '| tar -x -C', release_path, "--strip=#{fetch(:project_root).count('/')+1}"
end
end
It's also available as a Gist on Github.
A: Don't know if anyone is still interested in this, but just letting you guys know in case anyone is looking for an answer.
now we can use: :repo_tree
https://capistranorb.com/documentation/getting-started/configuration/
A: Looks like it's also not working with codebasehq.com, so I ended up making capistrano tasks that clean up the mess :-) Maybe there's actually a less hacky way of doing this by overriding some capistrano tasks...
A: This has been working for me for a few hours.
# Capistrano assumes that the repository root is Rails.root
namespace :uploads do
# We have the Rails application in a subdirectory rails_app
# Capistrano doesn't provide an elegant way to deal with that
# for the git case. (For subversion it is straightforward.)
task :mv_rails_app_dir, :roles => :app do
run "mv #{release_path}/rails_app/* #{release_path}/ "
end
end
before 'deploy:finalize_update', 'uploads:mv_rails_app_dir'
You might declare a variable for the directory (here rails_app).
Let's see how robust it is. Using "before" is pretty weak.
Q: SimpleModal breaks ASP.Net Postbacks I'm using jQuery and SimpleModal in an ASP.Net project to make some nice dialogs for a web app. Unfortunately, any buttons in a modal dialog can no longer execute their postbacks, which is not really acceptable.
There is one source I've found with a workaround, but for the life of me I can't get it to work, mostly because I am not fully understanding all of the necessary steps.
I also have a workaround, which is to replace the postbacks, but it's ugly and probably not the most reliable. I would really like to make the postbacks work again. Any ideas?
UPDATE: I should clarify, the postbacks are not working because the Javascript used to execute the post backs has broken in some way, so nothing happens at all when the button is clicked.
A: All standard ASP.NET postbacks work by calling a __doPostBack javascript method on the page. That function submits the form (ASP.NET only really likes one form per page) which includes some hidden input field in which all the viewstate and other goodness lives.
On the face of it I can't see anything in SimpleModal that would screw up your page's form or any of the standard hidden inputs, unless the contents of that modal happened to come from an HTTP GET to an ASP.NET page. That would result in two ASP.NET forms being rendered into one DOM, which would almost certainly screw up the __doPostBack function.
Have you considered using the ASP.NET AJAX ModalPopup control?
A: Web browsers will not POST any disabled or hidden form elements.
So what's happening is:
*
*The user clicks on a button in your dialog.
*The button calls SimpleModal's close() method, hiding the dialog and the button
*The client POSTs the form (without the button's ID)
*The ASP.NET framework can't figure out which button was clicked
*Your server-side code doesn't get executed.
The solution is to do whatever you need to do on the client (closing the dialog in this case) and then call __doPostback() yourself.
For example (where "dlg" is the client-side SimpleModal dialog reference):
btn.OnClientClick = string.Format("{0}; dlg.close();",
ClientScript.GetPostBackEventReference(btn, null));
That should hide the dialog, submit the form, and call whatever server-side event you have for that button.
@Dan
All standard ASP.NET postbacks work by calling a __doPostBack javascript method on the page.
asp:Buttons do not call __doPostback() because HTML input controls already submit the form.
A: Both of you were on the right track. What I realized is that SimpleModal appends the dialog to the body, which is outside ASP.Net's <form>, which breaks the functionality, since it can't find the elements.
To fix it, I just modified the SimpleModal source to append eveything to 'form' instead of 'body'. When I create the dialog, I also use the persist: true option, to make sure the buttons stay through opening and closing.
Thanks everyone for the suggestions!
UPDATE: Version 1.3 adds an appendTo option in the configuration for specifying which element the modal dialog should be appended to. Here are the docs.
A: Got caught out by this one - many thanks to tghw and all the other contributors on the appendTo-form-instead-of-body fix. (Resolved by the appendTo option in the 1.3 version.)
btw: If anyone needs to close the dialog programmatically from .net - you can use this type of syntax
private void CloseDialog()
{
string script = "closeDialog();";
ScriptManager.RegisterClientScriptBlock(this, typeof(Page), UniqueID, script, true);
}
where the javascript of closedialog is like this....
function closeDialog() {
$.modal.close();
}
A: I have found the following works without modifying simplemodal.js:
function modalShow(dialog) {
// if the user clicks "Save" in dialog
dialog.data.find('#ButtonSave').click(function(ev) {
ev.preventDefault();
//Perform validation
// close the dialog
$.modal.close();
//Fire the click event of the hidden button to cause a postback
dialog.data.find('#ButtonSaveTask').click();
});
dialog.data.find("#ButtonCancel").click(function(ev) {
ev.preventDefault();
$.modal.close();
});
}
So instead of using the buttons in the dialog to cause the postback you prevent their submit and then find a hidden button in the form and call its click event.
A: FWIW, I've updated the blog post you pointed to with some clarification, reposted here - the reasoning & other details are in the blog post:
The solution (as of my last checkin before lunch):
*
*Override the dialog's onClose event, and do the following:
*
*Call the dialog's default Close function
*Set the dialog div's innerHTML to a single
*Hijack __doPostBack, pointing it to a new function, newDoPostBack
From some comments I’ve seen on the web, point 1 needs some clarification. Unfortunately, I’m no longer with the same employer, and don’t have access to the code I used, but I’ll do what I can. First off, you need to override the dialog’s onClose function by defining a new function, and pointing your dialog to it, like this:
$('#myJQselector').modal({onClose: mynewClose});
*
*Call the dialog's default Close function. In the function you define, you should first call the default functionality (a best practice for just about anything you override usually):
*Set the dialog div's innerHTML to a single – This is not a required step, so skip it if you don’t understand this.
*Hijack __doPostBack, pointing it to a new function, newDoPostBack
function myNewClose (dialog)
{
dialog.close();
__doPostBack = newDoPostBack;
}
*
*Write the newDoPostBack function:
function newDoPostBack(eventTarget, eventArgument)
{
var theForm = document.forms[0];
if (!theForm)
{
theForm = document.aspnetForm;
}
if (!theForm.onsubmit || (theForm.onsubmit() != false))
{
document.getElementById("__EVENTTARGET").value = eventTarget;
document.getElementById("__EVENTARGUMENT").value = eventArgument;
theForm.submit();
}
}
A: The new Jquery.simplemodal-1.3.js has an option called appendTo. So add an option called appendTo:'form' because the default is appendTo:'body' which doesn't work in asp.net.
A: Had the same problem, but {appendTo:'form'} caused the modal popup to be rendered completely wrong (as though I had a CSS issue).
Turns out the template I'm building on top of has includes that put other forms on the page. Once I set {appendTo:'#aspnetForm'} (the default Asp.net form ID), everything worked great (including the postback).
A: In addition to tghw's answer, this excellent blog post helped me: jQuery: Fix your postbacks in Modal forms -- specifically BtnMike's comment: "You also must not have CssClass=”simplemodal-close” set on your asp:button." Taking that off the class was the not-obvious-to-me solution.
-John
A: If you don't want to modify the SimpleModal source, try this.
After you call the modal() method, add this:
$("#simplemodal-overlay").appendTo('form');
$("#simplemodal-container").appendTo('form');
The SimpleModal plugin adds these two elements to your markup:
*
*'simplemodal-overlay' for the background
*'simplemodal-container', containing the div that you want as the pop-up modal.
Q: (IIS/Win2000Pro) Granting Registry read rights to IIS user? Okay, so I'm running a small test webserver on my private network. I've got a machine running Windows 2000 Pro, and I'm trying to run an ASP.NET app through IIS.
I wrote it so that the webpage would use the registry to store certain settings (connection strings, potentially volatile locations of other web services, paths in the local filesystem where certain information is stored etc...) Of course, it worked fine when testing with VStudio.NET 2005, because the user running the app has elevated privileges. However, running it on IIS I get a "Access to the registry key 'HKEY_LOCAL_MACHINE\Software' is denied.", which suggests the IIS user doesn't have read access to that part of the registry (I only do reads through the website itself, never writes).
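For context, the kind of read involved is roughly the following (the key path and value name are hypothetical); it only ever needs read access to the key:
using Microsoft.Win32;
static string ReadSetting(string valueName)
{
    // Open HKLM\Software\MyApp read-only (writable: false).
    using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"Software\MyApp", false))
    {
        return (key != null) ? key.GetValue(valueName) as string : null;
    }
}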
I was like "okay, simple enough, I'll just go give that user rights to that part of the registry through regedit." The problem is, I don't see an option anywhere in regedit to change security settings... at all. Which got me thinking... I don't think I've ever actually had to change security settings for registry hives/keys before, and I don't think I know how to do it.
Half an hour of searching the web later, I haven't found any usable information on this subject. What I'm wondering is... how DO you change security rights to portions of the registry? I'm stumped, and it seems my ability to find the answer on Google is failing me utterly... and since I just signed up here, I figured I'd see if anyone here knew. =)
A: If you're having trouble with RegEdit in Windows 2000, you can try the following:
*
*Copy the Windows XP RegEdt32.exe to the Windows 2000 Machine
*Using a Windows XP Machine, connect to the Windows 2000 registry remotely: File > Connect Network Registry
A: You can set permissions at the folder level for which you want to grant user permissions read/write access.
In your case, right click on the "Software" folder and select "Permissions".
You'll probably know the rest from there.
EDIT: If you still run into issues, you may want to modify your web.config file and use impersonation to have your web application run as a certain user account. Then you can put a tighter reign on the controls.
A: RegEdt32.exe will allow you to set permissions to registry keys.
Simply right click on a Key (Folder) and click Permissions, then you can edit the permissions as you would an file system folder.
A: I did so, assuming that a Security setting would be available. I didn't see any "Security" option when I right-clicked on the Key. =( I triple-checked just to make sure... and I just tried it on my XP machine, and it does indeed have the "Permissions" section... but the Windows 2000 machine doesn't. (How's that for weird?)
In my searching, I found:
http://www.experts-exchange.com/Programming/Languages/.NET/ASP.NET/Q_21563044.html
Which notes that RegEdit for Windows 2000 doesn't have the Security/Permissions settings... but it proposes no solution to the problem. (Whoever asked the question was using Windows XP so he was okay... but in my case, it's 2000)
Is there any way to make it happen specifically in 2000?
EDIT: Ahhhh... if worse comes to worst, I suppose I can do the impersonation as mentioned above... though if I can't set security settings for the registry in 2000, I'm left with making that user have Administrative access (I assume?) to actually get those rights, which sadly defeats the purpose. =(
A: Oh, let me try that! I didn't realize you could remotely connect to another registry.
(EDIT: I was wrong, it did work... it just took several minutes to respond to my request to change permissions remotely)
The remote connection idea did it! You're good! Thanks so much for your help! I never realized you could remote connect with RegEdit... you learn something new every day, they say! =) Thanks again for your assistance! =)
On another note though, about copying the XP version of RegEdit to Windows 2000... is that safe? I figured they would be coded in such a way as to be incompatible... but I could be assuming too much. =)
A: Just use RegEdt32.exe instead of Regedit.exe.
Go to the desired key or folder, then open the security menu and click on 'permissions'.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: DoDragDrop and MouseUp Is there an easy way to ensure that after a drag-and-drop fails to complete, the MouseUp event isn't eaten up and ignored by the framework?
I have found a blog post describing one mechanism, but it involves a good deal of manual bookkeeping, including status flags, MouseMove events, manual "mouse leave" checking, etc. all of which I would rather not have to implement if it can be avoided.
A: I was recently wanting to put Drag and Drop functionality in my project and I hadn't come across this issue, but I was intrigued and really wanted to see if I could come up with a better method than the one described in the page you linked to. I hope I clearly understood everything you wanted to do and overall I think I succeeded in solving the problem in a much more elegant and simple fashion.
On a quick side note, for problems like this it would be great if you provide some code so we can see exactly what it is you are trying to do. I say this only because I assumed a few things about your code in my solution...so hopefully it's pretty close.
Here's the code, which I will explain below:
this.LabelDrag.QueryContinueDrag += new System.Windows.Forms.QueryContinueDragEventHandler(this.LabelDrag_QueryContinueDrag);
this.LabelDrag.MouseDown += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseDown);
this.LabelDrag.MouseUp += new System.Windows.Forms.MouseEventHandler(this.LabelDrag_MouseUp);
this.LabelDrop.DragDrop += new System.Windows.Forms.DragEventHandler(this.LabelDrop_DragDrop);
this.LabelDrop.DragEnter += new System.Windows.Forms.DragEventHandler(this.LabelMain_DragEnter);
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void LabelDrop_DragDrop(object sender, DragEventArgs e)
    {
        LabelDrop.Text = e.Data.GetData(DataFormats.Text).ToString();
    }

    private void LabelMain_DragEnter(object sender, DragEventArgs e)
    {
        if (e.Data.GetDataPresent(DataFormats.Text))
            e.Effect = DragDropEffects.Copy;
        else
            e.Effect = DragDropEffects.None;
    }

    private void LabelDrag_MouseDown(object sender, MouseEventArgs e)
    {
        // EXTREMELY IMPORTANT - MUST CALL LabelDrag's DoDragDrop method!!
        // Calling the Form's DoDragDrop WILL NOT allow QueryContinueDrag to fire!
        ((Label)sender).DoDragDrop(TextMain.Text, DragDropEffects.Copy);
    }

    private void LabelDrag_MouseUp(object sender, MouseEventArgs e)
    {
        LabelDrop.Text = "LabelDrag_MouseUp";
    }

    private void LabelDrag_QueryContinueDrag(object sender, QueryContinueDragEventArgs e)
    {
        // Get the rect of LabelDrop
        Rectangle rect = new Rectangle(LabelDrop.Location, new Size(LabelDrop.Width, LabelDrop.Height));

        // If the left mouse button is up and the mouse is not currently over LabelDrop
        if (Control.MouseButtons != MouseButtons.Left && !rect.Contains(PointToClient(Control.MousePosition)))
        {
            // Cancel the DragDrop action
            e.Action = DragAction.Cancel;

            // Manually fire the MouseUp event
            LabelDrag_MouseUp(sender, new MouseEventArgs(Control.MouseButtons, 0, Control.MousePosition.X, Control.MousePosition.Y, 0));
        }
    }
}
I have left out most of the designer code, but included the event handler hookup code so you can be sure what is linked to what. In my example, the drag/drop is occurring between the labels LabelDrag and LabelDrop.
The main piece of my solution is using the QueryContinueDrag event. This event fires when the keyboard or mouse state changes after DoDragDrop has been called on that control. You may already be doing this, but it is very important that you call the DoDragDrop method of the control that is your source and not the method associated with the form. Otherwise QueryContinueDrag will NOT fire!
One thing to note is that QueryContinueDrag will actually fire when you release the mouse on the drop control so we need to make sure we allow for that. This is handled by checking that the Mouse position (retrieved with the global Control.MousePosition property) is inside of the LabelDrop control rectangle. You must also be sure to convert MousePosition to a point relative to the Client Window with PointToClient as Control.MousePosition returns a screen relative position.
So by checking that the mouse is not over the drop control and that the mouse button is now up we have effectively captured a MouseUp event for the LabelDrag control! :) Now, you could just do whatever processing you want to do here, but if you already have code you are using in the MouseUp event handler, this is not efficient. So just call your MouseUp event from here, passing it the necessary parameters and the MouseUp handler won't ever know the difference.
Just a note though, as I call DoDragDrop from within the MouseDown event handler in my example, this code should never actually get a direct MouseUp event to fire. I just put that code in there to show that it is possible to do it.
Hope that helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Off-the-Shelf C++ Hex Dump Code I work a lot with network and serial communications software, so it is often necessary for me to have code to display or log hex dumps of data packets.
Every time I do this, I write yet another hex-dump routine from scratch. I'm about to do so again, but figured I'd ask here: Is there any good free hex dump code for C++ out there somewhere?
Features I'd like:
*
*N bytes per line (where N is somehow configurable)
*optional ASCII/UTF8 dump alongside the hex
*configurable indentation, per-line prefixes, per-line suffixes, etc.
*minimal dependencies (ideally, I'd like the code to all be in a header file, or be a snippet I can just paste in)
Edit: Clarification: I am looking for code that I can easily drop in to my own programs to write to stderr, stdout, log files, or other such output streams. I'm not looking for a command-line hex dump utility.
A: The unix tool xxd is distributed as part of vim, and according to http://www.vmunix.com/vim/util.html#xxd, the source for xxd is ftp://ftp.uni-erlangen.de:21/pub/utilities/etc/xxd-1.10.tar.gz. It was written in C and is about 721 lines. The only licensing information given for it is this:
* Distribute freely and credit me,
* make money and share with me,
* lose money and don't ask me.
The unix tool hexdump is available from http://gd.tuwien.ac.at/softeng/Aegis/hexdump.html. It was written in C and can be compiled from source. It's quite a bit bigger than xxd, and is distributed under the GPL.
A: I often use this little snippet I've written long time ago. It's short and easy to add anywhere when debugging etc...
#include <ctype.h>
#include <stdio.h>

void hexdump(void *ptr, int buflen) {
    unsigned char *buf = (unsigned char*)ptr;
    int i, j;
    for (i = 0; i < buflen; i += 16) {
        printf("%06x: ", i);
        for (j = 0; j < 16; j++)
            if (i + j < buflen)
                printf("%02x ", buf[i + j]);
            else
                printf("   "); /* three spaces keep the ASCII column aligned */
        printf(" ");
        for (j = 0; j < 16; j++)
            if (i + j < buflen)
                printf("%c", isprint(buf[i + j]) ? buf[i + j] : '.');
        printf("\n");
    }
}
A: Just in case someone finds it useful...
I've found a single-function implementation of an ASCII/hex dumper in this answer.
A C++ version based on the same answer with ANSI terminal colours can be found here.
More lightweight than xxd.
A: Could you write your own dissector for Wireshark?
Edit: written before the clarification was added to the question
A: I have seen PSPad used as a hex editor, but I usually do the same thing you do. I'm surprised there's not an "instant answer" for this question. It's a very common need.
A: I used this in one of my internal tools at work.
A: xxd is the 'standard' hex dump util and looks like it should solve your problems
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: How do I create an xml document in python Here is my sample code:
import sys
from xml.dom.minidom import *

def make_xml():
    doc = Document()
    node = doc.createElement('foo')
    node.innerText = 'bar'
    doc.appendChild(node)
    return doc

if __name__ == '__main__':
    make_xml().writexml(sys.stdout)
when I run the above code I get this:
<?xml version="1.0" ?>
<foo/>
I would like to get:
<?xml version="1.0" ?>
<foo>bar</foo>
I just guessed that there was an innerText property, it gives no compiler error, but does not seem to work... how do I go about creating a text node?
A: Setting an attribute on an object won't give a compile-time or a run-time error, it will just do nothing useful if the object doesn't access it (i.e. "node.noSuchAttr = 'bar'" would also not give an error).
Unless you need a specific feature of minidom, I would look at ElementTree:
import sys
from xml.etree.cElementTree import Element, ElementTree

def make_xml():
    node = Element('foo')
    node.text = 'bar'
    doc = ElementTree(node)
    return doc

if __name__ == '__main__':
    make_xml().write(sys.stdout)
A: @Daniel
Thanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between ElementTree and minidom)
import sys
from xml.dom.minidom import *

def make_xml():
    doc = Document()
    node = doc.createElement('foo')
    node.appendChild(doc.createTextNode('bar'))
    doc.appendChild(node)
    return doc

if __name__ == '__main__':
    make_xml().writexml(sys.stdout)
I swear I tried this before posting my question...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: HTML Select Tag with black background - dropdown triangle is invisible in Firefox 3 I have the following HTML (note the CSS making the background black and text white)
<html>
<select id="opts" style="background-color: black; color: white;">
<option>first</option>
<option>second</option>
</select>
</html>
Safari is smart enough to make the small triangle that appears to the right of the text the same color as the foreground text.
Other browsers basically ignore the CSS, so they're fine too.
Firefox 3, however, applies the background color but leaves the triangle black, so you can't see it.
I can't find out how to fix this - can anyone help? Is there a -moz-select-triangle-color or something obscure like that?
A: Must be a Vista problem. I have XP SP 2 and it looks normal.
A: The problem with the fix above is that it doesn't work in Safari - you end up with the white background showing up, which looks bad. I got round this by using this Moz-specific pseudo-class:
select:-moz-system-metric(windows-default-theme) {
background-image: url(../images/selectBox.gif);
background-position: right;
background-repeat: no-repeat;
}
In theory this only applies this CSS if a fancy Windows theme is in effect, see this https://developer.mozilla.org/en/CSS/%3a-moz-system-metric(windows-default-theme)
A: Does the button need to be black? You could apply the black background to the options instead.
A: To make the little black arrow show on vista (with a black background), I made a white box gif and used the following CSS:
select {
background-image: url(../images/selectBox.gif);
background-position: right;
background-repeat: no-repeat;
}
A:
I dropped that code into a file and pushed it to ff3 and I don't see what you see...the arrow is default color with gray background and black arrow.
Are you styling scrollbars too?
I've updated the post, the HTML in there is now literally all the html that is being loaded, no other CSS/JS or anything, and it still looks exactly as posted in the pic.
Note I'm on vista. It may do different things on XP, I haven't checked
A:
Must be a Vista problem. I have XP SP 2 and it looks normal.
So it is.
I tried it on XP and it's fine, and on vista with the theme set to windows classic it's also fine. Must just be a bug in the firefox-vista-aero theme.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What versions of Visual Studio can be installed concurrently? Are there any conflicts with having any combination of Visual Studio 2003, 2005 and/or 2008 installed? I noticed a related question here but wanted a more general answer.
A: 6, 2002 (that's the .NET 1.0 one), 2003, 2005, 2008... of course within .NET you may have issues with getting the right solution with the right version. I haven't really seen any conflicts in particular.
A: Just make sure you only have RTM versions and not Beta or RC versions installed. You'll have no end of pain if you don't cleanly remove the beta or RC versions before installing the RTM versions.
A: I have all 3 installed and have had no adverse problems...knocking on wood
A: 6/2002/2003/2005/2008, I believe, can all coexist.
Though just this weekend I purged 'em all except 2008 as it went totally mad and stopped showing the build output. Plus my splash screen wasn't right. Now it is.
A: I've got 2005 and 2008 installed concurrently.
2008 is a superset of 2005, so I have no reason whatsoever to have them both, I just haven't gotten around to un-installing it yet
A: The only minor problem I had was that I installed 03 after 08, and all my solutions then became assigned to 03. Assigning them to the version selector instead was all I needed to do.
A: Yes no problems, and I typically have 2-3 versions installed at the same time. 2003 is the one I haven't used much, but my production code is currently split between 2005, 2008, and 2010. over the next year all the 2005 code will be moved to 2010 and .NET 4, so it will be installed.
I would have VS6 installed for legacy support but I have to run at it in a VM because Win7 doesn't like it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Windows Vista: Unable to load DLL 'x.dll': Invalid access to memory location. (DllNotFoundException) I was testing on a customer's box this afternoon which has Windows Vista (He had home, but I am testing on a Business Edition with same results).
We make use of a .DLL that gets the Hardware ID of the computer. Its usage is very simple and the sample program I have created works. The DLL is this one from AzSdk.
In fact, this works perfectly under Windows XP. However, for some strange reason, inside our project (way bigger), we get this exception:
Exception Type: System.DllNotFoundException
Exception Message: Unable to load DLL 'HardwareID.dll': Invalid access to memory location. (Exception from HRESULT: 0x800703E6)
Exception Target Site: GetHardwareID
I don't know what can be causing the problem, since I have full control over the folder. The project is a C# .NET Windows Forms application and everything works fine, except the call to the external library.
I am declaring it like this: (note: it's not a COM library and it doesn't need to be registered).
[DllImport("HardwareID.dll")]
public static extern String GetHardwareID(bool HDD,
bool NIC, bool CPU, bool BIOS, string sRegistrationCode);
And then the calling code is quite simple:
private void button1_Click(object sender, EventArgs e)
{
textBox1.Text = GetHardwareID(cb_HDD.Checked,
cb_NIC.Checked,
cb_CPU.Checked,
cb_BIOS.Checked,
"*Registration Code*");
}
When you create a sample application, it works, but inside my project it doesn't. Under XP it works fine. Any ideas about what I should do in Vista to make this work?
As I've said, the folder and its sub-folders have Full Control for "Everybody".
UPDATE: I do not have Vista SP 1 installed.
UPDATE 2: I have installed Vista SP1 and now, with UAC disabled, not even the simple sample works!!! :( Damn Vista.
A:
Unable to load DLL 'HardwareID.dll':
Invalid access to memory location.
(Exception from HRESULT: 0x800703E6)
The name of DllNotFoundException is confusing you - this isn't a problem with finding or loading the DLL file, the problem is that when the DLL is loaded, it does an illegal memory access which causes the loading process to fail.
Like another poster here, I think this is a DEP problem, and that your UAC, etc, changes have finally allowed you to disable DEP for this application.
A: @Martín
The reason you were not getting the UAC prompt is because UAC can only change how a process is started; once the process is running it must stay at the same elevation level. The UAC prompt will happen if:
*
*Vista thinks it's an installer (lots of rules here, the simplest one is if it's called "setup.exe"),
*If it's flagged as "Run as Administrator" (you can edit this by changing the properties of the shortcut or the exe), or
*If the exe contains a manifest requesting admin privileges.
The first two options are workarounds for 'legacy' applications that were around before UAC, the correct way to do it for new applications is to embed a manifest resource asking for the privileges that you need.
Some programs, such as Process Explorer, appear to elevate a running process (when you choose "Show details for all processes" in the file menu in this case) but what they really do is start a new instance, and it's that new instance that gets elevated - not the one that was originally running. This is the recommended way of doing it if only some parts of your application need elevation (e.g. a special 'admin options' dialog).
A: Is the machine you have the code deployed on a 64-bit machine? You could also be running into a DEP issue.
Edit
This is a 1st gen Macbook Pro with a 1st gen Core Duo 2 Intel processor. Far from 64 bits.
I mentioned 64 bit, because at low levels structs from 32 bit to 64 bit do not get properly handled. Since the machines aren't 64bit, then more than likely disabling DEP would be a good logical next step. Vista did get more secure than XP SP2.
Well, I've just turned DEP globally off to no avail. Same error.
Well, I also read that people were getting this error after updating a machine to Vista SP1. Do these Vista installs have SP1 on them?
Turns out to be something completely different. Just for the sake of testing, I've disabled the UAC (note: I was not getting any prompt).
Great, I was actually going to suggest that, but I figured you probably tried it already.
A: Have you made a support request to the vendor? Perhaps there's something about the MacBook Pro hardware that prevents the product from working.
A: Given that the exception is a DllNotFoundException, you might want to try checking the HardwareID.dll with Dependency Walker BEFORE installing any dev tools on the Vista install to see if there is in fact a dependency missing.
A:
In addition to allowing full control to "Everyone" does the location also allow processes with a medium integrity level to write?
How do I check that? I am new to Vista and I don't like it too much; it's too slow inside a VM for daily work and Visual Studio usage, and it doesn't bring anything new.
From a command prompt you can execute:
icacls C:\Folder
If you see a line such as "Mandatory Label\High Mandatory Level" then the folder is only accessible to a high integrity process. If there is no such line then medium integrity processes can access it provided there are no other ACLs denying access (based on user for example).
EDIT: Forgot to mention you can use the /setintegritylevel switch to actually change the required integrity level for accessing the object.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Version Control for Graphics Say a development team includes (or makes use of) graphic artists who create all the images that go into a product. Such things include icons, bitmaps, window backgrounds, button images, animations, etc.
Obviously, everything needed to build a piece of software should be under some form of version control. But most version control systems for developers are designed primarily for text-based information. Should the graphics people use the same version-control system and repository that the coders do? If not, what should they use, and what is the best way to keep everything synchronized?
A: We, too, just put the binaries in source control. We use Git, but it would apply just as well to Subversion.
One suggestion I have is to use SVGs where possible, because you can see actual differences. With binaries (most other image formats), the best you can get is a version history.
A: A lot of the graphics type people will want something more sophisticated than subversion. While it's good for version control, they will want a content management system that allows cross-referencing of assets, tagging, thumbnails and that sort of thing (as well as versioning).
A: TortoiseSVN can show image revisions side-by-side, which is really useful. I've used it with different teams with a great degree of success. The artists loved having the ability to roll back things (after they got used to the concepts). It does take a lot of space, though.
A: Interesting question. I don't have a bunch of experience working directly with designers on a project. When I have, it's been through a contractual sort of agreement where they "delivered" a design. I have done some of my own design work for both web sites and desktop applications, and though I have not used source control in the past, I am in the process of implementing SVN for my own use as I am starting to do some paid freelance work. I intend to utilize version/source control precisely the way I would with source code. It just becomes another folder in the project trunk. The way I have worked without source control is to create an assets folder in which all media files that are equivalents of source code reside. I like to think of Photoshop PSD's as graphics source code while the JPEG output for a web site or otherwise is the compiled version.
In the case of working with designers, which is a distinct possibility I face in the near future, I'd like to make an attempt to have them "check-in" their different versions of their source files on a regular basis. I'll be curious to read what others with some experience will say in response to this.
A: We use subversion. Just place a folder under /trunk/docs for comps and have designers check out and commit to that folder. Works like a champ.
A: I would definitely put the graphics under version control. The diff might not be very useful from within a diff tool like diffmerge, but you can still checkout two versions of the graphic and view them side by side to see the differences.
I don't see any reason why the resultant graphics shouldn't be kept in the same version control system that the coders use. However, when you're creating graphics using PSD files or PDN files you might want to create a separate repository for those, as they have a different context to the actual end JPEG or GIF that is produced and deployed with the developed application.
A: @lomaxx TortoiseSVN includes a program called TortoiseIDiff which looks to be a diff for images. I haven't used it but looks intriguing.
A: In my opinion Pixelapse combined with a backup solution is the best version control software for graphics that I've found thus far. It supports Adobe files and a bunch of normal raster images. It has version-by-version preview. It autosaves when the files update (on save). It works like Dropbox but has a great web interface.
You can use it in teams and share projects to different people. It also support infinite reviewers which is great for design agencies. And if you want you can publicly collaborate on projects that are "open".
Unfortunately you can't have a local pixelapse server, so for backup my current setup is that I have the Pixelapse folder(like a dropbox folder) inside a git repo for snapshot creation.
A: Yes, having art assets in version control is very useful. You get the ability to track history, roll back changes, and you have a single source to do backups with. Keep in mind that art assets are MUCH larger so your server needs to have lots of disk space & network bandwidth.
I've had success with using perforce on very large projects (+100 GB), however we had to wrap access to the version control server with something a little more artist friendly.
I've heard some good things about Alienbrain as well, it does seem to have a very slick UI.
A: GitHub recently introduced "image view modes", take a look: https://github.com/blog/817-behold-image-view-modes.
A: With respect to diff and merging, I think the version control is more critical for graphics and media elements. If you think about it, most designers are going to be the sole owners of a file -- at least in the case of graphics -- or at least I would think that'd be the case. I'd be curious to hear from a designer.
A: @Damian - Good point about the tagging and cross referencing. That's true; while I haven't working with many designers on a software development project, I have worked for a company that had a design department and know that this is an issue. Designers are still (perpetually) looking for the perfect system to handle this sort of thing. I think this is more suited to a design department for shared access, searching and versioning, etc to all assets -- where there is a business incentive to not reinvent the wheel wherever/whenever possible. I don't think it would apply for a project-oriented manner as tagging and cross referencing wouldn't be quite as applicable.
A: We keep the binary files and images in revision control, using Perforce. It's great!
We keep a lot of art assets, and it scales well for lots of large files. It recognizes binary files, the ones that can't be diffed, and stores them as full file copies in the back end.
It has P4V (cross-platform visual browser), and a thumbnail system so image files can be seen in the browser.
A: You might want to take a look at Boar: "Simple version control and backup for photos, videos and other binary files". It can handle binary files of any size. http://code.google.com/p/boar/
A: A free and slightly wonky solution is Adobe Version Cue; it comes with the Adobe Suites up to CS4 and is easy to install and maintain. It offers user-level control and is artist friendly. Adobe has discontinued support for it though, which is a shame. Adobe Bridge acts as the client between the user and the Version Cue server. If used properly it's an inexpensive solution to version control. I use CS3 Version Cue with CS3 Bridge. Works great for small teams.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
} |
Q: How to Track Queries on a Linq-to-sql DataContext In the herding code podcast 14 someone mentions that stackoverflow displayed the queries that were executed during a request at the bottom of the page.
It sounds like an excellent idea to me. Every time a page loads I want to know what sql statements are executed and also a count of the total number of DB round trips.
Does anyone have a neat solution to this problem?
What do you think is an acceptable number of queries? I was thinking that during development I might have my application throw an exception if more than 30 queries are required to render a page.
EDIT: I think I must not have explained my question clearly. During a HTTP request a web application might execute a dozen or more sql statements. I want to have those statements appended to the bottom of the page, along with a count of the number of statements.
HERE IS MY SOLUTION:
I created a TextWriter class that the DataContext can write to:
public class Logger : StreamWriter
{
    public string Buffer { get; private set; }
    public int QueryCounter { get; private set; }

    public Logger() : base(new MemoryStream())
    { }

    public override void Write(string value)
    {
        Buffer += value + "<br/><br/>";
        if (!value.StartsWith("--")) QueryCounter++;
    }

    public override void WriteLine(string value)
    {
        Buffer += value + "<br/><br/>";
        if (!value.StartsWith("--")) QueryCounter++;
    }
}
In the DataContext's constructor I setup the logger:
public HeraldDBDataContext()
    : base(ConfigurationManager.ConnectionStrings["Herald"].ConnectionString, mappingSource)
{
    Log = new Logger();
}
Finally, I use the Application_OnEndRequest event to add the results to the bottom of the page:
protected void Application_OnEndRequest(Object sender, EventArgs e)
{
    Logger logger = DataContextFactory.Context.Log as Logger;
    Response.Write("Query count : " + logger.QueryCounter);
    Response.Write("<br/><br/>");
    Response.Write(logger.Buffer);
}
A: If you call .ToString() on a query variable you get the SQL. You can also use this while debugging in VS2008 with the Debug Visualizer.
ex:
var query = from p in db.Table
            select p;
MessageBox.Show(query.ToString());
A: System.IO.StreamWriter httpResponseStreamWriter =
       new StreamWriter(HttpContext.Current.Response.OutputStream);
dataContext.Log = httpResponseStreamWriter;
Stick that in your page and you'll get the SQL dumped out on the page. Obviously, I'd wrap that in a little method that you can enable/disable.
A: I have a post on my blog that covers sending to log files, memory, the debug window or multiple writers.
A: From Linq in Action
Microsoft has a Query Visualizer tool that can be downloaded separately from VS 2008. It is at http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Which 3D cards support full scene antialiasing? Is there a list of 3D cards available that provide full scene antialiasing as well as which are able to do it in hardware (decent performance)?
A: Pretty much all cards since DX7-level technology (GeForce 2 / Radeon 7000) can do it. Most notable exceptions are Intel cards (Intel 945 aka GMA 950 and earlier can't do it; I think Intel 965 aka GMA X3100 can't do it either).
Older cards (GeForce 2 / 4MX, Radeon 7000-9250) were using supersampling (render everything into internally larger buffer, downsample at the end). All later cards have multisampling, where this expensive process is only performed at polygon edges (simply speaking, shaders are run for each pixel, while depth/coverage is stored for each sample).
A: Off the top of my head, pretty much any card since a GeForce 2 or so can do it. There's always a performance hit, which varies with the card and AA mode (of which there are about 100 different kinds), but generally it's quite significant.
A: Agree with Orion Edwards, pretty much everything new can. Performance also depends greatly on the resolution you run at.
A: Integrated GPUs are going to be really poor performers with games FSAA or no. If you want even moderate performance, buy a separate video card.
For something that's not crazy expensive go with either a nVidia Geforce 8000 series card or an ATI 3000 series card. Even as a nVidia 8800 GTS owner, I will tell you the ATIs have better support for older games.
Although I personally still like FSAA, it is becoming less important with higher resolution screens. Also, more and more games are using deferred rendering which makes FSAA impossible.
A: Yes, of course integrated cards are awful. :) But this wasn't a question about gaming, but rather about an application that we are writing that will use OpenGL/D3D for 3D rendering. The 3D scene is relatively small, but antialiasing makes a dramatic difference in terms of the quality of the rendering. We are curious if there is some way to easily determine which cards support these features fully and which do not.
With the exception of the 3100, so far all of the cards we've found that do antialiasing are plenty fast for our purposes (as is my GeForce 9500).
A: Having seen a pile of machines recently that don't do it, I don't think that's quite true. The GMA 950 integrated ones don't do it to start with, and I don't think that the 3100/X3100 do either (at least not in hardware... the 3100 was enormously slow in a demo). Also, I don't believe that the GeForce MX5200 supported it either.
Or perhaps I'm just misunderstanding what you mean when you refer to "AA mode". Are there a lot of cards which support modes that are virtually unnoticable? :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I create a hash table in Java? What is the most straightforward way to create a hash table (or associative array...) in Java? My google-fu has turned up a couple examples, but is there a standard way to do this?
And is there a way to populate the table with a list of key->value pairs without individually calling an add method on the object for each pair?
A: Also don't forget that both Map and Hashtable are generic in Java 5 and up (as in any other class in the Collections framework).
Map<String, Integer> numbers = new HashMap<String, Integer>();
numbers.put("one", 1);
numbers.put("two", 2);
numbers.put("three", 3);
Integer one = numbers.get("one");
Assert.assertEquals(1, one);
A: Map map = new HashMap();
Hashtable ht = new Hashtable();
Both classes can be found in the java.util package. The difference between the two is explained in the following jGuru FAQ entry.
A: You can use double-braces to set up the data. You still call add, or put, but it's less ugly:
private static final Hashtable<String,Integer> MYHASH = new Hashtable<String,Integer>() {{
put("foo", 1);
put("bar", 256);
put("data", 3);
put("moredata", 27);
put("hello", 32);
put("world", 65536);
}};
A: import java.util.HashMap;
Map map = new HashMap();
A: What Edmund said.
As for not calling .add all the time, no, not idiomatically. There would be various hacks (storing it in an array and then looping) that you could do if you really wanted to, but I wouldn't recommend it.
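For illustration only, that array-and-loop hack might look something like this (names are made up):
import java.util.HashMap;
import java.util.Map;

public class SeedMap {
    public static void main(String[] args) {
        // Seed the map from a 2-D array -- each inner array is one key->value pair
        String[][] pairs = { {"one", "1"}, {"two", "2"}, {"three", "3"} };
        Map<String, String> map = new HashMap<String, String>();
        for (String[] pair : pairs) {
            map.put(pair[0], pair[1]);
        }
        System.out.println(map); // {one=1, two=2, three=3} (iteration order not guaranteed)
    }
}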
A:
And is there a way to populate the table with a list of key->value pairs without individually calling an add method on the object for each pair?
One problem with your question is that you don't mention what form your data is in to begin with. If your list of pairs happened to be a list of Map.Entry objects it would be pretty easy.
Just to throw this out, there is a (much maligned) class named java.util.Properties that is an extension of Hashtable. It expects only String keys and values and lets you load and store the data using files or streams. The format of the file it reads and writes is as follows:
key1=value1
key2=value2
I don't know if this is what you're looking for, but there are situations where this can be useful.
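For instance, here's a minimal sketch of round-tripping that format with Properties (the file name is just an example):
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class PropsDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("key1", "value1");
        props.setProperty("key2", "value2");

        // store() writes the key=value format shown above
        FileOutputStream out = new FileOutputStream("settings.properties");
        props.store(out, "optional header comment");
        out.close();

        // load() reads the whole file back in one call --
        // no per-pair put() needed on your part
        Properties loaded = new Properties();
        FileInputStream in = new FileInputStream("settings.properties");
        loaded.load(in);
        in.close();

        System.out.println(loaded.getProperty("key1")); // prints "value1"
    }
}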
A: It is important to note that Java's hash function is less than optimal. If you want fewer collisions and almost complete elimination of re-hashing at ~50% capacity, I'd use a Buz Hash algorithm.
The reason Java's hashing algorithm is weak is most evident in how it hashes Strings.
"a".hashCode() gives you the ASCII representation of "a" - 97, so "b" would be 98. The whole point of hashing is to assign an arbitrary and "as random as possible" number.
If you need a quick and dirty hash table, by all means, use java.util. If you are looking for something robust that is more scalable, I'd look into implementing your own.
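To see the String behavior concretely, here's a small demo; the printed values follow from String.hashCode()'s documented 31-based polynomial, s[0]*31^(n-1) + ... + s[n-1]:
public class HashDemo {
    public static void main(String[] args) {
        // For a single character, the hash is just the char value
        System.out.println("a".hashCode()); // 97
        System.out.println("b".hashCode()); // 98

        // For longer strings it's the 31-based polynomial
        System.out.println("ab".hashCode()); // 3105
        System.out.println('a' * 31 + 'b');  // 3105 -- same value
    }
}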
A: Hashtable<Object, Double> hashTable = new Hashtable<>();
// ... put values ...

// get max
Optional<Double> optionalMax = hashTable.values().stream().max(Comparator.naturalOrder());
if (optionalMax.isPresent())
    System.out.println(optionalMax.get());
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Best practices for development environment and API dev? My current employer uses a 3rd party hosted CRM provider and we have a fairly sophisticated integration tier between the two systems. Amongst the capabilities of the CRM provider is for developers to author business logic in a Java like language and on events such as the user clicking a button or submitting a new account into the system, have validation and/or business logic fire off.
One of the capabilities that we make use of is for that business code running on the hosted provider to invoke web services that we host. The canonical example is a sales rep entering in a new sales lead and hitting a button to ping our systems to see if we can identify that new lead based on email address, company/first/last name, etc, and if so, return back an internal GUID that represents that individual. This all works for us fine, but we've run into a wall again and again in trying to setup a sane dev environment to work against.
So while our use case is a bit nuanced, this can generally apply to any development house that builds APIs for 3rd party consumption: what are some best practices when designing a development pipeline and environment when you're building APIs to be consumed by the outside world?
At our office, all our devs are behind a firewall, so code in progress can't be hit by the outside world, in our case the CRM provider. We could poke holes in the firewall, but that's less than ideal from a security surface area standpoint, especially if the number of devs who need to be in a DMZ-like area is high. We are currently trying a single dev machine in the DMZ and remoting into it as needed to do dev work, but that has created a resource scarcity issue if multiple devs need the box, not to mention the risk of conflicting changes (e.g. different branches).
We've considered just mocking/faking incoming requests by building fake clients for these services, but that's a pretty major overhead in building out feature sets (though it does by nature reinforce a testability of our APIs). This also doesn't obviate the fact that sometimes we really do need to diagnose/debug issues coming from the real client itself, not some faked request payload.
What have others done in these types of scenarios? In this day and age of mashups, there have to be a lot of folks out there with experience developing APIs--what's worked (and what hasn't worked so well) for the folks out there?
A: In the occasions when this has been relevant to me (which, truth be told, is not often) we have tended to do a combination of hosting a dev copy of the solution in-house and mocking what we can't host.
I personally think that the more you can host on individual dev boxes the better-- if your devs' PCs are powerful enough to have the entire thing running plus whatever else they need to develop, then they should be doing this. It allows them to have tonnes of flexibility to develop without worrying about other people.
A: For dev, it would make sense to use mock objects and write good unit tests that define the task at hand. It would help to ensure that the developers understand the business requirements. The mock libraries are very sophisticated and help solve this problem.
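As a rough sketch of that idea applied to the lead-identification example from the question (every name here is hypothetical; substitute your real service contract):
// A hand-rolled mock of the lead-lookup web service, so business logic
// can be developed and unit tested without any call through the firewall.
interface LeadLookupService {
    String findLeadGuid(String email, String company, String firstName, String lastName);
}

class MockLeadLookupService implements LeadLookupService {
    public String findLeadGuid(String email, String company, String firstName, String lastName) {
        // Always "identifies" the lead with a canned GUID; a richer mock
        // could key canned responses off the email address.
        return "00000000-0000-0000-0000-000000000042";
    }
}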
Then perhaps a continuous build process that moves the code to the dev box in the DMZ. A robust QA process would make sense plus general UAT testing.
Also, for general debugging, you again need to have access to the machine in the DMZ that you remote into.
This is probably an "ideal" situation, but you did ask for best practices :).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Access files from network share in c# web app I have a web application that needs to read (and possibly write) files from a network share. I was wondering what the best way to do this would be?
I can't give the network service or aspnet accounts access to the network share. I could possibly use impersonation.
The network share and the web application are both hosted on the same domain and I can create a new user on the domain specifically for this purpose however I'm not quite sure how to join the dots between creating the filestream and specifying the credentials to use in the web application.
Unfortunately the drive isn't mapped as a network drive on the machine, it's only available to me as a network share so unfortunately I can't make a transparent call.
There is one problem I can think of with impersonation... I can only impersonate one user per application domain I think but I'm happy to be corrected. I may need to write this file to several different shares which means I may have to impersonate several users.
I like the idea of creating a token... if I can do that I'll be able to ask the use up front for their credentials and then dynamically apply the security and give them meaningful error messages if access is denied... I'm off to play but I'll be back with an update.
A: Given everyone already has domain accounts. Try IIS integrated authentication. You will get an ugly logon box off network but your creds should pass down to the file share.
@lomaxx
Are you saying that only you have perms to the share, or that you manually mapped it to a drive letter? If the latter, you can use a UNC path \\host\share the same way you would use c:\shared_folder.
Random idea: would it be a burden to mirror the share to a local folder on the host? I hear ROBOCOPY is pretty handy.
Another idea: run IIS on your target share; you can read via HTTP, and if you need to write, investigate WebDAV.
A: I've had no problems connecting to network shares transparently as if they were local drives. The only issue you may have is what you mentioned: having the aspnet account gain access to the share. Impersonation is probably the best way to do this.
You should be able to use any filestream objects to access the network share as long as it has a drive letter on the server machine.
A: Impersonation worked well for me in this scenario. We had a wizard that uploaded a zip file through the website, but we load-balanced the site, so we needed to set up a way to save the file on all the machines.
There are many different ways to do it. We decided to make all requests to run under the user we setup and just added the web.config entry and setup the security permissions on the folders for the user. This kb article explains the setup very well.
A: You do have some options, and one of those is impersonation, as you mentioned. However, another one I like to use and have used in the past is a trusted service call. Let's assume for a moment that it's always much safer to limit access through IIS to ensure there are as few holes as possible. With that, let's go down this road.
Build a WCF service that has a couple of entry points and the interface might look like this.
public interface IDocumentService
{
    string BuildTrustedRelationship(string privateKey);
    byte[] ReadFile(string token, string fileName);
    void WriteFile(string token, string fileName, byte[] file);
}
Now, you can host this service via a Windows service very easily and so now all you need to do is on Application_start build the relationship with the service to get your token and you're off to the races. The other nice thing here is that this service is internal, trusted, and I've even hosted it on the file server before and so it's much easier to grant permissions to this operation.
A: If you can create a new AD user, I think the simplest solution is to have the Application Pool run under that AD account's authority, which would mean your application is now running as the AD user. You would need to add the AD user to the IIS Worker Process Group on the machine running your application. Then as long as your AD user has write permissions on the network share, you should be able to use the UNC path in your file operations.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Security For Voting Application I have a project to build a voting desktop application for a class in Java. While security isn't the focus of the project, I would like to be as realistic as I can. What are some of the primary tools to integrate security into a Java application.
Edit: I'm not primarily worried about physical security; we are simply building an application, not a whole system. I want to ensure votes are recorded correctly and are not able to be changed or read by someone else.
A: It really depends on what kind of security you are looking to integrate. Do you want security to ensure that the user isn't running any debuggers or such to flip bits in your application to change the votes? Do you want to ensure that the user doesn't install logging software to keep track of who voted for who? Do you want to ensure that the person who is supposed to be voting is actually voting? Security is a very broad subject, and it's hard to give an answer without knowing what exactly you are looking for.
A: My company recently built an app with very strong security. Maybe it helps.
Our app
It was a Java EE app.
The architecture is the following:
*
*Client computer has a cryptography package.
*Dirty server that stores encrypted user input and output.
*Clean server that is not accessible from outside and stores keys and decrypted data.
Users are issued cryptography cards (you may want to use something less safe - e.g. PGP), and are required by the JSP pages to encrypt all input with them. The page contains a component that connects to the cryptography app, asks the user for a key passphrase, encrypts the input with the server's public key and signs it with the user's private key, then submits.
Data is stored on the external server, then transferred to the internal server, where it is decrypted and the signature is verified; the data is then processed and re-encrypted, sent back to the dirty server, and from there the user may get it.
So even if someone cracked the dirty server (even get hold of database) he would get mostly useless data.
Your app
I'd send encrypted and signed votes to the server. It would assert two things:
*
*You know who sent the vote.
*No one will be able to know what the vote was.
Then get the data from the server, assert that everyone voted at most once, count the votes, voila!
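A minimal sketch of the sign/verify half in Java (the algorithm choice and names are illustrative; encrypting the ballot for secrecy, e.g. with javax.crypto.Cipher and the server's public key, is left out for brevity):
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class VoteSigner {
    public static void main(String[] args) throws Exception {
        // Hypothetical setup: in a real system each voter's key pair would
        // be issued ahead of time, not generated on the spot like this.
        KeyPair voterKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] vote = "candidate-42".getBytes("UTF-8");

        // The voter signs the ballot with their private key...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(voterKeys.getPrivate());
        signer.update(vote);
        byte[] signature = signer.sign();

        // ...and the server verifies it with the matching public key,
        // which proves who sent the vote and that it wasn't altered.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(voterKeys.getPublic());
        verifier.update(vote);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}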
A: If you're looking for a "higher-level" explanation of this stuff (as in, not code), Applied Cryptography has quite a few relevant examples (and I believe a section on "secure elections" that covers some voting strategies).
A: I believe that physical security is more important for a voting booth system than, you know, code security.
These machines by their very nature shouldn't be connected to any kind of public network, especially not the internet. But having good physical security to prevent any sort of physical tampering is very important.
A: I'm not primarily worried about physical security; we are simply building an application, not a whole system. I want to ensure votes are recorded correctly and are not able to be changed or read by someone else.
A: Putting to one side questions of protecting against physical tampering (e.g. of the underlying database), since you've stipulated that physical security is not the present concern...
I think the primary consideration is how to ensure that a given voter votes only once. At a paper poll, each registered voter is restricted to a particular booth/location and verification is done by name+SSN and a signature.
You might need a high resolution digital signature capture and therefore a touchscreen capture peripheral or a touch screen terminal. A more sophisticated approach would be a biometric scanner, but that would require government records of thumb/finger prints or retinal scan - I can already see the privacy advocates lining up at the lawyer's offices.
Another approach would be for the voter "registrar office" to issue digital keys to each voter prior to the election - a (relatively) short (cryptographically strong) random alpha/numeric key that is entered with the voter's name and/or SSN into the application. Knowledge of that key is required for that particular voter in that particular election. These keys would be issued by post in tamper-evident envelopes, like those used by banks for postal confirmation of wire transfers and delivery of PIN numbers. The key must include checksum data so that the user can have the entry of it immediately validated and it should be in groups of 4, so something like XXXX-XXXX-XXXX-CCCC.
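A rough illustration of generating such a key; the alphabet and the simple checksum scheme here are invented for the example (a real scheme might use something stronger, like a truncated HMAC, for the check group):
import java.security.SecureRandom;

public class KeyIssuer {
    // Alphabet avoids easily-confused characters (0/O, 1/I)
    private static final String ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newVoterKey() {
        StringBuilder key = new StringBuilder();
        int sum = 0;
        for (int group = 0; group < 3; group++) {
            for (int i = 0; i < 4; i++) {
                char c = ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length()));
                sum += c;
                key.append(c);
            }
            key.append('-');
        }
        // Final group is a simple checksum over the 12 random characters,
        // so typos can be caught immediately at entry time.
        for (int i = 0; i < 4; i++) {
            key.append(ALPHABET.charAt(sum % ALPHABET.length()));
            sum /= ALPHABET.length();
        }
        return key.toString();
    }

    public static void main(String[] args) {
        System.out.println(newVoterKey()); // e.g. K7QX-M2PA-WZ4R-XSBA
    }
}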
Any other "secret" knowledge, such as SSN, is likely too easily discovered for a large percentage of the population (though we don't seem to be able to make credit-granting organizations understand this), and therefore is unsuitable for authentication.
Vote counting can be done by generating a public key encrypted data file which is transferred (by sneaker net?) to the central system. This must include the "voting booth" identity information and a record for each voter including their SSN and the digital key (or signature, or biometric data). Votes with invalid keys are eliminated. Multiple votes with the same key and same votes are treated as a single vote for that candidate. Multiple votes with the same key and different votes are flagged for fraud investigation (with the constituent contacted by phone, issued a new key, and directed to revote).
A: Your problem is that you need to identify the user reliably, so that you can prevent them from re-voting and accessing each others votes.
This is not any different from any other desktop application that requires authentication (and potentially authorization). If your voters are a closed group on a network with user accounts, you could integrate with the directory and require users to log in.
If voters do not have network user accounts, this is where it gets interesting. Each user will still need to authenticate with the application. You could generate accounts with passwords in the application and distribute this information securely prior to voting. Your application could ask users to select a password when they access the application for the first time.
Without knowing the specifics, it is hard give a more specific answer.
A: You are aware that electronic voting is an unsolved research problem? Large-scale fraud should take a large effort.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SQL1159 Initialization error with DB2 .NET Data Provider, reason code 7, tokens 9.5.0.DEF.2, SOFTWARE\IBM\DB2\InstalledCopies I am trying to get the DB2 data provider from a 32-bit .Net application to connect to DB2 running as a 32-bit application on Vista 64 (is that confusing enough yet)? Unfortunately, I am getting the following error:
SQL1159 Initialization error with DB2 .NET Data Provider, reason code 7, tokens 9.5.0.DEF.2, SOFTWARE\IBM\DB2\InstalledCopies
There are several IBM forum posts mentioning it, but little useful guidance. Has anyone experienced this before? Or do you have better ideas for fixing it?
A: Are you required to have it run as x86? I had similar issues with web apps under Visual Studio's dev web server (which is x86), but switching over to IIS (x64) worked for me. Since I was deploying to IIS x64, I called it a day at that point.
I tried tracing with Filemon and Regmon, but didn't get any denied or missing keys errors. If I were to look again, I'd check HKLM\Software\WOW6432Node, guessing that the installer writes to the x64 HKLM\Software node, but not the x86 one.
A: I vaguely remember having a similar-sounding problem with the DB2 for AS/400 OLE DB driver when trying to set up a linked server from SQL 2005 to the AS/400. It was a permissions issue, and I eventually found that only SQL Server accounts (not Windows) could use the linked server because (I think) the driver was loading using the credentials of the SQL service instead of impersonated ones. If it works when "run as" admin then it's gotta be permissions.
A: I assume you have seen the writeup of SQL1159 in the DB2 Reference Guide?
Unfortunately for you, the reason codes stop at 6 and don't continue to 7. It does say:
User response: There was a problem with your DB2 installation. If this is the first time DB2 was installed on this computer, review the install logs for any possible errors and run a repair of DB2 from the Add/Remove Programs control panel applet. The default location of the installation logs is the My Documents/DB2LOG folder of the user that performed the installation. If this does not resolve the issue please contact IBM Support and provide the reason code associated with this message along with any installation logs.
So I guess try to reinstall it and if the problem continues you'll have to contact IBM.
Sorry, I know that's not much help.
A: I uninstalled the previous 32-bit version, reinstalled as 64-bit, and now I get a completely different error. It's mentioned as requiring FP2 to fix, but since I'm using Express-C, I can't install the fixpack (IBM doesn't provide fixpacks for free DB2 products). Anyway, thanks for the help. At least I can come closer to connecting now. :)
A: I encountered this error on a Windows 2003 x86 server as well. Originally my problem was
Unable to find the requested .net
framework data provider. it may not be
installed.
which led to comments that c:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config was missing the necessary entries for the DbProviderFactories section. And indeed, there were no IBM DB2 entries there. When I manually added an entry, I then encountered this error of yours, suggesting that there is more to it than just editing machine.config.
Eventually I uninstalled the IBM DB2 driver set, rebooted the system, reinstalled it, and got it initializing connections properly.
A: Just as a quick note...
@Micheal: the link you had for SQL1159 is to the Version 9.1 docs
The Version 9.5 documentation goes up to reason code 9
http://publib.boulder.ibm.com/infocenter/db2luw/v9r5/topic/com.ibm.db2.luw.messages.sql.doc/doc/msql01159n.html
Unfortunately, I think there is a 10th reason code that is undocumented there but it is in a developerWorks topic
http://www.ibm.com/developerworks/wikis/display/DB2/DB2+and+.NET+FAQ#DB2and.NETFAQ-WhatisSQL1159InitializationError%3F
A: I had the same problem with DB2 .net provider.
If you have 64-bit Windows then download and install
IBM Data Server Runtime Client (Windows AMD 64) Version 9.5
from
_https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?lang=en_US&source=swg-idsrc11&S_TACT=appddnet&S_CMP=ibm_im
If you run your program you would get the following exception
Unhandled Exception: System.OverflowException: Arithmetic operation resulted in an overflow.
   at IBM.Data.DB2.DB2ConnPool.Open(DB2Connection connection, String szConnectionStringIn, DB2ConnSettings& ppSettings, Object& ppConn)
   at IBM.Data.DB2.DB2Connection.Open()
Download and install the fix for your db2 version from
http://www-01.ibm.com/support/docview.wss?uid=swg1IZ09579
this would fix the problem.
A: Install DB2 Express-C for Windows x64, version 9.7.1, and it should work.
A: I had a similar issue; my machine is 64-bit. I installed both the 32-bit and 64-bit DB2 runtime clients, set the platform target to x86 in my project, and it worked perfectly for me. I was able to run the application on other 64/32-bit machines; they just need to install either the 32-bit or the 64-bit runtime client from DB2, depending on the machine's OS.
A: Yes. This should happen on your Windows 7 and not on Windows XP. The solution is:
*
*right click the project in solution explorer
*Properties
*Compile tab (left side)
*Scroll down to see Advanced Compile option button
*Change the Target CPU drop-down to x86.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Rails requires RubyGems >= 0.9.4. Please install RubyGems I'm deploying to Ubuntu slice on slicehost, using Rails 2.1.0 (from gem)
If I try mongrel_rails start or script/server I get this error:
Rails requires RubyGems >= 0.9.4. Please install RubyGems
When I type gem -v I have version 1.2.0 installed. Any quick tips on what to look at to fix?
A: Have you tried reinstalling RubyGems? I had a pretty similar error message until I reinstalled, and for some reason it installed into a different directory and then the problem went away.
A: Just finally found this answer... I was missing a gem, and was thrown off by a bad error message from Rails...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Converting bool to text in C++ Maybe this is a dumb question, but is there any way to convert a boolean value to a string such that 1 turns to "true" and 0 turns to "false"? I could just use an if statement, but it would be nice to know if there is a way to do that with the language or standard libraries. Plus, I'm a pedant. :)
A: We're talking about C++ right? Why on earth are we still using macros!?
C++ inline functions give you the same speed as a macro, with the added benefit of type-safety and parameter evaluation (which avoids the issue that Rodney and dwj mentioned.
inline const char * const BoolToString(bool b)
{
return b ? "true" : "false";
}
Aside from that I have a few other gripes, particularly with the accepted answer :)
// this is used in C, not C++. if you want to use printf, instead include <cstdio>
//#include <stdio.h>

// instead you should use the iostream libs
#include <iostream>

// not only is this a C include, it's totally unnecessary!
//#include <stdarg.h>

// Macros - not type-safe, has side-effects. Use inline functions instead
//#define BOOL_STR(b) (b?"true":"false")
inline const char * const BoolToString(bool b)
{
    return b ? "true" : "false";
}

int main (int argc, char const *argv[]) {
    bool alpha = true;

    // printf? that's C, not C++
    //printf( BOOL_STR(alpha) );

    // use the iostream functionality
    std::cout << BoolToString(alpha);
    return 0;
}
Cheers :)
@DrPizza: Include a whole boost lib for the sake of a function this simple? You've got to be kidding?
A: C++20 std::format("{}"
https://en.cppreference.com/w/cpp/utility/format/formatter#Standard_format_specification claims that the default output format will be the string by default:
#include <format>
auto s6 = std::format("{:6}", true); // value of s6 is "true "
and:
The available bool presentation types are:
*
*none, s: Copies textual representation (true or false, or the locale-specific form) to the output.
*b, B, c, d, o, x, X: Uses integer presentation types with the value static_cast<unsigned char>(value).
The existing fmt library implemented it before it got official support: https://github.com/fmtlib/fmt. Install on Ubuntu 22.04:
sudo apt install libfmt-dev
Modify source to replace:
*
*<format> with <fmt/core.h>
*std::format to fmt::format
main.cpp
#include <string>
#include <iostream>
#include <fmt/core.h>
int main() {
std::string message = fmt::format("The {} answer is {}.", true, false);
std::cout << message << std::endl;
}
and compile and run with:
g++ -std=c++11 -o main.out main.cpp -lfmt
./main.out
Output:
The true answer is false.
Related: std::string formatting like sprintf
A: If you decide to use macros (or are using C on a future project) you should add parenthesis around the 'b' in the macro expansion (I don't have enough points yet to edit other people's content):
#define BOOL_STR(b) ((b)?"true":"false")
This is a defensive programming technique that protects against hidden order-of-operations errors; i.e., how does this evaluate for all compilers?
1 == 2 ? "true" : "false"
compared to
(1 == 2) ? "true" : "false"
A: I use a ternary in a printf like this:
printf("%s\n", b?"true":"false");
If you macro it:
#define B2S(b) ((b)?"true":"false")
then you need to make sure whatever you pass in as 'b' doesn't have any side effects. And don't forget the parentheses around the 'b', or you could get compile errors.
A: Without dragging ostream into it:
#include <array>

constexpr char const* to_c_str(bool b) {
    return std::array<char const*, 2>{"false", "true "}[b];
}
A: C++ has proper strings so you might as well use them. They're in the standard header string. #include <string> to use them. No more strcat/strcpy buffer overruns; no more missing null terminators; no more messy manual memory management; proper counted strings with proper value semantics.
C++ has the ability to convert bools into human-readable representations too. We saw hints at it earlier with the iostream examples, but they're a bit limited because they can only blast the text to the console (or with fstreams, a file). Fortunately, the designers of C++ weren't complete idiots; we also have iostreams that are backed not by the console or a file, but by an automatically managed string buffer. They're called stringstreams. #include <sstream> to get them. Then we can say:
std::string bool_as_text(bool b)
{
std::stringstream converter;
converter << std::boolalpha << b; // flag boolalpha calls converter.setf(std::ios_base::boolalpha)
return converter.str();
}
Of course, we don't really want to type all that. Fortunately, C++ also has a convenient third-party library named Boost that can help us out here. Boost has a nice function called lexical_cast. We can use it thus:
boost::lexical_cast<std::string>(my_bool)
Now, it's true to say that this is higher overhead than some macro; stringstreams deal with locales which you might not care about, and create a dynamic string (with memory allocation) whereas the macro can yield a literal string, which avoids that. But on the flip side, the stringstream method can be used for a great many conversions between printable and internal representations. You can run 'em backwards; boost::lexical_cast<bool>("1") does the right thing, for example (note that plain stream extraction of bool expects 1/0 rather than true/false). You can use them with numbers and in fact any type with the right formatted I/O operators. So they're quite versatile and useful.
And if after all this your profiling and benchmarking reveals that the lexical_casts are an unacceptable bottleneck, that's when you should consider doing some macro horror.
A: With C++11 you might use a lambda to get a slightly more compact code and in place usage:
bool to_convert{true};
auto bool_to_string = [](bool b) -> std::string {
return b ? "true" : "false";
};
std::string str{"string to print -> "};
std::cout << str + bool_to_string(to_convert);
Prints:
string to print -> true
A: How about using the C++ language itself?
bool t = true;
bool f = false;
std::cout << std::noboolalpha << t << " == " << std::boolalpha << t << std::endl;
std::cout << std::noboolalpha << f << " == " << std::boolalpha << f << std::endl;
UPDATE:
If you want more than 4 lines of code without any console output, please go to cppreference.com's page talking about std::boolalpha and std::noboolalpha which shows you the console output and explains more about the API.
Additionally, using std::boolalpha will modify the global state of std::cout; you may want to restore the original behavior afterwards (one way to save and restore the stream state is sketched below).
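A minimal sketch of saving and restoring the stream's format flags (standard <iostream> API):
#include <iostream>

int main() {
    std::ios_base::fmtflags saved = std::cout.flags(); // remember current state
    std::cout << std::boolalpha << true << '\n';       // prints "true"
    std::cout.flags(saved);                            // restore previous state
    std::cout << true << '\n';                         // prints "1" again
}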
A: This should be fine:
const char* bool_cast(const bool b) {
return b ? "true" : "false";
}
But, if you want to do it more C++-ish:
#include <iostream>
#include <string>
#include <sstream>
using namespace std;
string bool_cast(const bool b) {
ostringstream ss;
ss << boolalpha << b;
return ss.str();
}
int main() {
cout << bool_cast(true) << "\n";
cout << bool_cast(false) << "\n";
}
A: A really quick and clean solution, if you're only doing this once or don't want to change the global settings with bool alpha, is to use a ternary operator directly in the stream, like so:
bool myBool = true;
std::cout << "The state of myBool is: " << (myBool ? "true" : "false") << std::endl;
Ternaries are easy to learn. They're just an IF statement on a diet that can be dropped pretty much anywhere, and:
(myBool ? "true" : "false")
is pretty much this (sort of):
{
if(myBool){
return "true";
} else {
return "false";
}
}
You can find all kinds of fun uses for ternaries, including here, but if you're always using them to output a "true" "false" into the stream like this, you should just turn the boolalpha feature on, unless you have some reason not to:
std::cout << std::boolalpha;
somewhere at the top of your code to just turn the feature on globally, so you can just drop those sweet sweet booleans right into the stream and not worry about it.
But don't use it as a tag for one-off use, like this:
std::cout << "The state of myBool is: " << std::boolalpha << myBool << std::noboolalpha;
That's a lot of unnecessary function calls and wasted overhead for a single bool, when a simple ternary operator will do.
A: This post is old, but now you can use std::to_string to convert many types to a std::string.
http://en.cppreference.com/w/cpp/string/basic_string/to_string
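Note, though, that std::to_string has no bool overload, so a bool argument promotes to int and you get "1"/"0" rather than "true"/"false":
#include <iostream>
#include <string>

int main() {
    std::cout << std::to_string(true) << '\n'; // prints "1", not "true"
}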
A: Use boolalpha to print a bool as a string.
std::cout << std::boolalpha << b << std::endl;
std::cout << std::noboolalpha << b << std::endl;
C++ Reference
A: How about the simple:
constexpr char const* toString(bool b)
{
return b ? "true" : "false";
}
A: Try this macro. Anywhere you want the "true" or "false" to show up, just replace it with PRINTBOOL(var), where var is the bool you want the text for.
#define PRINTBOOL(x) ((x)?"true":"false")
A: As long as strings can be viewed directly as a char array it's going to be really hard to convince me that std::string represents strings as first class citizens in C++.
Besides, combining allocation and boundedness seems to be a bad idea to me anyways.
A: I agree that a macro might be the best fit. I just whipped up a test case (believe me I'm no good with C/C++ but this sounded fun):
#include <stdio.h>
#include <stdarg.h>
#define BOOL_STR(b) (b?"true":"false")
int main (int argc, char const *argv[]) {
bool alpha = true;
printf( BOOL_STR(alpha) );
return 0;
}
A: #include <iostream>
#include <string>
using namespace std;
string toBool(bool boolean)
{
string result;
if(boolean == true)
result = "true";
else
result = "false";
return result;
}
int main()
{
bool myBoolean = true; //Boolean
string booleanValue;
booleanValue = toBool(myBoolean);
cout << "bool: " << booleanValue << "\n";
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "114"
} |
Q: Is using an obfuscator enough to secure my JavaScript code? I'm working on building a development tool that is written in JavaScript.
This will not be an open source project and will be sold (hopefully) as a commercial product.
I'm looking for the best way to protect my investment. Is using an obfuscator (code mangler) enough to reasonably secure the code?
Are there other alternatives that I am not aware of?
(I'm not sure if obfuscator is the right word, it's one of the apps that takes your code and makes it very unreadable.)
A: I deeply disagree with most answers above.
It's true that every piece of software can be stolen despite obfuscation, but at least obfuscation makes it harder to extract and reuse individual parts of the software, and that is the point.
Maybe it's cheaper and less risky to use obfuscation than to leave the code open and fight in court after somebody steals the best parts of your software and becomes dangerous competition.
Unobfuscated code whispers:
*
*Come on, analyze me, reuse me. Maybe you could make a better software using me.
Obfuscated code says:
*
*Go away dude. It's cheaper to use your own ideas than trying to crack me.
A: You are going to be fighting a losing battle if you try to obfuscate your code in the hopes of someone not stealing it. You may stop the casual browser from getting at it, but someone dedicated would almost certainly be able to overcome any measure you use.
In the past I have seen people do several things:
*
*Paste a lot of whitespace at the top of the page with a message telling people that the code is unavailable, when in actuality you just need to scroll down a few pages to get at it.
*Running it through an encoder of some kind; this is only so useful, as it can just be run back through the decoder.
*Another method is to reduce variable names to one character and remove whitespace (this is also an efficiency thing).
There are many other methods.
In the end, your efforts are only likely to stop the casual browser from seeing your stuff. If someone dedicated comes along then there is not much you will be able to do. You will have to live with this.
My advice would be to make a really awesome product that attracts the most people and beat off any competition by having the best product/service/community and not the most obfuscated code.
A: I'm going to tell you a secret. Once you understand it, you'll feel a lot better about the fact that Javascript obfuscation is only really useful for saving bandwidth when sending scripts over the wire.
Your source-code is not worth stealing.
I know this comes as a shock to the ego, but I can say this confidently without ever having seen a line of code you've written because outside the very few realms of development where serious magic happens, it's true of all source-code.
Say, tomorrow, someone dumped a pile of DVDs on your doorstep containing the source code for Windows Vista. What would you be able to do with it? Sure, you could compile it and give away copies, but that's just one step more effort than copying the retail version. You could painstakingly find and remove the license-checking code, but that's something some bright kid has already done to the binaries. Replace the logo and graphics, pretend you wrote it yourself and market it as "Vicrosoft Mista"? You'll get caught.
You could spend an enormous amount of time reading the code, trying to understand it and truly "stealing the intellectual property" that Microsoft invested in developing the product. But you'd be disappointed. You'd find the code was a long series of mundane decisions, made one after the other. Some would be smarter than you could think of. Some would leave you shaking your head wondering what kind of monkeys they're hiring over there. Most would just make you shrug and say "yeah, that's how you do that."
In the process you'll learn a lot about writing operating systems, but that's not going to hurt Microsoft.
Replace "Vista" with "Leopard" and the above paragraphs don't change one bit. It's not Microsoft, it's software. Half the people on this site could probably develop a Stack Overflow clone, with or without looking at the source of this site. They just haven't. The source-code of Firefox and WebKit are out there for anyone to read. Now go write your own browser from scratch. See you in a few years.
Software development is an investment of time. It's utter hubris to imagine that what you're doing is so special that nobody could clone it without looking at your source, or even that it would make their job that much easier without an actionable (and easily detectable) amount of cut and paste.
A: You're always faced with the fact that any user that comes to your webpage will download some working version of your Javascript source. They will have the source code. Obfuscating it may make it very difficult to be reused by someone with the intent to steal your hard work. However, in many cases someone can even reuse the obfuscated source! Or in the worst case they can unravel it by hand and eventually comprehend it.
An example of a situation like yours might be Google Maps. The Javascript source is clearly obfuscated. However, for really private/sensitive logic they push the data to the server and have the server process that information using XMLHttpRequests (AJAX). With this design you have the important parts on the server side, much more tightly controlled.
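A sketch of that split (the endpoint name here is made up for illustration): the shipped JavaScript stays a thin caller, while the interesting logic runs server-side.
function scoreRoute(from, to, callback) {
    // Only the request/response plumbing ships to the browser;
    // the actual routing logic lives on the server.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/score-route', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(JSON.parse(xhr.responseText));
        }
    };
    xhr.send(JSON.stringify({ from: from, to: to }));
}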
A: That's probably about the best you can do. Just be aware that anybody with enough dedication, can probably de-obfuscate your program. Just make sure you're comfortable with that before embarking on your project. I think the biggest problem with this would be to control who's using it on their site. If somebody goes to a site with your code on it, and likes what it does, it doesn't matter that they don't understand what the code does, or can't read it, when they can just copy the code, and use it on their own site.
A: A obfuscator won't help you at all if someone wants to figure out the code. The code still exists on the client machine and they can grab a copy of it and study it at their leisure.
There is simply no way to hide code written in Javascript since the source code has to be handed to the browser for execution.
If you want to hide your code, you have the following options:
1) Use an environment where compiled code (not source) is downloaded to the client, e.g. Flash or Silverlight. I'm not even sure that's foolproof, but it's certainly much better than Javascript.
2) Have a back end on the server side that does the work and a thin client that just makes requests to the server.
A: I'd say yes, it's enough if you also make sure than you compress the code as well using a tool like Dean Edward's Packer or similar. If you think about what is possible with tools like .NET Reflector in terms of reverse engineering compiled code / IL in .NET, you realize that there's nothing you can do to completely protect your investment.
On the other hand, remember that folks who release their source code also seem to make do quite nicely anyway - it's their experience that people want more than their intellectual property.
A: A code obfuscator is enough for something that needs minimal protection, but I think it will definitely not be enough to really protect you. If you are patient you can really de-mangle the whole thing... and I'm sure there are programs to do it for you.
That being said, you can't stop anyone from pirating your stuff, because they'll eventually break any kind of protection you create anyway. And it is especially easy in a scripted language where the code is not compiled.
If you are using some other language, maybe Java or .NET, you can try doing things like "calling home" to verify that a license number matches a given URL. That works if your app is some sort of online app that is going to be connected all the time. But having access to the source, people can easily bypass that part.
In short, JavaScript is a poor choice for what you are doing.
A step up from what you are doing is maybe using a webservice backend to get your data. Let the webservice handle the authentication/verification process. It requires a bit of work to make sure it is bulletproof, but it might work.
A: If this is for a website, which by its very nature puts viewing of its code one menu click away, is there really any reason to hide anything? If someone wants to steal your code they will most likely go through the effort of making even the most mangled code human readable. Look at commercial websites, they don't obfuscate their code, and no one goes out and steals code from the google apps. If you are really worried about code theft, I would argue for writing it in some other compiled language. (which does of course destroy the whole webapp thing...) Even then, you aren't totally safe, there are many de-compilers out there.
So really, there is no way to do what you want in the face of anyone with sufficient motivation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: Guidelines for writing a framework I'm faced with writing a framework to simplify working with a large and complex object library (ArcObjects). What guidelines would you suggest for creating a framework of this kind? Are static methods preferred? How do you handle things like logging? How do you future proof your framework code from changes that a vendor might introduce?
I think of all of the various wrappers and helpers I've seen for NHibernate, log4net, and code I've read from projects like NLog and NetTopologySuite and I see so many good approaches, but honestly I'm at a loss where to start.
BTW - I'm working in C# 3.5 but it's more about recommended approach rather than language.
A: Brad Abrams' Framework Design Guidelines book is all about this. Might be worth a look.
A: Try to write code to be more flexible. For example, if you have a method that accepts an array as a parameter, could you accept an IEnumerable or IList instead?
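A minimal sketch of that idea (the helper class and its name are hypothetical):
using System.Collections.Generic;

public static class NameUtils
{
    // Taking IEnumerable<string> instead of string[] lets callers pass
    // arrays, List<string>, LINQ queries, etc. without copying first.
    public static int CountLongNames(IEnumerable<string> names)
    {
        int count = 0;
        foreach (string name in names)
        {
            if (name.Length > 10)
            {
                count++;
            }
        }
        return count;
    }
}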
A: I think that being consistent is more important than which conventions you go with. As far as future-proofing yourself, that's a matter of the code that you're making a framework for. It's a lot easier to build on a brick house than a sand one.
A: Writing code for framework is absolutely very different from writing application code.
I have always consulted (and have others consult) the Design Guidelines for Class Library Developers when writing framework level code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How automated is too automated when it comes to deployment? I have CI, so our staging environment builds itself.
Should I have a script that not only builds production but does all the branching for it as well?
When you have one code base on two different urls with skinning, should they be required to build at once?
A: The only way to be too automated is if you are spending more time fighting with building or fixing automation scripts than you would just doing the job manually. As long as your automation scripts take less time and produce fewer errors than doing the job manually, then automation is great.
Scripts to build and branch for production are a great idea!
A: In my opinion anything the computer is capable of doing automatically it should do, because it can do it faster, easier and without thought from you. Within reason of course, but stuff like that can be very trivial to automate, so I've always been a proponent of automating that whole process.
Plus, it can be fun too!
A: I like to separate the build and deploy steps into two separate steps. The output of the build step should be a package that is placed in a repository or staging area. This package should be independent of the target environments.
The deploy step is responsible for configuring the target environment and installing the package.
The reasons I prefer this approach are:
*
*I have one package that can run in my development, test and production environments. That should cut down the arguments between QA and development.
*There may be different elements that need to be configured during deployment. Application server settings, database schemas, data loads, etc. that might not be as easy to do from the automated build script.
A: In my opinion it's only too automated if no one in your production support group can deploy an application manually in a pinch. Automated deployments really cut down on simple but common errors such as configuration mistakes. However, a manual deployment must always be an option.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Best GUI designer for eclipse? I'm looking for a good GUI designer for swing in eclipse. My preference is for a free/open-source plugin.
A: Window Builder Pro is a great GUI Designer for eclipse and is now offered for free by google.
A: 'Jigloo' is a very cool GUI designer. It is not free for commercial use however.
It auto-generates code and allows for custom editing of the code it creates.
http://www.cloudgarden.com/jigloo/
A: Another good GUI designer for Eclipse is Window Builder Pro. Like Jigloo, it's not free for commercial use.
It allows you to design user interfaces for Swing, SWT and even the Google Web Toolkit (GWT).
A: Here is a quite good but old comparison http://wiki.computerwoche.de/doku.php/programmierung/gui-builder_fuer_eclipse
Window Builder Pro is now free at Google Web Toolkit
A: Visual Editor is a good choice.
It generates very clean code, with no "layout" files beside of your sourcen using a simple but convenient pattern. It's very easy to patch the generated code and directly see the result.
There are some stability problems (some times, the preview window does not refresh anymore...), but nothing that a "clean Project" can't fix...
A: visualswing4eclipse looks good but the eclipse update URL didn't work for me (I raised ticket 137)
I was only able to install a previous version. Here's a url in case anyone wants it:
http://visualswing4eclipse.googlecode.com/svn-history/r858/trunk/org.dyno.visual.swing.site/site.xml
The plugin actually looks very good.
A: Old question, but have you checked out JFormDesigner?
A: GWT Designer is very good and allows for rapid development of GWT websites. (http://www.instantiations.com/gwtdesigner/)
A: Look at my plugin for developing swing application. It is as easy as that of netbeans':
http://code.google.com/p/visualswing4eclipse/
A: I use GWTDesigner http://www.instantiations.com/gwtdesigner/ which is not free but works well. Best of all, their customer support is top notch - very responsive.
A: Well, check out the Eclipse distro EasyEclipse. It has the Visual Editor project already added as a plugin, so there are no hassles with Eclipse version compatibility. Plus, the Eclipse help section has a tutorial on VE.
A: It's not free or open source. But you can give Intellij Idea's SWING GUI designer a try.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "125"
} |
Q: Compact Framework - how do I dynamically create type with no default constructor? I'm using the .NET CF 3.5. The type I want to create does not have a default constructor so I want to pass a string to an overloaded constructor. How do I do this?
Code:
Assembly a = Assembly.LoadFrom("my.dll");
Type t = a.GetType("type info here");
// All ok so far, assembly loads and I can get my type
string s = "Pass me to the constructor of Type t";
MyObj o = Activator.CreateInstance(t); // throws MissingMethodException
A: MyObj o = null;
Assembly a = Assembly.LoadFrom("my.dll");
Type t = a.GetType("type info here");
ConstructorInfo ctor = t.GetConstructor(new Type[] { typeof(string) });
if (ctor != null)
    o = (MyObj)ctor.Invoke(new object[] { s }); // Invoke returns object, so cast
A: Ok, here's a funky helper method to give you a flexible way to activate a type given an array of parameters:
static object GetInstanceFromParameters(Assembly a, string typeName, params object[] pars)
{
var t = a.GetType(typeName);
var c = t.GetConstructor(pars.Select(p => p.GetType()).ToArray());
if (c == null) return null;
return c.Invoke(pars);
}
And you call it like this:
Foo f = GetInstanceFromParameters(a, "SmartDeviceProject1.Foo", "hello", 17) as Foo;
So you pass the assembly and the name of the type as the first two parameters, and then all the constructor's parameters in order.
A: See if this works for you (untested):
Type t = a.GetType("type info here");
var ctors = t.GetConstructors();
string s = "Pass me to the ctor of t";
MyObj o = ctors[0].Invoke(new[] { s }) as MyObj;
If the type has more than one constructor then you may have to do some fancy footwork to find the one that accepts your string parameter.
Edit: Just tested the code, and it works.
Edit2: Chris' answer shows the fancy footwork I was talking about! ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to shift an array of bytes by 12-bits I want to shift the contents of an array of bytes by 12-bit to the left.
For example, starting with this array of type uint8_t shift[10]:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0xBC}
I'd like to shift it to the left by 12-bits resulting in:
{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xAB, 0xC0, 0x00}
A: Hurray for pointers!
This code works by looking ahead 12 bits for each byte and copying the proper bits forward. 12 bits is the bottom half (nybble) of the next byte and the top half of 2 bytes away.
unsigned char length = 10;
unsigned char data[10] = {0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x0A,0xBC};
unsigned char *shift = data;
while (shift < data+(length-2)) {
*shift = (*(shift+1)&0x0F)<<4 | (*(shift+2)&0xF0)>>4;
shift++;
}
*(data+length-2) = (*(data+length-1)&0x0F)<<4;
*(data+length-1) = 0x00;
Justin wrote:
@Mike, your solution works, but does not carry.
Well, I'd say a normal shift operation does just that (called overflow), and just lets the extra bits fall off the right or left. It's simple enough to carry if you wanted to - just save the 12 bits before you start to shift. Maybe you want a circular shift, to put the overflowed bits back at the bottom? Maybe you want to realloc the array and make it larger? Return the overflow to the caller? Return a boolean if non-zero data was overflowed? You'd have to define what carry means to you.
unsigned char overflow[2];
*overflow = (*data&0xF0)>>4;
*(overflow+1) = (*data&0x0F)<<4 | (*(data+1)&0xF0)>>4;
while (shift < data+(length-2)) {
/* normal shifting */
}
/* now would be the time to copy it back if you want to carry it somewhere */
*(data+length-2) = (*(data+length-1)&0x0F)<<4 | (*(overflow)&0x0F);
*(data+length-1) = *(overflow+1);
/* You could return a 16-bit carry int,
* but endian-ness makes that look weird
* if you care about the physical layout */
unsigned short carry = *(overflow+1)<<8 | *overflow;
A: Here's my solution, but even more importantly my approach to solving the problem.
I approached the problem by
*
*drawing the memory cells and drawing arrows from the destination to the source.
*made a table showing the above drawing.
*labeling each row in the table with the relative byte address.
This showed me the pattern:
*
*let iL be the low nybble (half byte) of a[i]
*let iH be the high nybble of a[i]
*iH = (i+1)L
*iL = (i+2)H
This pattern holds for all bytes.
Translating into C, this means:
a[i] = (iH << 4) OR iL
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4)
We now make three more observations:
*
*since we carry out the assignments left to right, we don't need to store any values in temporary variables.
*we will have a special case for the tail: all 12 bits at the end will be zero.
*we must avoid reading undefined memory past the array. since we never read more than a[i+2], this only affects the last two bytes
So, we
*
*handle the general case by looping for N-2 bytes and performing the general calculation above
*handle the next-to-last byte by setting iH = (i+1)L
*handle the last byte by setting it to 0
given a with length N, we get:
for (i = 0; i < N - 2; ++i) {
a[i] = ((a[i+1] & 0x0f) << 4) | ((a[i+2] & 0xf0) >> 4);
}
a[N-2] = (a[N-1] & 0x0f) << 4;
a[N-1] = 0;
And there you have it... the array is shifted left by 12 bits. It could easily be generalized to shifting N bits, noting that there will be M assignment statements where M = number of bits modulo 8, I believe.
The loop could be made more efficient on some machines by translating to pointers
for (p = a, p2=a+N-2; p != p2; ++p) {
*p = ((*(p+1) & 0x0f) << 4) | ((*(p+2) & 0xf0) >> 4);
}
and by using the largest integer data type supported by the CPU.
(I've just typed this in, so now would be a good time for somebody to review the code, especially since bit twiddling is notoriously easy to get wrong.)
A: Lets make it the best way to shift N bits in the array of 8 bit integers.
N - Total number of bits to shift
F = (N / 8) - Full 8 bit integers shifted
R = (N % 8) - Remaining bits that need to be shifted
I guess from here you would have to find the most optimal way to make use of this data to move around ints in an array. Generic algorithms would be to apply the full integer shifts by starting from the right of the array and moving each integer F indexes. Zero fill the newly empty spaces. Then finally perform an R bit shift on all of the indexes, again starting from the right.
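A sketch of that generic approach as an in-place helper (hypothetical function, assuming n < 8 * len; bits shifted off the left are discarded):
#include <stdint.h>
#include <stddef.h>

void shift_left_bits(uint8_t *a, size_t len, unsigned n)
{
    size_t F = n / 8;    /* whole bytes to shift */
    unsigned R = n % 8;  /* remaining bits */
    for (size_t i = 0; i < len; i++) {
        uint8_t hi = (i + F < len) ? a[i + F] : 0;
        uint8_t lo = (i + F + 1 < len) ? a[i + F + 1] : 0;
        a[i] = (uint8_t)((hi << R) | (R ? (lo >> (8 - R)) : 0));
    }
}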
In the case of shifting 0xAB left by R bits you can calculate the overflow by doing a bitwise AND, and the shift using the bitshift operator:
// 0xAB shifted left 4 bits is:
(0xAB & 0xF0) >> 4 // is the overflow (0x0A)
0xAB << 4 // is the shifted value (0xB0 once truncated to 8 bits)
Keep in mind that the 4-bit masks (0xF0 for the high nibble, 0x0F or just 0b00001111 for the low) are easy to calculate, dynamically build, or you can even use a simple static lookup table.
I hope that is generic enough. I'm not good with C/C++ at all so maybe someone can clean up my syntax or be more specific.
Bonus: If you're crafty with your C you might be able to fudge multiple array indexes into a single 16, 32, or even 64 bit integer and perform the shifts. But that is prabably not very portable and I would recommend against this. Just a possible optimization.
A: Here is a working solution, using temporary variables:
void shift_4bits_left(uint8_t* array, uint16_t size)
{
int i;
uint8_t shifted = 0x00;
uint8_t overflow = 0x00; // bits shifted off the left end are discarded
for (i = (size - 1); i >= 0; i--)
{
shifted = (array[i] << 4) | overflow;
overflow = (0xF0 & array[i]) >> 4;
array[i] = shifted;
}
}
Call this function 3 times for a 12-bit shift.
Mike's solution maybe faster, due to the use of temporary variables.
A: The 32 bit version... :-) Handles 1 <= count <= num_words
#include <stdio.h>
unsigned int array[] = {0x12345678,0x9abcdef0,0x12345678,0x9abcdef0,0x66666666};
int main(void) {
int count;
unsigned int *from, *to;
from = &array[0];
to = &array[0];
count = 5;
while (count-- > 1) {
    unsigned int w = *from++; /* read the current word before advancing; modifying and reading 'from' in one expression is undefined behavior */
    *to++ = (w << 12) | ((*from >> 20) & 0xfff);
}
*to = (*from<<12);
printf("%x\n", array[0]);
printf("%x\n", array[1]);
printf("%x\n", array[2]);
printf("%x\n", array[3]);
printf("%x\n", array[4]);
return 0;
}
A: @Joseph, notice that the variables are 8 bits wide, while the shift is 12 bits wide. Your solution works only for N <= variable size.
If you can assume your array's length is a multiple of 8 bytes you can cast the array into an array of uint64_t and then work on that. If it isn't a multiple of 8, you can work in 64-bit chunks on as much as you can and handle the remainder one byte at a time.
This may be a bit more coding, but I think it's more elegant in the end.
A: There are a couple of edge-cases which make this a neat problem:
*
*the input array might be empty
*the last and next-to-last bits need to be treated specially, because they have zero bits shifted into them
Here's a simple solution which loops over the array copying the low-order nibble of the next byte into its high-order nibble, and the high-order nibble of the next-next (+2) byte into its low-order nibble. To save dereferencing the look-ahead pointer twice, it maintains a two-element buffer with the "last" and "next" bytes:
void shl12(uint8_t *v, size_t length) {
if (length == 0) {
return; // nothing to do
}
if (length > 1) {
uint8_t last_byte, next_byte;
next_byte = *(v + 1);
for (size_t i = 0; i + 2 < length; i++, v++) {
last_byte = next_byte;
next_byte = *(v + 2);
*v = ((last_byte & 0x0f) << 4) | (((next_byte) & 0xf0) >> 4);
}
// the next-to-last byte is half-empty
*(v++) = (next_byte & 0x0f) << 4;
}
// the last byte is always empty
*v = 0;
}
Consider the boundary cases, which activate successively more parts of the function:
*
*When length is zero, we bail out without touching memory.
*When length is one, we set the one and only element to zero.
*When length is two, we set the high-order nibble of the first byte to the low-order nibble of the second byte (that is, bits 12-16), and the second byte to zero. We don't activate the loop.
*When length is greater than two we hit the loop, shuffling the bytes across the two-element buffer.
If efficiency is your goal, the answer probably depends largely on your machine's architecture. Typically you should maintain the two-element buffer, but handle a machine word (32/64 bit unsigned integer) at a time. If you're shifting a lot of data it will be worthwhile treating the first few bytes as a special case so that you can get your machine word pointers word-aligned. Most CPUs access memory more efficiently if the accesses fall on machine word boundaries. Of course, the trailing bytes have to be handled specially too so you don't touch memory past the end of the array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Stop the taskbar flashing I know I can programatically make the taskbar item for a particular window start flashing when something changes, but is there any way I can stop it from flashing either programatically after a certain period of time or at least is there a keyboard shortcur I can give to my users to somehow stop the flashing?
A: The FlashWindowEx function which controls the flashing takes a FLASHWINFO struct which has a uCount field to control how many times it flashes. Also, a possible value for the dwFlags field is FLASHW_STOP to cause the flashing to stop.
EDIT: Forgot this was a C#-tagged question ... so P/Invoke goodness found here.
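For reference, a minimal C# P/Invoke sketch for stopping the flash (struct layout per the Win32 FLASHWINFO documentation):
using System;
using System.Runtime.InteropServices;

internal static class TaskbarFlash
{
    [StructLayout(LayoutKind.Sequential)]
    private struct FLASHWINFO
    {
        public uint cbSize;
        public IntPtr hwnd;
        public uint dwFlags;
        public uint uCount;
        public uint dwTimeout;
    }

    private const uint FLASHW_STOP = 0; // stop flashing and restore the original state

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool FlashWindowEx(ref FLASHWINFO pwfi);

    public static void StopFlashing(IntPtr windowHandle)
    {
        var info = new FLASHWINFO
        {
            cbSize = (uint)Marshal.SizeOf(typeof(FLASHWINFO)),
            hwnd = windowHandle,
            dwFlags = FLASHW_STOP,
            uCount = 0,
            dwTimeout = 0
        };
        FlashWindowEx(ref info);
    }
}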
A: Instead of flashing the taskbar you can consider using the NotifyIcon. This will let you put something in the system tray (something else many consider evil because of the proliferation of apps that do this). Then you can pop up a balloon tip with any change that actually describes the change itself.
To use:
(1) Drag the NotifyIcon onto your form or create in your app NotifyIcon notify = new NotifyIcon();
(2) Set the icon property to the required image
(3) Control whether it is visible on the system tray using the Visible property
(4) Call ShowBalloonTip to show popup text (limited to 64 characters)
Either way, you should add an option to the program that allows the end user to turn this feature on/off based on their feelings about it all. I personally like the notify icon because the balloon text can say something like "Server went down"
A: @thomas -- Amazingly Microsoft's own Windows Vista User Experience Guidelines agree with you ...
While having a background window flash its taskbar button is better than having it automatically come to the top and steal input focus, flashing taskbar buttons are still very intrusive. It is hard for users to concentrate when a taskbar button is flashing, so you should assume that users will immediately stop what they are doing to make the flashing stop. Consequently, reserve taskbar flashing only for situations where immediate attention is required.
Of course who knows who actually follows those guidelines ... or who even reads them. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What is a MUST COVER in my Groovy presentation? I'm working on getting an Introduction to Groovy presentation ready for my local Java User's Group and I've pretty much got it together. What I'd like to see is what you all think I just have to cover.
Remember, this is an introductory presentation. Most of the people are experienced Java developers, but I'm pretty sure they have little to no Groovy knowledge. I won't poison the well by mentioning what I've already got down to cover as I want to see what the community has to offer.
What are the best things I can cover (in a 1 hour time frame) that will help me effectively communicate to these Java developers how useful Groovy could be to them?
p.s. I'll share my presentation here later for anyone interested.
As promised, now that my presentation has been presented, here it is.
A: I don't know anything about Groovy, so in a sense I'm qualified to answer this...
I would want you to:
*
*Tell me why I would want to use Scripting (in general) as opposed to Java-- what does it let me do quicker (as in development time), what does it make more readable. Give tantalising examples of ways I can use chunks of scripting in my mostly Java app. You want to make this relevant to Java devs moreso than tech-junkies.
*With that out of the way, why Groovy? Why not Ruby, Python or whatever (which are all runnable on the JVM).
*Don't show me syntax that Java can already do (if statements, loops etc) or if you do make it quick. It's as boring as hell to watch someone walk through language syntax 101 for 20min.
*
*For syntax that has a comparible feature in Java maybe show them side by side quickly.
*For syntax that is not in Java (closures etc) you can talk to them in a bit more detail.
*Remember those examples from the first point. Show me one, fully working (or at least looking like it is).
*At the end have question time. That is crazy important, and with that comes a burden on you to be a pseudo-guru :P.
I'm not sure about how the Java6 scripting support works but I'm fairly sure it can be made secure. I remember something about defining the API the script can use before it's run.
If this is the case then an example you could show would be some thick-client application (e.g. a music player) where users can write their own scripts with an API you provide them in Groovy which allows them to script their app in interesting and secure ways (e.g. creating custom columns in the playlist)
A: I'd go for:
*
*Closures
*Duck typing
*Builders (XML builder and slurper)
*GStrings
*Grails
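A couple of those in action (closures and the GDK's XML MarkupBuilder), as a quick sketch:
def names = ['Ann', 'Bob']
names.each { println it }    // a closure passed to each()

def writer = new StringWriter()
new groovy.xml.MarkupBuilder(writer).people {
    names.each { n -> person(name: n) }
}
println writer               // emits <people> with nested <person name='...'/> elements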
A: I'd mention the following things in addition to what has already been stated:
*
*GDK - extensions/additions to existing JDK classes
*Interaction between Groovy and Java code (basically a non-issue)
*Compiling Groovy code to Java .class files
*XML parsing and mechanisms for accessing document content
One thing I like doing with Groovy is implementing an interface defined in Java as a map from method names to closures. It's a cool thing you can do with Groovy, but probably well beyond an introductory presentation though.
A: Include an example of how making Java code more groovy takes away soooo much code. Wait for them to pick their jaws up off of the floor before continuing. Scott Davis has a simple example at the beginning of Groovy Recipes that takes 35 lines of Java or 3 lines of Groovy.
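The classic demo of that effect is file I/O; printing every line of a text file is a GDK one-liner (a sketch, file name made up):
new File('data.txt').eachLine { line -> println line }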
A: You should definitely show them how to create a quick Grails application. Two domain classes that are related. Build a basic CRUD app. Explain that tables are being created behind the scenes using GORM(Hibernate). Then explain that you can create a war file and deploy it as you would any other Java war file. You can also add Grails/Groovy to an existing Java/JSP project so it doesn't require a huge commitment or paradigm change.
Groovy/Grails is simply Ruby/Rails for Java people. I'd cover the plugins for Netbeans/Eclipse too. Groovy/Grails are just now getting full support in the major IDE's.
Finally, if you can find a good diagram that shows how Grails is built on top of Spring, Hibernate, Quartz, Sitemesh and Groovy, I think people will understand that there is a treasure chest waiting to be unlocked.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Writing a game for the Nintendo Wii I'd like to write a game for the Nintendo Wii. How do I go about obtaining an SDK and/or any other tools necessary for writing a game?
A: If you are a one-man team, then your only option is really WiiWare. At $2000 for the kit, you picked the right console. That's a tiny fraction of the cost of a 360 or PS3 dev kit.
You do have to have your own business. You also have to get your game rated by the ESRB which will put you back another $2500. Your game also has to be really good. In the end you could spend all the money and time and have Nintendo refuse to publish your game for any reason whatsoever.
A: A different approach... Flash.
You could develop a Flash game that is controlled only with the mouse. Put the game on the web so that it can be played on the Wii via the Wii's browser. The game might not be as exciting as a direct-to-Wii game, but you won't have to deal with things like development kits and modded Wii's either.
A: You would have to get in contact with Nintendo of America and obtain a developer kit from them. Be prepared to spend a wad of dough though.
Check this out:
http://www.nintendo.com/corp/developer.jsp
A: The Wii Remote and Wii Balance Board use bluetooth. You can pair them with your PC and write your own PC apps that interact with them (like this guy). If you want to make something that actually runs on the Wii, you can try finding some homebrew development help.
If you want to actually sell your software for Wii, you need:
*
*game development experience
*secure office facilities
*$2,000 - $10,000 for dev kit (WiiWare is cheapest)
The Nintendo Software Development Support Group
Authorized Developer Application
UPDATE: Also see the Wii U Developer Site. Nintendo now has a simple application for individual developers to makes games for the Wii U, giving you access to the SDK and dev-kits.
A: Yes the SDKs (and dev hardware) are expensive, and you must be an actual company with an actual office to get one.
A: The information in this post is dated. Today I set up an account with Nintendo as an individual using my full name as the company name. With this account I have access to the Nintendo Development Portal. There is some level of support provided. I should be able to develop an app. To expose my app on the Nintendo site will require that they have access to the code and full approval rights. I'm mostly in it for fun right now. But if you are looking for something interesting to do and have some ideas I would check it out.
https://developer.nintendo.com/
A: You could spend literally thousands of dollars on the dev kit or you could be a bad person and go look at the homebrew stuff. It is technically hacking though, so I only hypothetically recommend it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: How do I cast int to enum in C#? How do I cast an int to an enum in C#?
A: using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
namespace SamplePrograme
{
public class Program
{
public enum Suit : int
{
Spades = 0,
Hearts = 1,
Clubs = 2,
Diamonds = 3
}
public static void Main(string[] args)
{
//from string
Console.WriteLine((Suit) Enum.Parse(typeof(Suit), "Clubs"));
//from int
Console.WriteLine((Suit)1);
//From number you can also
Console.WriteLine((Suit)Enum.ToObject(typeof(Suit) ,1));
}
}
}
A: I prefer a short way using a nullable enum type variable. Note that the cast itself never produces null for an undefined value, so combine it with Enum.IsDefined:
var enumValue = (MyEnum?)enumInt;
if (!enumValue.HasValue || !Enum.IsDefined(typeof(MyEnum), enumValue.Value))
{
    throw new ArgumentException(nameof(enumValue));
}
A: You just do it like below:
int intToCast = 1;
TargetEnum f = (TargetEnum) intToCast ;
To make sure that you only cast the right values and that you can throw an exception otherwise:
int intToCast = 1;
if (Enum.IsDefined(typeof(TargetEnum), intToCast ))
{
TargetEnum target = (TargetEnum)intToCast ;
}
else
{
// Throw your exception.
}
Note that using IsDefined is costly, even more than just casting, so whether to use it depends on your implementation.
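If the check sits in a hot path, one hedged workaround (a sketch, not part of the answer above) is to cache the defined values once, so each lookup is a cheap HashSet hit instead of a reflection call:
using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumCache<T> where T : struct
{
    // Materialized once per enum type T.
    private static readonly HashSet<T> Defined =
        new HashSet<T>(Enum.GetValues(typeof(T)).Cast<T>());

    public static bool IsDefined(T value)
    {
        return Defined.Contains(value);
    }
}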
A: You can use an extension method.
public static class Extensions
{
public static T ToEnum<T>(this string data) where T : struct
    {
        // Only return the parsed value when it is actually defined on the enum.
        if (Enum.TryParse(data, true, out T enumVariable) &&
            Enum.IsDefined(typeof(T), enumVariable))
        {
            return enumVariable;
        }
        return default;
    }
public static T ToEnum<T>(this int data) where T : struct
{
return (T)Enum.ToObject(typeof(T), data);
}
}
Use it like the below code:
Enum:
public enum DaysOfWeeks
{
Monday = 1,
Tuesday = 2,
Wednesday = 3,
Thursday = 4,
Friday = 5,
Saturday = 6,
Sunday = 7,
}
Usage:
string Monday = "Monday";
int Wednesday = 3;
var Mon = Monday.ToEnum<DaysOfWeeks>();
var Wed = Wednesday.ToEnum<DaysOfWeeks>();
A: I am using this piece of code to cast int to my enum:
if (typeof(YourEnum).IsEnumDefined(valueToCast)) return (YourEnum)valueToCast;
else { // handle it here, if it's not defined }
I find it the best solution.
A: Below is a nice utility class for Enums
public static class EnumHelper
{
public static int[] ToIntArray<T>(T[] value)
{
int[] result = new int[value.Length];
for (int i = 0; i < value.Length; i++)
result[i] = Convert.ToInt32(value[i]);
return result;
}
public static T[] FromIntArray<T>(int[] value)
{
T[] result = new T[value.Length];
for (int i = 0; i < value.Length; i++)
result[i] = (T)Enum.ToObject(typeof(T),value[i]);
return result;
}
internal static T Parse<T>(string value, T defaultValue)
{
if (Enum.IsDefined(typeof(T), value))
return (T) Enum.Parse(typeof (T), value);
int num;
if(int.TryParse(value,out num))
{
if (Enum.IsDefined(typeof(T), num))
return (T)Enum.ToObject(typeof(T), num);
}
return defaultValue;
}
}
A: Simply cast the int to the enum:
public enum DaysOfWeeks
{
Monday = 1,
Tuesday = 2,
Wednesday = 3,
Thursday = 4,
Friday = 5,
Saturday = 6,
Sunday = 7,
}
var day= (DaysOfWeeks)5;
Console.WriteLine("Day is : {0}", day);
Console.ReadLine();
A: For numeric values, this is safer as it will return an object no matter what:
public static class EnumEx
{
static public bool TryConvert<T>(int value, out T result)
{
result = default(T);
bool success = Enum.IsDefined(typeof(T), value);
if (success)
{
result = (T)Enum.ToObject(typeof(T), value);
}
return success;
}
}
A: Sometimes you have a Type object for the MyEnum type, like:
var MyEnumType = typeof(MyEnum);
Then:
Enum.ToObject(typeof(MyEnum), 3)
A: If you're ready for the 4.0 .NET Framework, there's a new Enum.TryParse() function that's very useful and plays well with the [Flags] attribute. See Enum.TryParse Method (String, TEnum%)
A: I need two instructions:
YourEnum possibleEnum = (YourEnum)value; // There isn't any guarantee that it is part of the enum
if (Enum.IsDefined(typeof(YourEnum), possibleEnum))
{
// Value exists in YourEnum
}
A: From an int:
YourEnum foo = (YourEnum)yourInt;
From a string:
YourEnum foo = (YourEnum) Enum.Parse(typeof(YourEnum), yourString);
// The foo.ToString().Contains(",") check is necessary for
// enumerations marked with a [Flags] attribute.
if (!Enum.IsDefined(typeof(YourEnum), foo) && !foo.ToString().Contains(","))
{
throw new InvalidOperationException(
$"{yourString} is not an underlying value of the YourEnum enumeration."
);
}
From a number:
YourEnum foo = (YourEnum)Enum.ToObject(typeof(YourEnum), yourInt);
A: If you have an integer that acts as a bitmask and could represent one or more values in a [Flags] enumeration, you can use this code to parse the individual flag values into a list:
for (var flagIterator = 0; flagIterator < 32; flagIterator++)
{
// Determine the bit value (1,2,4,...,Int32.MinValue)
int bitValue = 1 << flagIterator;
// Check to see if the current flag exists in the bit mask
if ((intValue & bitValue) != 0)
{
// If the current flag exists in the enumeration, then we can add that value to the list
// if the enumeration has that flag defined
if (Enum.IsDefined(typeof(MyEnum), bitValue))
Console.WriteLine((MyEnum)bitValue);
}
}
Note that this assumes that the underlying type of the enum is a signed 32-bit integer. If it were a different numerical type, you'd have to change the hardcoded 32 to reflect the bits in that type (or programmatically derive it using Enum.GetUnderlyingType())
A: This is a flags-enumeration-aware safe convert method:
public static bool TryConvertToEnum<T>(this int instance, out T result)
where T: Enum
{
var enumType = typeof (T);
var success = Enum.IsDefined(enumType, instance);
if (success)
{
result = (T)Enum.ToObject(enumType, instance);
}
else
{
result = default(T);
}
return success;
}
A: Alternatively, use an extension method instead of a one-liner:
public static T ToEnum<T>(this string enumString)
{
return (T) Enum.Parse(typeof (T), enumString);
}
Usage:
Color colorEnum = "Red".ToEnum<Color>();
OR
string color = "Red";
var colorEnum = color.ToEnum<Color>();
A:
To convert a string to an ENUM or an int to an ENUM constant, we need to use the Enum.Parse function. Here is a YouTube video https://www.youtube.com/watch?v=4nhx4VwdRDk which actually demonstrates it with a string, and the same applies to an int.
The code goes as shown below where "red" is the string and "MyColors" is the color ENUM which has the color constants.
MyColors EnumColors = (MyColors)Enum.Parse(typeof(MyColors), "Red");
A: Slightly getting away from the original question, but I found an answer to Stack Overflow question Get int value from enum useful. Create a static class with public const int properties, allowing you to easily collect together a bunch of related int constants, and then not have to cast them to int when using them.
public static class Question
{
public static readonly int Role = 2;
public static readonly int ProjectFunding = 3;
public static readonly int TotalEmployee = 4;
public static readonly int NumberOfServers = 5;
public static readonly int TopBusinessConcern = 6;
}
Obviously, some of the enum type functionality will be lost, but for storing a bunch of database id constants, it seems like a pretty tidy solution.
A: The following is a slightly better extension method:
public static string ToEnumString<TEnum>(this int enumValue)
{
var enumString = enumValue.ToString();
if (Enum.IsDefined(typeof(TEnum), enumValue))
{
enumString = ((TEnum) Enum.ToObject(typeof (TEnum), enumValue)).ToString();
}
return enumString;
}
A: This parses integers or strings to a target enum with partial matching in .NET 4.0 using generics like in Tawani's utility class. I am using it to convert command-line switch variables which may be incomplete. Since an enum cannot be null, you should logically provide a default value. It can be called like this:
var result = EnumParser<MyEnum>.Parse(valueToParse, MyEnum.FirstValue);
Here's the code:
using System;
public class EnumParser<T> where T : struct
{
public static T Parse(int toParse, T defaultVal)
{
return Parse(toParse + "", defaultVal);
}
public static T Parse(string toParse, T defaultVal)
{
T enumVal = defaultVal;
if (defaultVal is Enum && !String.IsNullOrEmpty(toParse))
{
int index;
if (int.TryParse(toParse, out index))
{
Enum.TryParse(index + "", out enumVal);
}
else
{
if (!Enum.TryParse<T>(toParse + "", true, out enumVal))
{
MatchPartialName(toParse, ref enumVal);
}
}
}
return enumVal;
}
public static void MatchPartialName(string toParse, ref T enumVal)
{
foreach (string member in enumVal.GetType().GetEnumNames())
{
if (member.ToLower().Contains(toParse.ToLower()))
{
if (Enum.TryParse<T>(member + "", out enumVal))
{
break;
}
}
}
}
}
FYI: The question was about integers; nobody mentioned that Enum.TryParse() will also explicitly convert the string form of an integer.
A: I think to get a complete answer, people have to know how enums work internally in .NET.
How stuff works
An enum in .NET is a structure that maps a set of values (fields) to a basic type (the default is int). However, you can actually choose the integral type that your enum maps to:
public enum Foo : short
In this case the enum is mapped to the short data type, which means it will be stored in memory as a short and will behave as a short when you cast and use it.
If you look at it from a IL point of view, a (normal, int) enum looks like this:
.class public auto ansi serializable sealed BarFlag extends System.Enum
{
.custom instance void System.FlagsAttribute::.ctor()
.custom instance void ComVisibleAttribute::.ctor(bool) = { bool(true) }
.field public static literal valuetype BarFlag AllFlags = int32(0x3fff)
.field public static literal valuetype BarFlag Foo1 = int32(1)
.field public static literal valuetype BarFlag Foo2 = int32(0x2000)
// and so on for all flags or enum values
.field public specialname rtspecialname int32 value__
}
What should get your attention here is that the value__ is stored separately from the enum values. In the case of the enum Foo above, the type of value__ is int16. This basically means that you can store whatever you want in an enum, as long as the types match.
At this point I'd like to point out that System.Enum is a value type, which basically means that BarFlag will take up 4 bytes in memory and Foo will take up 2 -- e.g. the size of the underlying type (it's actually more complicated than that, but hey...).
The answer
So, if you have an integer that you want to map to an enum, the runtime only has to do 2 things: copy the 4 bytes and name it something else (the name of the enum). Copying is implicit because the data is stored as value type - this basically means that if you use unmanaged code, you can simply interchange enums and integers without copying data.
To make it safe, I think it's a best practice to know that the underlying types are the same or implicitly convertible and to ensure the enum values exist (they aren't checked by default!).
To see how this works, try the following code:
public enum MyEnum : int
{
Foo = 1,
Bar = 2,
Mek = 5
}
static void Main(string[] args)
{
var e1 = (MyEnum)5;
var e2 = (MyEnum)6;
Console.WriteLine("{0} {1}", e1, e2);
Console.ReadLine();
}
Note that casting to e2 also works! From the compiler perspective above this makes sense: the value__ field is simply filled with either 5 or 6 and when Console.WriteLine calls ToString(), the name of e1 is resolved while the name of e2 is not.
If that's not what you intended, use Enum.IsDefined(typeof(MyEnum), 6) to check if the value you are casting maps to a defined enum.
Also note that I'm explicit about the underlying type of the enum, even though the compiler actually checks this. I'm doing this to ensure I don't run into any surprises down the road. To see these surprises in action, you can use the following code (actually I've seen this happen a lot in database code):
public enum MyEnum : short
{
Mek = 5
}
static void Main(string[] args)
{
var e1 = (MyEnum)32769; // will not compile, out of bounds for a short
object o = 5;
var e2 = (MyEnum)o; // will throw at runtime, because o is of type int
Console.WriteLine("{0} {1}", e1, e2);
Console.ReadLine();
}
A: From a string (Enum.Parse is outdated; use Enum.TryParse):
enum Importance
{}
Importance importance;
if (Enum.TryParse(value, out importance))
{
}
A: You should build in some type matching relaxation to be more robust.
public static T ToEnum<T>(dynamic value)
{
if (value == null)
{
// default value of an enum is the object that corresponds to
// the default value of its underlying type
// https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/default-values-table
value = Activator.CreateInstance(Enum.GetUnderlyingType(typeof(T)));
}
else if (value is string name)
{
return (T)Enum.Parse(typeof(T), name);
}
return (T)Enum.ToObject(typeof(T),
Convert.ChangeType(value, Enum.GetUnderlyingType(typeof(T))));
}
Test Case
[Flags]
public enum A : uint
{
None = 0,
X = 1 << 0,
Y = 1 << 1
}
static void Main(string[] args)
{
var value = EnumHelper.ToEnum<A>(7m);
var x = value.HasFlag(A.X); // true
var y = value.HasFlag(A.Y); // true
var value2 = EnumHelper.ToEnum<A>("X");
var value3 = EnumHelper.ToEnum<A>(null);
Console.ReadKey();
}
A: Take the following example:
int one = 1;
MyEnum e = (MyEnum)one;
A: The easy and clear way for casting an int to enum in C#:
public class Program
{
public enum Color : int
{
Blue = 0,
Black = 1,
Green = 2,
Gray = 3,
Yellow = 4
}
public static void Main(string[] args)
{
// From string
Console.WriteLine((Color) Enum.Parse(typeof(Color), "Green"));
// From int
Console.WriteLine((Color)2);
// From number you can also
Console.WriteLine((Color)Enum.ToObject(typeof(Color), 2));
}
}
A: Here's an extension method that casts Int32 to Enum.
It honors bitwise flags even when the value is higher than the maximum possible. For example if you have an enum with possibilities 1, 2, and 4, but the int is 9, it understands that as 1 in absence of an 8. This lets you make data updates ahead of code updates.
public static TEnum ToEnum<TEnum>(this int val) where TEnum : struct, IComparable, IFormattable, IConvertible
{
if (!typeof(TEnum).IsEnum)
{
return default(TEnum);
}
if (Enum.IsDefined(typeof(TEnum), val))
{//if a straightforward single value, return that
return (TEnum)Enum.ToObject(typeof(TEnum), val);
}
var candidates = Enum
.GetValues(typeof(TEnum))
.Cast<int>()
.ToList();
var isBitwise = candidates
.Select((n, i) => {
if (i < 2) return n == 0 || n == 1;
return n / 2 == candidates[i - 1];
})
.All(y => y);
var maxPossible = candidates.Sum();
if (
Enum.TryParse(val.ToString(), out TEnum asEnum)
&& (val <= maxPossible || !isBitwise)
){//if it can be parsed as a bitwise enum with multiple flags,
//or is not bitwise, return the result of TryParse
return asEnum;
}
//If the value is higher than all possible combinations,
//remove the high imaginary values not accounted for in the enum
var excess = Enumerable
.Range(0, 32)
.Select(n => (int)Math.Pow(2, n))
.Where(n => n <= val && n > 0 && !candidates.Contains(n))
.Sum();
return Enum.TryParse((val - excess).ToString(), out asEnum) ? asEnum : default(TEnum);
}
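A quick usage sketch of the extension above (the Perms enum here is hypothetical):
[Flags]
public enum Perms { Read = 1, Write = 2, Execute = 4 }

// 9 = 8 + 1; the undefined 8 bit is stripped out, leaving Perms.Read
Perms p = 9.ToEnum<Perms>();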
A: For string, you can do the following:
MyEnum yourEnum;
bool result = Enum.TryParse(yourString, out yourEnum);
And make sure to check the result to determine if the conversion failed.
For int, you can do the following:
MyEnum someValue = (MyEnum)myIntValue;
A: Just cast it:
MyEnum e = (MyEnum)3;
Check if it's in range using Enum.IsDefined:
if (Enum.IsDefined(typeof(MyEnum), 3)) { ... }
A: In my case, I needed to return the enum from a WCF service. I also needed a friendly name, not just the enum.ToString().
Here's my WCF Class.
[DataContract]
public class EnumMember
{
[DataMember]
public string Description { get; set; }
[DataMember]
public int Value { get; set; }
public static List<EnumMember> ConvertToList<T>()
{
Type type = typeof(T);
if (!type.IsEnum)
{
throw new ArgumentException("T must be of type enumeration.");
}
var members = new List<EnumMember>();
foreach (string item in System.Enum.GetNames(type))
{
var enumType = System.Enum.Parse(type, item);
members.Add(
new EnumMember() { Description = enumType.GetDescriptionValue(), Value = ((IConvertible)enumType).ToInt32(null) });
}
return members;
}
}
Here's the Extension method that gets the Description from the Enum.
public static string GetDescriptionValue<T>(this T source)
{
FieldInfo fieldInfo = source.GetType().GetField(source.ToString());
DescriptionAttribute[] attributes = (DescriptionAttribute[])fieldInfo.GetCustomAttributes(typeof(DescriptionAttribute), false);
if (attributes != null && attributes.Length > 0)
{
return attributes[0].Description;
}
else
{
return source.ToString();
}
}
Implementation:
return EnumMember.ConvertToList<YourType>();
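For completeness, here is a sketch of an enum decorated with the Description attribute that this code reads (the enum itself is made up):
using System.ComponentModel;

public enum OrderStatus
{
    [Description("Awaiting payment")]
    Pending = 1,
    [Description("Shipped to customer")]
    Shipped = 2
}
Members without a Description attribute simply fall back to their ToString() name, as the extension method above shows.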
A: This can help you convert any input data to the desired enum. Suppose you have an enum like the one below, which is int-backed by default. Add a Default value as the first member of your enum; it is used by the helper method when no match is found for the input value.
public enum FriendType
{
Default,
Audio,
Video,
Image
}
public static class EnumHelper<T>
{
public static T ConvertToEnum(dynamic value)
{
var result = default(T);
var tempType = 0;
//see Note below
if (value != null &&
int.TryParse(value.ToString(), out tempType) &&
Enum.IsDefined(typeof(T), tempType))
{
result = (T)Enum.ToObject(typeof(T), tempType);
}
return result;
}
}
N.B.: Here I parse the value as an int, because an enum's underlying type is int by default.
If instead you define the enum with byte as its underlying type:
public enum MediaType : byte
{
Default,
Audio,
Video,
Image
}
You need to change the parsing in the helper method from
int.TryParse(value.ToString(), out tempType)
to
byte.TryParse(value.ToString(), out tempType)
and declare tempType as a byte so that the out parameter types match.
I checked my method with the following inputs:
EnumHelper<FriendType>.ConvertToEnum(null);
EnumHelper<FriendType>.ConvertToEnum("");
EnumHelper<FriendType>.ConvertToEnum("-1");
EnumHelper<FriendType>.ConvertToEnum("6");
EnumHelper<FriendType>.ConvertToEnum("");
EnumHelper<FriendType>.ConvertToEnum("2");
EnumHelper<FriendType>.ConvertToEnum(-1);
EnumHelper<FriendType>.ConvertToEnum(0);
EnumHelper<FriendType>.ConvertToEnum(1);
EnumHelper<FriendType>.ConvertToEnum(9);
A: Different ways to cast to and from Enum
enum orientation : byte
{
north = 1,
south = 2,
east = 3,
west = 4
}
class Program
{
static void Main(string[] args)
{
orientation myDirection = orientation.north;
Console.WriteLine("myDirection = {0}", myDirection); //output myDirection = north
Console.WriteLine((byte)myDirection); //output 1
string strDir = Convert.ToString(myDirection);
Console.WriteLine(strDir); //output north
string myString = "north"; //to convert string to Enum
myDirection = (orientation)Enum.Parse(typeof(orientation),myString);
}
}
A: I no longer remember where I originally found part of this enum extension (it was somewhere on Stack Overflow), but I took it and modified it for enums with Flags.
For enums with Flags I did this:
public static class Enum<T> where T : struct
{
private static readonly IEnumerable<T> All = Enum.GetValues(typeof (T)).Cast<T>();
private static readonly Dictionary<int, T> Values = All.ToDictionary(k => Convert.ToInt32(k));
public static T? CastOrNull(int value)
{
T foundValue;
if (Values.TryGetValue(value, out foundValue))
{
return foundValue;
}
// For enums with the Flags attribute.
try
{
bool isFlag = typeof(T).GetCustomAttributes(typeof(FlagsAttribute), false).Length > 0;
if (isFlag)
{
int existingIntValue = 0;
foreach (T t in Enum.GetValues(typeof(T)))
{
if ((value & Convert.ToInt32(t)) > 0)
{
existingIntValue |= Convert.ToInt32(t);
}
}
if (existingIntValue == 0)
{
return null;
}
return (T)(Enum.Parse(typeof(T), existingIntValue.ToString(), true));
}
}
catch (Exception)
{
return null;
}
return null;
}
}
Example:
[Flags]
public enum PetType
{
None = 0, Dog = 1, Cat = 2, Fish = 4, Bird = 8, Reptile = 16, Other = 32
};
Integer value -> result:
1 = Dog
13 = Dog | Fish | Bird
96 = Other (the undefined 64 bit is masked away)
128 = null
A: You can simply use an explicit conversion to cast an int to an enum, or an enum to an int:
class Program
{
static void Main(string[] args)
{
Console.WriteLine((int)Number.three); //Output=3
Console.WriteLine((Number)3); // Output: three
Console.Read();
}
public enum Number
{
Zero = 0,
One = 1,
Two = 2,
three = 3
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3803"
} |
Q: Automated script to zip IIS logs? I'd like to write a script/batch that will bunch up my daily IIS logs and zip them up by month.
ex080801.log which is in the format of exyymmdd.log
ex080801.log - ex080831.log gets zipped up and the log files deleted.
The reason we do this is because on a heavy site a log file for one day could be 500mb to 1gb so we zip them up which compresses them by 98% and dump the real log file. We use webtrend to analyze the log files and it is capable of reading into a zip file.
Does anyone have any ideas on how to script this or would be willing to share some code?
A: Here's my script which basically adapts David's, and zips up last month's logs, moves them and deletes the original log files. This can be adapted for Apache logs too.
The only problem with this is that you may need to edit the replace commands, if your DOS date function outputs the day of the week.
You'll also need to install 7-zip.
You can also download IISlogslite but it compresses each day's file into a single zip file which I didn't find useful. There is a vbscript floating about the web that does the same thing.
-------------------------------------------------------------------------------------
@echo on
:: Name - iislogzip.bat
:: Description - Server Log File Manager
::
:: History
:: Date Authory Change
:: 27-Aug-2008 David Crow Original (found on stack overflow)
:: 15-Oct-2008 AIMackenzie Slimmed down commands
:: ========================================================
:: setup variables and parameters
:: ========================================================
:: generate date and time variables
set month=%DATE:~3,2%
set year=%DATE:~8,2%
::Get last month and check edge conditions
set /a lastmonth=%month%-1
if %lastmonth% equ 0 set /a year=%year%-1
if %lastmonth% equ 0 set lastmonth=12
if %lastmonth% lss 10 set lastmonth=0%lastmonth%
set yymm=%year%%lastmonth%
set logpath="C:\WINDOWS\system32\LogFiles"
set zippath="C:\Program Files\7-Zip\7z.exe"
set arcpath="C:\WINDOWS\system32\LogFiles\WUDF"
:: ========================================================
:: Change to log file path
:: ========================================================
cd /D %logpath%
:: ========================================================
:: zip last months IIS log files, move zipped file to archive
:: then delete old logs
:: ========================================================
%zippath% a -tzip ex%yymm%-logs.zip %logpath%\ex%yymm%*.log
move "%logpath%\*.zip" "%arcpath%"
del %logpath%\ex%yymm%*.log
A: We use a script like the following. Gzip is from the cygwin project. I'm sure you could modify the syntax to use a zip tool instead. The "skip" argument is the number of files to not archive off -- we keep 11 days in the 'current' directory.
@echo off
setlocal
For /f "skip=11 delims=/" %%a in ('Dir D:\logs\W3SVC1\*.log /B /O:-N /T:C')do move "D:\logs\W3SVC1\%%a" "D:\logs\W3SVC1\old\%%a"
d:
cd "\logs\W3SVC1\old"
gzip -n *.log
Endlocal
exit
A: You'll need a command line tool to zip up the files. I recommend 7-Zip which is free and easy to use. The self-contained command line version (7za.exe) is the most portable choice.
Here's a two-line batch file that would zip the log files and delete them afterwards:
7za.exe a -tzip ex%1-logs.zip %2\ex%1*.log
del %2\ex%1*.log
The first parameter is the 4 digit year-and-month, and the second parameter is the path to the directory containing your logs. For example: ziplogs.bat 0808 c:\logs
It's possible to get more elaborate (i.e. searching the filenames to determine which months to archive). You might want to check out the Windows FINDSTR command for searching input text with regular expressions.
A: You can grab the command-line utilities package from DotNetZip to get tools to create zips from scripts. There's a nice little tool called Zipit.exe that runs on the command line, adds files or directories to zip files. It is fast, efficient.
A better option might be to just do the zipping from within PowerShell.
function ZipUp-Files ( $directory )
{
$children = get-childitem -path $directory
foreach ($o in $children)
{
if ($o.Name -ne "TestResults" -and
$o.Name -ne "obj" -and
$o.Name -ne "bin" -and
$o.Name -ne "tfs" -and
$o.Name -ne "notused" -and
$o.Name -ne "Release")
{
if ($o.PSIsContainer)
{
ZipUp-Files ( $o.FullName )
}
else
{
if ($o.Name -ne ".tfs-ignore" -and
!$o.Name.EndsWith(".cache") -and
!$o.Name.EndsWith(".zip") )
{
Write-output $o.FullName
$e= $zipfile.AddFile($o.FullName)
}
}
}
}
}
[System.Reflection.Assembly]::LoadFrom("c:\bin\Ionic.Zip.dll");
$zipfile = new-object Ionic.Zip.ZipFile("zipsrc.zip");
ZipUp-Files "DotNetZip"
$zipfile.Save()
A: Borrowed zip function from http://blogs.msdn.com/daiken/archive/2007/02/12/compress-files-with-windows-powershell-then-package-a-windows-vista-sidebar-gadget.aspx
Here is powershell answer that works wonders:
param([string]$Path = $(read-host "Enter the path"))
function New-Zip
{
param([string]$zipfilename)
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
function Add-Zip
{
param([string]$zipfilename)
if(-not (test-path($zipfilename)))
{
set-content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
(dir $zipfilename).IsReadOnly = $false
}
$shellApplication = new-object -com shell.application
$zipPackage = $shellApplication.NameSpace($zipfilename)
foreach($file in $input)
{
$zipPackage.CopyHere($file.FullName)
Start-sleep -milliseconds 500
}
}
$FilesToZip = dir $Path -recurse -include *.log
foreach ($file in $FilesToZip) {
New-Zip $file.BaseName
dir $($file.directoryname+"\"+$file.name) | Add-zip $($file.directoryname+"\$($file.basename).zip")
del $($file.directoryname+"\"+$file.name)
}
A: We use this powershell script: http://gallery.technet.microsoft.com/scriptcenter/31db73b4-746c-4d33-a0aa-7a79006317e6
It uses 7-Zip and verifies the files before deleting them.
A: Regex will do the trick... create a Perl/Python/PHP script to do the job for you.
Plain Windows batch files can't do much regex beyond FINDSTR's limited support.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do you troubleshoot character encoding problems? If all you see is the ugly no-char boxes, what tools or strategies do you use to figure out what went wrong?
(The specific scenario I'm facing is no-char boxes within a <select> when it should be showing Japanese chars.)
A: Firstly, "ugly no-char boxes" might not be an encoding problem, they might just be a sign you don't have a font installed that can display the glyphs in the page.
Most character encoding problems happen when strings are being passed from one system to another. For webapps, this is usually between the browser and the application, between the application and the filesystem and between the application and the database.
So you need to check where the mis-encoded data is coming from, what character encoding it has at the source, and what encoding it is being received as. The best way is to send through characters you know the system is having problems with, and examine them at each level of the app. What do they look like inside the app? In the database? When you get them back from the database? When they're displayed in the browser?
Sorry to be so general, but the question doesn't give much more to work with.
A: If the data you send to the browser becomes mangled (moji-bake) you will get trash characters. Also, if you specify the wrong character set in your META headers, your browser will render the page incorrectly, causing moji-bake again, sometimes in random places on the page.
When handling CJK character sets, you must be sure to use UTF8 character encoding throughout the lifetime of your program (data storage, retrieval, data manipulation in your code, displaying in the browser etc...)
What is UTF8?
UTF8 is a variable-length encoding of binary data, not a fixed-width string format. ASCII characters have a fixed length of 1 byte, whereas a UTF8 character can occupy 1, 2, 3 or 4 bytes. As such, UTF8 data read with the wrong encoding is prone to what the Japanese call "mojibake".
As a coder, from database to codebase to browser, you should try and use UTF8 completely. For email you can use UTF8, but you will probably find most mail servers and clients are still old and use a mishmash of different character sets (e.g. ISO-2022-JP).
Database Settings
If you are a MySQL user, then make sure all connections to the DB use UTF8, and that all tables/fields use UTF8. By default MySQL uses the latin1 character set with a Swedish collation. Those kooky Swedes love their sense of humour!!
Checking your Codebase
In my experience editors like Notepad++, Notepad2, UltraEdit, e, etc... all have UTF8 support problems. They mostly work, but since their developers don't use CJK languages themselves, they are not perfected. Issues like turning off BOM (Byte Order Mark), mangled tabs, poor character set conversion, etc ... all present problems.
I highly recommend using a proven UTF8 editor like Maruo. This is made by a Japanese company, but there is an English version (and a trial version) at http://www.hidemaru.interlink.or.jp/software/
Lastly, you may need to convert your source files into UTF8. Especially if the codebase itself has CJK language strings contained therein.
Manipulating Strings
Any string function needs to be multibyte safe. Notice I didn't say double-byte. UTF8 is not double byte but multibyte, depending on the total number of bytes used to represent a character. In PHP you need to call the MB string functions specifically. Ruby and other languages have more transparent support, but you need to check the docs for your flavour of application server!
META Tags
Check out google.co.jp or yahoo.co.jp for their META headers. These are sites that know how to do it properly. Basically, include the following META tag in the document <HEAD>:
<meta http-equiv="content-type" content="text/html; charset=utf-8">
It is usually safe to mix English HTML document type attributes with the above charset too. So adding the META tag above seems to work in a HTML document that has:
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
Email
This is a wholly different can of worms. UTF8 works a lot, but many older Japanese clients use ISO2022X more. This is not worth covering here.
Debugging UTF8 Issues
Once you have a reliable UTF8 editor like Maruo, you can create static pages and resolve your issues.
Hope that helps
A: Redirect the data to disk and use a Hex Editor. Most text editors / viewers do their own conversions behind the scenes, so it is difficult to be sure you are seeing the data in its true form.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Deserialize in a different language The log4j network adapter sends events as a serialised java object. I would like to be able to capture this object and deserialise it in a different language (python). Is this possible?
NOTE The network capturing is easy; it's just a TCP socket and reading in a stream. The difficulty is the deserialising part.
A: Generally, no.
The stream format for Java serialization is defined in this document, but you need access to the original class definitions (and a Java runtime to load them into) to turn the stream data back into something approaching the original objects. For example, classes may define writeObject() and readObject() methods to customise their own serialized form.
(edit: lubos hasko suggests having a little java program to deserialize the objects in front of Python, but the problem is that for this to work, your "little java program" needs to load the same versions of all the same classes that it might deserialize. Which is tricky if you're receiving log messages from one app, and really tricky if you're multiplexing more than one log stream. Either way, it's not going to be a little program any more. edit2: I could be wrong here, I don't know what gets serialized. If it's just log4j classes you should be fine. On the other hand, it's possible to log arbitrary exceptions, and if they get put in the stream as well my point stands.)
It would be much easier to customise the log4j network adapter and replace the raw serialization with some more easily-deserialized form (for example you could use XStream to turn the object into an XML representation)
A: Theoretically, it's possible. The Java Serialization, like pretty much everything in Javaland, is standardized. So, you could implement a deserializer according to that standard in Python. However, the Java Serialization format is not designed for cross-language use, the serialization format is closely tied to the way objects are represented inside the JVM. While implementing a JVM in Python is surely a fun exercise, it's probably not what you're looking for (-:
There are other (data) serialization formats that are specifically designed to be language agnostic. They usually work by stripping the data formats down to the bare minimum (number, string, sequence, dictionary and that's it) and thus requiring a bit of work on both ends to represent a rich object as a graph of dumb data structures (and vice versa).
Two examples are JSON (JavaScript Object Notation) and YAML (YAML Ain't Markup Language).
ASN.1 (Abstract Syntax Notation One) is another data serialization format. Instead of dumbing the format down to a point where it can be easily understood, ASN.1 is self-describing, meaning all the information needed to decode a stream is encoded within the stream itself.
And, of course, XML (eXtensible Markup Language), will work too, provided that it is not just used to provide textual representation of a "memory dump" of a Java object, but an actual abstract, language-agnostic encoding.
So, to make a long story short: your best bet is to either try to coerce log4j into logging in one of the above-mentioned formats, replace log4j with something that does that or try to somehow intercept the objects before they are sent over the wire and convert them before leaving Javaland.
Libraries that implement JSON, YAML, ASN.1 and XML are available for both Java and Python (and pretty much every programming language known to man).
A: In theory it's possible. Now how difficult in practice it might be depends on whether Java serialization format is documented or not. I guess, it's not. edit: oops, I was wrong, thanks Charles.
Anyway, this is what I suggest you to do
*
*capture from log4j & deserialize Java object in your own little Java program.
*now when you have the object again, serialize it using your own custom formatter.
Tip: Maybe you don't even have to write your own custom formatter. For example, JSON (scroll down for libs) has libraries for Python and Java, so you could in theory use the Java library to serialize your objects and the equivalent Python library to deserialize them.
*send output stream to your python application and deserialize it
Charles wrote:
the problem is that for this
to work, your "little java program"
needs to load the same versions of all
the same classes that it might
deserialize. Which is tricky if you're
receiving log messages from one app,
and really tricky if you're
multiplexing more than one log stream.
Either way, it's not going to be a
little program any more.
Can't you just simply reference Java log4j libraries in your own java process? I'm just giving general advice here that is applicable to any pair of languages (name of the question is pretty language agnostic so I just provided one of the generic solutions). Anyway, I'm not familiar with log4j and don't know whether you can "inject" your own serializer into it. If you can, then of course your suggestion is much better and cleaner.
A: I would recommend moving to a third-party format (by creating your own log4j adapters etc) that both languages understand and can easily marshal / unmarshal, e.g. XML.
A: Well, I am not a Python expert so I can't comment on how to solve your problem, but if you have a program in .NET you may use IKVM.NET to deserialize Java objects easily. I have experimented with this by creating a .NET client for Log4J log messages written to the Socket appender, and it worked really well.
I am sorry, if this answer does not make sense here.
A: If you can have a JVM on the receiving side and the class definitions for the serialized data, and you only want to use Python and no other language, then you may use Jython:
*
*you would deserialize what you received using the correct Java methods
*and then you process what you get with you Python code
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Ruby - Convert Integer to String In Ruby, trying to print out the individual elements of a String is giving me trouble. Instead of seeing each character, I'm seeing their ASCII values instead:
>> a = "0123"
=> "0123"
>> a[0]
=> 48
I've looked online but can't find any way to get the original "0" back out of it. I'm a little new to Ruby to I know it has to be something simple but I just can't seem to find it.
A: You want a[0,1] instead of a[0].
A: I believe this is changing in Ruby 1.9 such that "asdf"[2] yields "d" rather than the character code
A: To summarize:
This behavior will be going away in version 1.9, in which the character itself is returned, but in previous versions, trying to reference a single character of a string by its character position will return its character value (so "ABC"[2] returns 67)
There are a number of methods that return a range of characters from a string (see the Ruby docs on the String slice method) All of the following return "C":
"ABC"[2,1]
"ABC"[2..2]
"ABC".slice(2,1)
I find the range selector to be the easiest to read. Can anyone speak to whether it is less efficient?
A: Or you can convert the integer to its character value:
a[0].chr
A: @Chris,
That's just how [] and [,] are defined for the String class.
Check out the String API.
A: The [,] operator returns a string back to you, it is a substring operator, where as the [] operator returns the character which ruby treats as a number when printing it out.
A: I think each_char or chars describes better what you want.
irb(main):001:0> a = "0123"
=> "0123"
irb(main):002:0> Array(a.each_char)
=> ["0", "1", "2", "3"]
irb(main):003:0> puts Array(a.each_char)
0
1
2
3
=> nil
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Opcode cache impact on memory usage Can anyone tell me what is the memory usage overhead associated with PHP opcode cache?
I've seen a lot of reviews of opcode cache but all of them only concentrate on the performance increase. I have a small entry level VPS and memory limits are a concern for me.
A: Most of the memory overhead will come from the opcode cache size. Each opcode cacher has their own default(e.g. 30MB for APC) which you can change through the config file.
Other than the cache size, the actual memory overhead of the cacher itself is negligible.
A: In today's world it's negligible. I think memory consumption was about 50 MB bigger with eAccelerator than it was without when I did my benchmarks.
If you really need the speed but do have headaches that your RAM might be not enough: grab $40 and buy another GIG of RAM for your server ;)
A: You can set a limit to memory consumption for APC, but that potentially limits its effectiveness.
If you're just using it for silent opcode caching, then it should be fine. Once the memory allotment is full, no new files will be cached, but everything will work as expected. However, the user-space cache functions like apc_store() and apc_fetch() will fail silently and inexplicably if there is no memory available.
This can be tricky to catch and debug since no error is reported and no exception is thrown.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using .NET CodeDOM to declare and initialize a field in one statement I want to use CodeDOM to both declare and initialize my static field in one statement. How can I do this?
// for example
public static int MyField = 5;
I can figure out how to declare a static field, and I can set its value later, but I can't seem to get the above effect.
@lomaxx,
Naw, I just want static. I don't want const. This value can change. I just wanted the simplicity of declaring and init'ing in one fell swoop. As if anything in the codedom world is simple. Every type name is 20+ characters long and you end up building these huge expression trees. Makes my eyes bug out. I'm only alive today thanks to resharper's reformatting.
A: Once you create your CodeMemberField instance to represent the static field, you can assign the InitExpression property to the expression you want to use to populate the field.
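A minimal sketch of that, mirroring the field from the question:
using System.CodeDom;

var field = new CodeMemberField(typeof(int), "MyField")
{
    Attributes = MemberAttributes.Public | MemberAttributes.Static,
    InitExpression = new CodePrimitiveExpression(5)
};
// when run through a code generator, this emits: public static int MyField = 5;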
A: This post by Omer van Kloeten seems to do what you want. Notice that the output has the line:
private static Foo instance = new Foo();
A: I think what you want is a const rather than static. I assume what you want is the effect of having a static readonly which is why you always want the value to be 5.
In c# consts are treated exactly the same as a readonly static.
From the c# docs:
Even though constants are considered static members, a constant-declaration neither requires nor allows a static modifier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to create an exit message Is there a one line function call that quits the program and displays a message? I know in Perl it's as simple as:
die("Message goes here")
I'm tired of typing this:
puts "Message goes here"
exit
A: The abort function does this. For example:
abort("Message goes here")
Note: the abort message will be written to STDERR as opposed to puts which will write to STDOUT.
A: If you want to denote an actual error in your code, you could raise a RuntimeError exception:
raise RuntimeError, 'Message goes here'
This will print a stacktrace, the type of the exception being raised and the message that you provided. Depending on your users, a stacktrace might be too scary, and the actual message might get lost in the noise. On the other hand, if you die because of an actual error, a stacktrace will give you additional information for debugging.
A: I got here searching for a way to execute some code whenever the program ends.
Found this:
Kernel.at_exit { puts "sayonara" }
# do whatever
# [...]
# call #exit or #abort or just let the program end
# calling #exit! will skip the call
Called multiple times will register multiple handlers.
A: I've never heard of such a function, but it would be trivial enough to implement...
def die(msg)
puts msg
exit
end
Then, if this is defined in some .rb file that you include in all your scripts, you are golden.... just because it's not built in doesn't mean you can't do it yourself ;-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "213"
} |
Q: GUI toolkit for rapid development? I want to write a front-end to an application written in C/C++.
I use Solaris 10 and plan to port the application to some other architectures (Windows first).
A: I'd recommend taking a look at wxWidgets to provide some cross platform UI widgets that will work on Solaris and Windows.
A: Qt 4 is the best tool for this job. If you want to work with other languages, it also has bindings for Java and Python
A: On a Mac, this would be easy. The Cocoa API is great when programming in Objective C (which compiles fine with C/C++ files).
Otherwise the situation is a bit more grim. As for Rapid prototype, you might want to check the CodeGear (Borland/C++ Builder) tools. I think their VCL library is cross-platform.
Otherwise, you could interface with a scripting language like Ruby and use fantastic front end libraries like Shoes. Python also interfaces with wxWidgets to make writing cross-platform front ends easy. Keep in mind that this all requires taking time to make sure your C/C++ code can talk to the scripting language. This is not trivial, and the amount of effort required depends upon the style of your code base. (Oh my God.)
Lastly, you could just use wxWidgets itself. This might be your best bet since it requires no additional overhead than coding the UI itself. That said, C++ is not the greatest language for designing UIs.
And super lastly, consider writing a code generator that converts from say Shoes to whatever wxWidgets code is needed to generate the same Shoes app. That way you can do easier UI design but still get C++ code in the end. Likewise, you could code gen off of the Python/wxWidgets code. Then sell such a code generator. :-)
A: GTK-- and Glade.
That's the C++ binding for GTK.
GTK will work on Windows (just look at GIMP).
Works everywhere, no Qt license to mess with your millions-making.
A: I use wxWidgets myself. It makes good use of the C++ language features and uses smart pointers, so object and memory management is not that hard. In fact, it feels like writing in a scripting language.
Coupled with a dialog editor/code generator like wxFormBuilder or wxDesigner, (links to screenshots) it becomes a good toolkit for rapid development.
A: Have a look at FLTK which supports X11 and Windows.
A: Ultimate++ is a cross platform rapid application development framework for C++. It is aimed specifically at rapid development. The Ultimate++ website provides some comparisons to other frameworks mentioned such as Qt and wxWidgets.
A: I have used ASP.NET Web Forms to make UI front-end to collection of command line application written in legacy language, RESTful-ish web service, and bash scripts.
Once it works on Firefox, it should work at least on Firefox on other architecture. If you haven't played around with it, you should give ASP.NET a try (ASP.NET MVC seems to be the current trend). Not quite the same as RAD, but it does give you visual design of forms etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to Ease TDD with MSTest / VS2008 I've read time and time again that TDD/test first is more difficult with MSTest than it is with other testing frameworks such as nUnit, MBUnit, etc... What are some suggested manual workarounds and/or 3rd party bits that you suggest when MSTest is the only option due to infrastructure policy? I'm mainly wondering about VS 2008 Team Suite, but I suppose tips for VS 2008 Pro on up would be suitable too since some MSTest functionality is now included with those versions as well.
A: I'm curious here. What I don't understand is that people compare all the available open source tools with MSTest and start bashing it, commenting on how unwieldy it is, how unintuitive, etc. IMHO, it's because it's fundamentally different from xUnit frameworks: it's optimized for parallel execution.
Even the quirk of having static ClassInitialize and Cleanup methods, and a unique TestContext for each test, follows from the nextgen (at least for Windows business programmers in MS languages) parallel programming concepts.
I had the misfortune of working on a project with tens of thousands of unit tests. They used to take up virtually most of the build time! With MSTest, we cut that down to very manageable timelines.
A: My colleague Mike Hadlow has a pretty good summary of why we utterly loathe MSTest here.
He's managed to remove it from his project, but I'm currently working on a larger project with more politics involved so we're still using it.
The upshot of it is that whoever implemented MSTest doesn't understand TDD. I'm sorry to sound like an M$ basher - I'm really not. But I'm annoyed that I'm having to put up with a very poor tool.
A: MSTest is certainly not as efficient or extensible as some of the open source frameworks, but it is workable. Since the question asks about making life easier with MSTest and not about alternatives, here are my MSTest tips.
*
*Shortcuts. Like Haacked said, take a few seconds to learn the shortcuts.
*Current Context. Since MSTest is so slow, run tests only in the current context when you can. (CTRL+R, CTRL+T). If your cursor is in a test method, this will only run the method. If your cursor is outside a method, but in a test class, this will only run the test. And with namespace, etc etc
*Efficient tests and organization. It's dog slow. Make things as best as you can by writing efficient tests. Move slow tests to other test classes or projects so you can run the fast tests more frequently.
*Testing with WCF. If you're testing services, be sure to DEBUG tests rather than RUN tests so Visual Studio can fire up the ASP.NET development web servers. After these are up, then you can go back to RUN, but it can be easier to just always DEBUG so you don't have to think about it.
*Config Files. Edit your test-run configuration to move .config files into the test execution folder.
*Integration with Source Safe. You need to be aware that MSTest hates SourceSafe and the feeling is mutual. Because MSTest wants to put test files under source control, and add them to the solution, it must check out the solution every time you run tests. So SourceSafe must be running in multi-check-out mode to avoid killing your fellow developers.
*Ignore the fluff With MSTest, you get a dozen different windows and views. Test Runs, Test View, Test Lists ... they're all less-than-helpful. Stick with Test Results and you'll be much happier.
*Stick with "Unit Tests". When you add a new test, you can add an ordered test, a unit test, or run through a wizard. Stick with just plain simple unit tests.
A: I have not seen any serious issues with MSTest. What, specifically, are you talking about? We are, in fact, moving away from NUnit to MSTest. I do not know our reasons for this, though.
A: There are lots of config files with MSTest, making it less convenient to work with.
Another reason I chose MbUnit is its "rollback" feature, which rolls back all the database work done in a test, so you can do full-circuit tests and not worry about the pond being tainted after the test.
There is also the lack of RowTest facilities in MSTest.
I suggest just running MbUnit as a dependency inside the build process; it's easy enough to float it with your bin and reference it, no installation required.
A: *
*MSTest has "high friction": compare getting a build script going with NAnt and MbUnit to MSTest and MSBuild. No comparison.
*MSTest is slow. MbUnit and NUnit are in my experience faster (Gallio may help here).
*MSTest adds a bunch of stuff I don't need, like weird config files etc.
*MSTest doesn't have the feature set of other OS test frameworks. Check out xUnit and MbUnit.
It's just too hard to use and there are many better options.
A: As mentioned, you need to install the full IDE in order to use MSTest on another machine, which is a bit crap. I suppose this is because they want to make sure that unit tests only work on the higher-end Visual Studio editions and you shouldn't be able to run them in any other way.
Also, MSTest is quite slow. This is because in between each test it rebuilds the entire context, which makes sure that a former test (failed or otherwise) doesn't influence the current test, but slows things down. You can however use the /noisolation flag, which will run all tests within the MSTest process, which is quicker.
To speed things up in the IDE: go to Tools > Options, then select Test Tools. Select the sub-item called Test Execution and in the dialog to the right make sure the check box called 'Keep test execution engine running between test runs' is checked.
A: If you have no choice but to use MSTest, learn the keyboard shortcuts. They'll make your life a little easier.
Test in Current Context: CTRL+R, T
All Tests in Solution: CTRL+R, A
Debug Tests in Current Context: CTRL+R, CTRL+T
Debug All Tests in Solution: CTRL+R, CTRL+A
A: To answer a non-pointed question, my answer would be
"probably NUnit just stays out of your face."
Disclaimer: I've no actual experience with the MS version of xUnit, however I hear problems like 'You need to install the gigantic IDE just to run your tests on a separate machine', which is a complete no-no.
Other than that, MS has this way of contorting the right path for a newbie via some kind of IDE bell/whistle that runs counter to the whole idea. Generating tests from classes was one I remember from a year or so back... that defeats the whole point of test-driving. Your design is supposed to emerge from tiny steps of RGR: write a test, make it pass, refactor. If you use that tool, it robs you of the entire experience.
I'll stop with my sermon.. now :)
A: I've done TDD development using NUnit for a number of years and have been using MSTest for about 4 months now due to a role change.
I don't think that MSTest stops someone from doing TDD. You still have all the core things you need for TDD such as basic asserts and mocking frameworks (I use Rhino Mocks).
MSTest does integrate closely with Visual Studio, the best component of this integration is the Code Coverage Tool that is built in.
BUT
There are a number of compelling reasons not to use MSTest. The two biggest turn offs in my opinion are:
*
*The lack of assert options (compared to NUnit)
*The sluggish test runner (slow compared to NUnit)
This means that writing asserts takes more code, which combined with a slow test runner makes the whole process slower than with NUnit.
The open source options also have a lot more support in the community.
If you are using TFS for CI, then you will need to jump through a few hoops/hacks to get NUnit to publish test results. Running tests on TFS with MSTest, in comparison, is very easy and straightforward.
If you don't touch TFS than I'd go NUnit all the way, it's just nicer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Split data access class into reader and writer or combine them? This might be on the "discussy" side, but I would really like to hear your view on this.
Previously I have often written data access classes that handled both reading and writing, which often led to poor naming, like FooIoHandler etc. The rule of thumb that classes that are hard to name probably are poorly designed suggests that this is not a good solution.
So, I have recently started splitting the data access into FooWriter and FooReader, which leads to nicer names and gives some additional flexibility, but at the same time I kind of like keeping it together, if the classes are not to big.
Is a reader/writer separation a better design, or should I combine them? If I should combine them, what the heck should I name the class?
Thanks /Erik
A: ORM might be your best solution.
Or use a repository type pattern, with a "thingContext" object that is responsible for state persistence.
Personally, I use the activeRecord pattern, where save logic is baked into a base class, but I'm leaving it in favor of an nHibernate style repository pattern. The allowance for DDD and testing things without a db is very nice to have in a framework type situation, where my business logic is now gaining traction for a new UI.
A: I am now using Linq to Sql. This solves the problem entirely.
However if you do not have that option (or some similar ORM tool), I don't see any reason to separate Read/Write methods. It just adds more classes and complicates data access. I have always designed it as follows:
*
*Component/Business Object: Car
*Data Access, containing static Read and Write methods: CarDB
Example Usage:
Car car = new Car();
car.Manufacturer = "Toyota";
car.Model = "Camry";
car.Year = 2006;
car.CarID = CarDB.InsertCar(car);
car.OwnerID = 2;
CarDB.UpdateCar(car);
This also makes sense for data access where both Reads and Write need to be performed as part of the same transaction. If you split up the classes, where would that go?
A: Ignoring ORM (not because I'm either for or against it) I would keep them in the same class. They are both facets of a single responsibility and separating them just makes you look in two places where I can't really think of a good reason you would want to do that.
A: Something that reads and writes to a backend store could be called a data accessor, or ReaderWriter, or IO, or Store.
So how about one of:
*
*FooDataAccessor
*FooAccessor
*FooReaderWriter
*FooRW
*FooIO
*FooStore
*FooStorage
A: When given the choice I generally subclass the reader to create the writer.
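A minimal sketch of that subclassing approach (Foo and the method names are hypothetical):
public class Foo { public int Id; }

public class FooReader
{
    public virtual Foo GetById(int id)
    {
        // SELECT ...; mapping omitted in this sketch
        return null;
    }
}

// the writer inherits all the read operations and adds the mutating ones
public class FooWriter : FooReader
{
    public void Save(Foo foo)
    {
        // INSERT or UPDATE ...
    }
}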
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to use Python distutils? I wrote a quick program in python to add a gtk GUI to a cli program. I was wondering how I can create an installer using distutils. Since it's just a GUI frontend for a command line app it only works in *nix anyway so I'm not worried about it being cross platform.
My main goal is to create a .deb package for Debian/Ubuntu users, but I don't understand make/configure files. I've primarily been a web developer up until now.
edit: Does anyone know of a project that uses distutils so I could see it in action and, you know, actually try building it?
Here are a few useful links
*
*Ubuntu Python Packaging Guide
This guide is very helpful. I don't know how I missed it during my initial wave of googling. It even walks you through packaging up an existing Python application.
*The Ubuntu MOTU Project
This is the official package-maintenance project at Ubuntu. Anyone can join, and there are lots of tutorials and info about creating packages of all types, including the above 'Python packaging guide'.
*"Python distutils to deb?" - Ars Technica Forum discussion
According to this conversation, you can't just use distutils. It doesn't follow the debian packaging format (or something like that). I guess that's why you need dh_make as seen in the Ubuntu Packaging guide
*"A bdist_deb command for distutils
This one has some interesting discussion (it's also how I found the ubuntu guide) about concatenating a zip-file and a shell script to create some kind of universal executable (anything with python and bash that is). weird. Let me know if anyone finds more info on this practice because I've never heard of it.
*Description of the deb format and how distutils fit in - python mailing list
A: apt-get install python-stdeb
Python to Debian source package conversion utility
This package provides some tools to produce Debian packages from Python packages via a new distutils command, sdist_dsc. Automatic defaults are provided for the Debian package, but many aspects of the resulting package can be customized via a configuration file.
*
*pypi-install will query the Python Package Index (PyPI) for a
package, download it, create a .deb from it, and then install
the .deb.
*py2dsc will convert a distutils-built source tarball into a Debian
source package.
A: Most Python programs will use distutils. Django is a one - see http://code.djangoproject.com/svn/django/trunk/setup.py
You should also read the documentation, as it's very comprehensive and has some good examples.
A: I found the following tutorial to be very helpful. It's shorter than the distutils documentation and explains how to setup a typical project step by step.
A: See the distutils simple example. That's basically what it is like, except real install scripts usually contain a bit more information. I have not seen any that are fundamentally more complicated, though. In essence, you just give it a list of what needs to be installed. Sometimes you need to give it some mapping dicts since the source and installed trees might not be the same.
Here is a real-life (anonymized) example:
#!/usr/bin/python
from distutils.core import setup
setup (name = 'Initech Package 3',
description = "Services and libraries ABC, DEF",
author = "That Guy, Initech Ltd",
author_email = "[email protected]",
version = '1.0.5',
package_dir = {'Package3' : 'site-packages/Package3'},
packages = ['Package3', 'Package3.Queries'],
data_files = [
('/etc/Package3', ['etc/Package3/ExternalResources.conf'])
])
A: distutils really isn't all that difficult once you get the hang of it. It's really just a matter of putting in some meta-information (program name, author, version, etc) and then selecting what files you want to include. For example, here's a sample distutils setup.py module from a decently complex python library:
Kamaelia setup.py
Note that this doesn't deal with any data files or or whatnot, so YMMV.
On another note, I agree that the distutils documentation is probably some of python's worst documentation. It is extremely inclusive in some areas, but neglects some really important information in others.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Why is pagination so resource-expensive? It's one of those things that seems to have an odd curve where the more I think about it, the more it makes sense. To a certain extent, of course. And then it doesn't make sense to me at all.
Care to enlighten me?
A: Lubos is right, the problem is not the fact that you are paging (which takes a HUGE amount of data off the wire), but that you need to figure out what is actually going on the page..
The fact that you need to page implies there is a lot of data. A lot of data takes a long time to sort :)
A: This is a really vague question. We'd need a concrete example to get a better idea of the problem.
A: This question seems pretty well covered, but I'll add a little something MySQL specific as it catches out a lot of people:
Avoid using SQL_CALC_FOUND_ROWS. Unless the dataset is trivial, counting matches and retrieving x amount of matches in two separate queries is going to be a lot quicker. (If it is trivial, you'll barely notice a difference either way.)
A: Because in most cases you've got to sort your results first. For example, when you search on Google, you can view only up to 100 pages of results. They don't bother sorting by page-rank beyond 1000 websites for given keyword (or combination of keywords).
Pagination is fast. Sorting is slow.
A: I thought you meant pagination of the printed page - that's where I cut my teeth. I was going to enter a great monologue about collecting all the content for the page, positioning (a vast number of rules here, constraint engines are quite helpful) and justification... but apparently you were talking about the process of organizing information on webpages.
For that, I'd guess database hits. Disk access is slow. Once you've got it in memory, sorting is cheap.
A: Of course sorting on a random query takes some time, but if you're having problems with the same paginated query being used regulary, there's either something wrong with the database setup (improperly indexing/none at all, too little memory etc. I'm not a db-manager) or you're doing pagination seriously wrong:
Terribly wrong: e.g. doing select * from hugetable where somecondition; into an array, getting the page count with array.length, picking the relevant indexes and discarding the array, and then repeating this for each page... That's what I call seriously wrong.
The better solution is two queries: one getting just the count, then another getting the results using limit and offset. (Some proprietary, non-standard SQL server might have a one-query option, I dunno.)
The bad solution might actually work quite okay on small tables (in fact it's not unthinkable that it's faster on very small tables, because the overhead of making two queries is bigger than getting all rows in one query. I'm not saying it is so...) but as soon as the database begins to grow the problems become obvious.
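To make the two-query approach concrete, here is a sketch of the SQL involved, written as C# string constants (this assumes a LIMIT/OFFSET dialect; table and column names are placeholders):
const string countSql = "SELECT COUNT(*) FROM hugetable WHERE somecondition";
const string pageSql =
    "SELECT * FROM hugetable WHERE somecondition " +
    "ORDER BY id LIMIT @pageSize OFFSET @offset";
The count can be fetched once and cached, while the page query only transfers the rows actually displayed.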
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What does ServerVariables["APPL_MD_PATH"] retrieves the metabase path for the Application for the ISAPI DLL mean? I've trying to get an ASP.net (v2) app to work in the debugger and keep running into a problem because the value returned by the following code is an empty string:
HttpContext.Current.Request.ServerVariables["APPL_MD_PATH"].ToLower()
I have found out that this "Retrieves the metabase path for the Application for the ISAPI DLL". Can anybody shed some light on what this means and why it might be empty?
This code works in our live environment, but I want it to work on my PC and be able to step through source code so I can look at another problem...
A: Are you running your application locally inside of IIS or inside of the development web server? If it's the latter, then that's probably why: Cassini (the development web server) doesn't do ISAPI, so this value will be empty.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Get current process CPU usage in C On Windows I can do:
HANDLE hProcess = GetCurrentProcess();
FILETIME ftCreation, ftExit, ftKernel, ftUser;
GetProcessTimes(hProcess, &ftCreation, &ftExit, &ftKernel, &ftUser);
SYSTEMTIME stKernel;
FileTimeToSystemTime(&ftKernel, &stKernel);
SYSTEMTIME stUser;
FileTimeToSystemTime(&ftUser, &stUser);
printf("Time in kernel mode = %uh %um %us %ums", stKernel.wHour,
stKernel.wMinute, stKernel.wSecond, stKernel.wMilliseconds));
printf("Time in user mode = %uh %um %us %ums", stUser.wHour,
stUser.wMinute, stUser.wSecond, stUser.wMilliseconds));
How can I do the same thing on *nix?
A: Check getrusage, I think that should solve your problem. Call getrusage(RUSAGE_SELF, &usage) and read the ru_utime (user CPU time) and ru_stime (kernel CPU time) members of the struct rusage it fills in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Change priority of the current process in C On Windows I can do:
HANDLE hCurrentProcess = GetCurrentProcess();
SetPriorityClass(hCurrentProcess, ABOVE_NORMAL_PRIORITY_CLASS);
How can I do the same thing on *nix?
A: If doing something like this under Unix you want to (as root) chmod your task and set the s bit. Then you can change who you are running as, what your priority is, your thread scheduling, etc. at run time.
It is great as long as you are not writing a massively multithreaded app with a bug in it, so that you take over a 48-CPU box and nobody can shut you down, because you have each CPU spinning at 100% with all threads set to SCHED_FIFO (runs to completion) running as root.
Nah .. I wouldn't be speaking from experience ....
A: Try:
#include <sys/time.h>
#include <sys/resource.h>
int main(){
setpriority(PRIO_PROCESS, 0, -20);
}
Note that you must be running as superuser for this to work.
(for more info, type 'man setpriority' at a prompt.)
A: @ allain Can you lower your own process' priority without being superuser?
Sure. Be aware, however, that this is a one way street. You can't even get back to where you started. And even fairly small reductions in priority can have startlingly large effects on running time when there is significant load on the system.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How to Maintain Correct Javascript Event After Using cloneNode(true) I have a form element that contains multiple lines of inputs. Think of each line as attributes of a new object that I want to create in my web application. And, I want to be able to create multiple new objects in one HTTP POST. I'm using Javascript's built-in cloneNode(true) method to clone each line. The problem is that each input-line also has a removal link attached to its onclick-event:
// prototype based
<div class="input-line">
<input .../>
<a href="#" onclick="$(this).up().remove();"> Remove </a>
</div>
When the cloned input-line's removal link is clicked, it also removes any input-lines that were cloned from the same dom object. Is it possible to rebind the "this" object to the proper anchor tag after using cloneNode(true) on the above DOM element?
A: Don't put a handler on each link (this really should be a button, BTW). Use event bubbling to handle all buttons with one handler:
formObject.onclick = function(e)
{
e=e||event; // IE sucks
var target = e.target||e.srcElement; // and sucks again
// target is the element that has been clicked
if (target && target.className=='remove')
{
target.parentNode.parentNode.removeChild(target.parentNode);
return false; // stop event from bubbling elsewhere
}
}
plus markup like this:
<div>
<input…>
<button type=button class=remove>Remove without JS handler!</button>
</div>
A: You could try cloning using the innerHTML method, or a mix:
var newItem = $(item).cloneNode(false);
newItem.innerHTML = $(item).innerHTML;
Also: I think cloneNode doesn't clone events, registered with addEventListener. But IE's attachEvent events are cloned. But I might be wrong.
A: I tested this in IE7 and FF3 and it worked as expected - there must be something else going on.
Here's my test script:
<div id="x">
<div class="input-line" id="y">
<input type="text">
<a href="#" onclick="$(this).up().remove();"> Remove </a>
</div>
</div>
<script>
$('x').appendChild($('y').cloneNode(true));
$('x').appendChild($('y').cloneNode(true));
$('x').appendChild($('y').cloneNode(true));
</script>
A: To debug this problem, I would wrap your code
$(this).up().remove()
in a function:
function _debugRemoveInputLine(el) { // 'this' is a reserved word and can't be a parameter name
    debugger;
    $(el).up().remove();
}
This will allow you to find out what $(el) is returning. If it is indeed returning more than one object (multiple rows), then you definitely know where to look -- in the code which creates the element using cloneNode. Do you do any modification of the resulting element (i.e. changing the id attribute)?
If I had the problem you're describing, I would consider adding unique IDs to the triggering element and the "line" element.
A: First answer is the correct one.
Pornel is implicitly suggesting the most cross-browser and framework agnostic solution.
Haven't tested it, but the concept will work in these dynamic situations involving events.
A: Looks like you're using jQuery? It has a method to clone an element with events: http://docs.jquery.com/Manipulation/clone#true
EDIT: Oops I see you're using Prototype.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Cannot access a disposed object - How to fix? In a VB.NET WinForms project, I get an exception
Cannot access a disposed object
when closing a form. It occurs very rarely and I cannot recreate it on demand. The stack trace looks like this:
Cannot access a disposed object. Object name: 'dbiSchedule'.
at System.Windows.Forms.Control.CreateHandle()
at System.Windows.Forms.Control.get_Handle()
at System.Windows.Forms.Control.PointToScreen(Point p)
at Dbi.WinControl.Schedule.dbiSchedule.a(Boolean A_0)
at Dbi.WinControl.Schedule.dbiSchedule.a(Object A_0, EventArgs A_1)
at System.Windows.Forms.Timer.OnTick(EventArgs e)
at System.Windows.Forms.Timer.TimerNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
The dbiSchedule is a schedule control from Dbi-tech. There is a timer on the form that updates the schedule on the screen every few minutes.
Any ideas what is causing the exception and how I might go about fixing it? or even just being able to recreate it on demand?
Hi! Thanks for all the answers. We do stop the Timer on the FormClosing event and we do check the IsDisposed property on the schedule component before using it in the Timer Tick event, but it doesn't help.
It's a really annoying problem because if someone did come up with a solution that worked - I wouldn't be able to confirm the solution because I cannot recreate the problem manually.
A: Try checking the IsDisposed property before accessing the control. You can also check it on the FormClosing event, assuming you're using the FormClosed event.
We do stop the Timer on the
FormClosing event and we do check the
IsDisposed property on the schedule
component before using it in the Timer
Tick event but it doesn't help.
Calling GC.Collect before checking IsDisposed may help, but be careful with this. Read this article by Rico Mariani "When to call GC.Collect()".
A:
we do check the IsDisposed property on
the schedule component before using it
in the Timer Tick event but it doesn't
help.
If I understand that stack trace, it's not your timer which is the problem - it's one in the control itself, and it might be the vendor who is not cleaning up properly.
Are you explicitly calling Dispose on their control?
A: Stopping the timer doesn't mean that it won't be called again: depending on when you stop the timer, a timer_tick may still be queued on the message loop for the form. What will happen is that you'll get one more tick that you may not be expecting. What you can do is, in your timer_tick, check the Enabled property of your timer before executing the Timer_Tick method.
A: I had the same problem and solved it using a boolean flag that gets set when the form is closing (the System.Timers.Timer does not have an IsDisposed property). Everywhere on the form I was starting the timer, I had it check this flag. If it was set, then don't start the timer. Here's the reason:
The Reason:
I was stopping and disposing of the timer in the form closing event. I was starting the timer in the Timer_Elapsed() event. If I were to close the form in the middle of the Timer_Elapsed() event, the timer would immediately get disposed by the Form_Closing() event. This would happen before the Timer_Elapsed() event would finish and more importantly, before it got to this line of code:
_timer.Start()
As soon as that line was executed an ObjectDisposedException() would get thrown with the error you mentioned.
The Solution:
Private Sub myForm_FormClosing(ByVal sender As System.Object, ByVal e As System.Windows.Forms.FormClosingEventArgs) Handles MyBase.FormClosing
' set the form closing flag so the timer doesn't fire even after the form is closed.
_formIsClosing = True
_timer.Stop()
_timer.Dispose()
End Sub
Here's the timer elapsed event:
Private Sub Timer_Elapsed(ByVal sender As System.Object, ByVal e As System.Timers.ElapsedEventArgs) Handles _timer.Elapsed
' Don't want the timer stepping on itself (ie. the time interval elapses before the first call is done processing)
_timer.Stop()
' do work here
' Only start the timer if the form is open. Without this check, the timer will run even if the form is closed.
If Not _formIsClosing Then
_timer.Interval = _refreshInterval
_timer.Start() ' ObjectDisposedException() is thrown here unless you check the _formIsClosing flag.
End If
End Sub
The interesting thing to know, even though it would throw the ObjectDisposedException when attempting to start the timer, the timer would still get started causing it to run even when the form was closed (the thread would only stop when the application was closed).
A: It looks like a threading issue.
Hypothesis: Maybe you have the main thread and a timer thread accessing this control. The main thread shuts down - calling Control.Dispose() to indicate that I'm done with this Control and I shall make no more calls to this. However, the timer thread is still active - a context switch to that thread, where it may call methods on the same control. Now the control says I'm Disposed (already given up my resources) and I shall not work anymore. ObjectDisposed exception.
How to solve this: In the timer thread, before calling methods/properties on the control, do a check with
If ControlObject.IsDisposed Then Return ' or do whatever - but don't call control methods
OR stop the timer thread BEFORE disposing the object.
A: Are you sure the timer isn't outliving the 'dbiSchedule' somehow and firing after the 'dbiSchedule' has been disposed of?
If that is the case you might be able to recreate it more consistently if the timer fires more quickly thus increasing the chances of you closing the Form just as the timer is firing.
A: Another place you could stop the timer is the FormClosing event - this happens before the form is actually closed, so is a good place to stop things before they might access unavailable resources.
A: If this happens sporadically then my guess is that it has something to do with the timer.
I'm guessing (and this is only a guess since I have no access to your code) that the timer is firing while the form is being closed. The dbiSchedule object has been disposed but the timer somehow still manages to try to call it. This shouldn't happen, because if the timer has a reference to the schedule object then the garbage collector should see this and not dispose of it.
This leads me to ask: are you calling Dispose() on the schedule object manually? If so, are you doing that before disposing of the timer? Be sure that you release all references to the schedule object before Disposing it (i.e. dispose of the timer beforehand).
Now I realize that a few months have passed between the time you posted this and when I am answering, so hopefully, you have resolved this issue. I'm writing this for the benefit of others who may come along later with a similar issue.
Hope this helps.
A: Looking at the error stack trace, it seems your timer is still active. Try to cancel the timer upon closing the form (i.e. in the form's OnClose() method). This looks like the cleanest solution.
A: My solution was to wrap the call in a try/catch, and it is working fine:
try {
this.Invoke(new EventHandler(DoUpdate));
}catch { }
A: because the solution folder was inside OneDrive folder.
If you moving the solution folders out of the one drive folder made the errors go away.
best
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: What is the best way to package and distribute an Excel application I've writen an Excel-based, database reporting tool. Currentely, all the VBA code is associated with a single XLS file. The user generates the report by clicking a button on the toolbar. Unfortunately, unless the user has saved the file under another file name, all the reported data gets wiped-out.
When I have created similar tools in Word, I can put all the code in a template (.dot) file and call it from there. If I put the template file in the Office startup folder, it will launch everytime I start Word. Is there a similar way, to package and distribute my code in Excel? I've tried using Add-ins, but I didn't find a way to call the code from the application window.
A: I always use an Add-in (xla)/Template (xlt) combination. Your add-in creates the menu (or other UI entry points) and loads templates as needed. It also writes data that you want to persist to a database (Access, SQL Server, text file, or even an xls file).
The first rule is to keep your code separate from your data. Then, if you later have bug fixes or other code changes, you can send a new add-in and all of their templates and databases aren't affected.
A: You can modify the user's personal.xls file, stored in the excel startup directory (varies between Office versions). If you have lots of users though, that can be fiddly.
An alternative way to get over your problem is to store the macro in a template (.xlt) file. Then when the users opens it they can't save it back over the original file, but have to specify a new filename to save it as. The disadvantage of this method is that you then get multiple copies of your original code all over the place with each saved file. If you modify the original .xlt and someone reruns the old macro in a previously-saved .xls file then things can get out of step.
A: Simply move your code into an Excel Add-in (XLA) - this gets loaded at startup (assuming it's in the %AppData%\Microsoft\Excel\XLSTART folder), but if it's an add-in, not a workbook, then only your macros and defined startup functions will be loaded.
If the functions depend on a spreadsheet itself, then you might want to use a combination of templates and addins.
I'm distributing part of an application like this, we have addins for Word, Excel and Powerpoint (XLA, PPA, DOT) and also Office 2007 'ribbon' versions (DOTM, XLAM and PPAM)
The addin startup code creates toolbar buttons if they're not found, this means in any workbook/document/etc they can simply hit the toolbar button to run our code (we have two action buttons and one button that displays a settings dialog)
Templates aren't really the way to go for VBA code, Addins are definitely the way to go...
So to load the toolbars on startup we're using something like.. (checking to see if toolbar exists though - code will run for each worksheet that is opened, but toolbars are persistent for the user session)
Public Sub Workbook_Open()
' startup code / add toolbar / load saved settings, etc.
End Sub
hope that helps :)
A: Have you looked into ClickOnce deploying the Excel file?
A: What about saving the Excel file to a network folder with read-only permissions? Authentication can be done with integrated Windows authentication, so you don't need to store a database connection password in the VBA. Then you only need to distribute a link to this location to your users once. When you make an update, you only change the data in that folder without the users noticing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Which is a better approach in logging - files or DB? Okay, here's the scenario. I have a utility that processes tons of records, and enters information to the Database accordingly.
It works on these records in multi-threaded batches. Each such batch writes to the same log file for creating a workflow trace for each record. Potentially, we could be making close to a million log writes in a day.
Should this log be made into a database residing on another server? Considerations:
*
*The obvious disadvantage of multiple threads writing to the same log file is that the log messages are shuffled amongst each other. In the database, they can be grouped by batch id.
*Performance - which would slow down the batch processing more? writing to a local file or sending log data to a database on another server on the same network. Theoretically, the log file is faster, but is there a gotcha here?
Are there any optimizations that can be done on either approach?
Thanks.
A: The interesting question, should you decide to log to the database, is where do you log database connection errors?
If I'm logging to a database, I always have a secondary log location (file, event log, etc) in case there are communication errors. It really does make it easier to diagnose issues later on.
A: One thing that comes to mind is that you could have each thread writing to its own log file and then do a daily batch run to combine them.
If you are logging to database you probably need to do some tuning and optimization, especially if the DB will be across the network. At the least you will need to be reusing the DB connections.
Furthermore, do you have any specific need to have the log in a database? If all you need is a "grep" then I don't think you gain much by logging to a database.
A: I second the other answers here, depends on what you are doing with the data.
We have two scenarios here:
*
*The majority of the logging is to a DB since admin users for the products we build need to be able to view them in their nice little app with all the bells and whistles.
*We log all of our diagnostics and debug info to file. We have no need for really "prettifying" it and TBH, we don't even often need it, so we just log and archive for the most part.
I would say if the user is doing anything with it, then log to DB, if its for you, then a file will probably suffice.
A: Not sure if it helps, but there's also a utility called Microsoft LogParser that you can supposedly use to parse text-based log files and use them as if they were a database. From the website:
Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows® operating system such as the Event Log, the Registry, the file system, and Active Directory®. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart. Most software is designed to accomplish a limited number of specific tasks. Log Parser is different... the number of ways it can be used is limited only by the needs and imagination of the user. The world is your database with Log Parser.
I haven't used the program myself, but it seems quite interesting!
A: Or how about logging to a queue? That way you can switch out pollers whenever you like to log to different things. It makes things like rolling over and archiving log files very easy. It's also nice because you can add pollers that log to different things, for example:
*
*a poller that looks for error messages and posts them to your FogBugz account
*a poller that looks for access violations ('x tried to access /foo/y/bar.html') to a 'hacking attempts' file
*etc.
A: Database - since you mentioned multiple threads. Synchronization as well as filtered retrieval are the reasons for my answer.
See if you have a performance problem before deciding to switch to files
"Knuth: Premature optimization is the root of all evil" I didn't get any further in that book... :)
A: There are ways you can work around the limitations of file logging.
You can always start each log entry with a thread id of some kind, and grep out the individual thread ids. Or a different log file for each thread.
I've logged to database in the past, in a separate thread at a lower priority. I must say, queryability is very valuable when you're trying to figure out what went wrong.
A: How about logging to a database file, say a SQLite database? I think it can handle multi-threaded writes - although that may also have its own performance overheads.
A: I think it depends greatly on what you are doing with the log files afterwards.
Of the two operations writing to the log file will be faster - especially as you are suggesting writing to a database on another server.
However if you are then trying to process and search the log files on a regular basis then the best place to do this would be a database.
If you use a logging framework like log4net they often provide simple config file based ways of redirecting input to file or database.
A: I like Gaius' answer. Put all the log statements in a threadsafe queue and then process them from there. For DB you could batch them up, say 100 log statements in one batch and for file you could just stream them into the file as they come into the queue.
File or Db? As many others say; it depends on what you need the log file for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Set up PowerShell Script for Automatic Execution I have a few lines of PowerShell code that I would like to use as an automated script. The way I would like it to be able to work is to be able to call it using one of the following options:
*
*One command line that opens PowerShell, executes script and closes PowerShell (this would be used for a global build-routine)
*A file that I can double-click to run the above (I would use this method when manually testing components of my build process)
I have been going through PowerShell documentation online, and although I can find lots of scripts, I have been unable to find instructions on how to do what I need. Thanks for the help.
A: Save your script as a .ps1 file and launch it using powershell.exe, like this:
powershell.exe .\foo.ps1
Make sure you specify the full path to the script, and make sure you have set your execution policy level to at least "RemoteSigned" so that unsigned local scripts can be run.
A: Run Script Automatically From Another Script (e.g. Batch File)
As Matt Hamilton suggested, simply create your PowerShell .ps1 script and call it using:
PowerShell C:\Path\To\YourPowerShellScript.ps1
or if your batch file's working directory is the same directory that the PowerShell script is in, you can use a relative path:
PowerShell .\YourPowerShellScript.ps1
And before this will work you will need to set the PC's Execution Policy, which I show how to do down below.
Run Script Manually Method 1
You can see my blog post for more information, but essentially create your PowerShell .ps1 script file to do what you want, and then create a .cmd batch file in the same directory and use the following for the file's contents:
@ECHO OFF
SET ThisScriptsDirectory=%~dp0
SET PowerShellScriptPath=%ThisScriptsDirectory%MyPowerShellScript.ps1
PowerShell -NoProfile -ExecutionPolicy Bypass -Command "& '%PowerShellScriptPath%'"
Replacing MyPowerShellScript.ps1 on the 3rd line with the file name of your PowerShell script.
This will allow you to simply double click the batch file to run your PowerShell script, and will avoid you having to change your PowerShell Execution Policy.
My blog post also shows how to run the PowerShell script as an admin if that is something you need to do.
Run Script Manually Method 2
Alternatively, if you don't want to create a batch file for each of your PowerShell scripts, you can change the default PowerShell script behavior from Edit to Run, allowing you to double-click your .ps1 files to run them.
There is an additional registry setting that you will want to modify so that you can run scripts whose file path contains spaces. I show how to do both of these things on this blog post.
With this method however, you will first need to set your execution policy to allow scripts to be ran. You only need to do this once per PC and it can be done by running this line in a PowerShell command prompt.
Start-Process PowerShell -ArgumentList 'Set-ExecutionPolicy RemoteSigned -Force' -Verb RunAs
Set-ExecutionPolicy RemoteSigned -Force is the command that actually changes the execution policy; this sets it to RemoteSigned, so you can change that to something else if you need. Also, this line will automatically run PowerShell as an admin for you, which is required in order to change the execution policy.
A: Source for Matt's answer.
I can get it to run by double-clicking a file by creating a batch file with the following in it:
C:\WINDOWS\system32\windowspowershell\v1.0\powershell.exe LocationOfPS1File
A: From http://blogs.msdn.com/b/jaybaz_ms/archive/2007/04/26/powershell-polyglot.aspx
If you're willing to sully your beautiful PowerShell script with a little CMD, you can use a PowerShell-CMD polyglot trick. Save your PowerShell script as a .CMD file, and put this line at the top:
@PowerShell -ExecutionPolicy Bypass -Command Invoke-Expression $('$args=@(^&{$args} %*);'+[String]::Join(';',(Get-Content '%~f0') -notmatch '^^@PowerShell.*EOF$')) & goto :EOF
If you need to support quoted arguments, there's a longer version, which also allows comments. (note the unusual CMD commenting trick of double @).
@@:: This prolog allows a PowerShell script to be embedded in a .CMD file.
@@:: Any non-PowerShell content must be preceded by "@@"
@@setlocal
@@set POWERSHELL_BAT_ARGS=%*
@@if defined POWERSHELL_BAT_ARGS set POWERSHELL_BAT_ARGS=%POWERSHELL_BAT_ARGS:"=\"%
@@PowerShell -ExecutionPolicy Bypass -Command Invoke-Expression $('$args=@(^&{$args} %POWERSHELL_BAT_ARGS%);'+[String]::Join(';',$((Get-Content '%~f0') -notmatch '^^@@'))) & goto :EOF
A: You can use this command:
powershell.exe -File c:\scriptPath\Script.ps1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: How do you get the filename of a tempfile to use in Linux? Let's say I'm creating a program in C that needs to use a tempfile. Creating an ad hoc tempfile in /tmp is probably not a good idea. Is there a function or OS call to supply me with a tempfile name so that I can begin to write and read from it?
A: The question is how to generate a temporary file name. tmpfile does not provide the caller with a name at all - it returns only a file handle. mkstemp returns a file descriptor, but it also writes the generated name back into the template buffer you pass in, so the name is available there.
A: @garethm:
I believe that the function you're looking for is called tmpnam.
You should definitely not use tmpnam. It suffers from the race condition problem I mentioned in my answer: Between determining the name and opening it, another program may create the file or a symlink to it, which is a huge security hole.
The tmpnam man page specifically says not to use it, but to use mkstemp or tmpfile instead.
A: You can use the mkstemp(3) function for this purpose. Another alternative is the tmpfile(3) function.
Which one of them you choose depends on whether you want the file to be opened as a C library file stream (which tmpfile does), or as a direct file descriptor (mkstemp). The tmpfile function also deletes the file automatically when your program finishes.
The advantage of using these functions is that they avoid race conditions between determining the unique filename and creating the file -- so that two programs won't try to create the same file at the same time, for example.
See the man pages for both functions for more details.
A: Absolutely: man mkstemp.
The man page has example usage.
A: Not sure about anything in a C lib, but you can do this at the shell with mktemp.
A: You should use the mkstemp() as this is the recommended function. It returns a file descriptor, and it also rewrites the template you pass in, so you can get the name either from the template or from the descriptor:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

char tmpl[] = "hdrXXXXXX"; /* must be a writable buffer, not a string literal */
int fd = mkstemp(tmpl);    /* tmpl now holds the generated name */
char path[64], result[PATH_MAX + 1];
/* Read out the link to our file descriptor. */
sprintf(path, "/proc/self/fd/%d", fd);
memset(result, 0, sizeof(result));
readlink(path, result, sizeof(result) - 1);
/* Print the result. */
printf("%s\n", result);
A: usually there's no need to actually make a named file; instead use the file descriptor path,
#include <limits.h>
#include <stdio.h>

FILE *tmp = tmpfile();
char path[PATH_MAX + 1] = {0};
sprintf(path, "/dev/fd/%d", fileno(tmp));
printf("%s\n", path);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: WinForms databinding and foreign key relationships I'm developing a WinForms application (.Net 3.5, no WPF) where I want to be able to display foreign key lookups in a databound DataGridView.
An example of the sort of relationship is that I have a table of OrderLines. Orderlines have a foreign key relationship to Products and Products in turn have a foreign key relationship to ProductTypes.
I'd like to have a databound DataGridView where each row represents an orderline, displaying the line's product and producttype.
Users can add or edit orderlines direct to the grid and choose the product for the order line from a comboBoxColumn - this should then update the producttype column, showing the producttype for the selected product, in the same row.
The closest to a good fit that I've found so far is to introduce a domain object representing an orderline then bind the DataGridView to a collection of these orderlines. I then add properties to the orderline object that expose the product and the producttype, and raise relevant notifypropertychanged events to keep everything up to date. In my orderline repository I can then wire up the mappings between this orderline object and the three tables in my database.
This works for the databinding side of things, but having to hand code all that OR-mapping in the repository seems bad. I thought nHibernate would be able to help with this wiring up but am struggling with the mappings through all the foreign keys - they seem to work ok (the foreignkey lookup for an orderline's product creates the correct product object based on the foreign key) until I try to do the databinding, I can't get the databound id columns to update my product or producttype objects.
Is my general approach even in the right ballpark? If it is, what is a good solution to the mapping problem?
Or, is there a better solution to databinding rows including foreign key lookups that I haven't even considered?
A: I think the problem you're having is that when you are binding to a grid, it is not enough to support INotifyPropertyChanged, but you have to fire the ListChanged events in your IBindingList implementation and make sure that you override and return true for the SupportsChangeNotification property. If you don't return true for this, the grid won't look for it to know if the data has changed.
In .NET 2.0+, you can create a generic collection using the BindingList class, this will take care of most of the nastiness (just don't forget to override and return true for the SupportsChangeNotification property).
If the class you use for data binding has a property that is a collection (such as IBindingList or BindingList), then you can bind the foreign key grid to that property directly. When you configure the bindings in the Forms designer, just select the collection property as the data source for the grid. It should "just work". The only sneaky part is making sure that you handle empty or null collections the right way.
A: welcome to StackOverflow :)
Normally what you would do is base the information in the drop down on two values: ValueMember and DisplayMember.
The ValueMember is the source of the control's actual value (this will be the foreign key value stored in the order line); the DisplayMember is the value that is displayed to the user instead (for example, the product name).
Is there any particular reason you cannot just return all the data required and set these properties?
A: Here's a good "How Do I" video that demonstrates data binding:
http://windowsclient.net/learn/video.aspx?v=52579
A: My original question obviously wasn't clear, sorry about that.
The problem wasn't with databinding to a DataGridView in general, or with the implementation of a DataGridViewComboBoxColumn - as the people who have answered already rightly say, that is well documented on the web.
The problem I've been trying to solve is with the refresh of properties that are drilling down through relationships.
In my orders example, when I change the value of the "Product" column, the "Product Type" column is not being updated - even though in the code I am setting the property and firing the NotifyPropertyChanged event. (In debug I go to all the right places)
After a lot of poking around I realised that this was not even working when I directly set the "Product Type" property of the datasource, rather than setting it in the "Product" setter.
The other thing that I believe has me back on the right track is that when I provide a mocked data access layer, created in the main form, everything works fine.
Also, when I copy the IList made by nHibernate to a IBindingList - everything again appears fine.
So the problem is I think with threading and the NotifyPropertyChanged events being lost when using certain datasources, in certain ways (wish I could be more definitive than that!)
I'm going to keep researching better ways of resolving this than copying the IList to the IBindingList - maybe I need to learn about thread marshalling.
Edit
I've now developed a solution that solves the issue and think I understand what was confusing me - it appears that anything but basic property databinding doesn't play nicely for lists that aren't derived from BindingList - as soon as I was trying to databind to properties that fired chained NotifyPropertyChanged events, things went haywire and my events got lost.
The data access solution I have now is using a variation of the Rob Conery IRepository pattern, returning my collections to be bound as a custom class I made, a SortableBindingLazyList that derives from BindingList, implements the Sort Core methods and also stores its internal list as a query, delaying the list materialisation.
A: Well, I don't know whether it's supported by the DataGridView, but when you're doing regular WinForms databinding (say, to a regular TextBox) you can use property paths to navigate through object relationships.
Something like this:
myTextBox.DataBindings.Add("Text", anOrderLine, "OrderedPart.PartNumber");
Would be worth seeing if this works in your situation too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to catch SQLServer timeout exceptions I need to specifically catch SQL server timeout exceptions so that they can be handled differently. I know I could catch the SqlException and then check if the message string Contains "Timeout" but was wondering if there is a better way to do it?
try
{
//some code
}
catch (SqlException ex)
{
if (ex.Message.Contains("Timeout"))
{
//handle timeout
}
else
{
throw;
}
}
A: here: http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.adonet/2006-10/msg00064.html
You can read also that Thomas Weingartner wrote:
Timeout: SqlException.Number == -2 (This is an ADO.NET error code)
General Network Error: SqlException.Number == 11
Deadlock: SqlException.Number == 1205 (This is an SQL Server error code)
...
We handle the "General Network Error" as a timeout exception too. It only occurs under rare circumstances e.g. when your update/insert/delete query will raise a long running trigger.
A: Updated for c# 6:
try
{
// some code
}
catch (SqlException ex) when (ex.Number == -2) // -2 is a sql timeout
{
// handle timeout
}
Very simple and nice to look at!!
A: To check for a timeout, I believe you check the value of ex.Number. If it is -2, then you have a timeout situation.
-2 is the error code for timeout, returned from DBNETLIB, the MDAC driver for SQL Server. This can be seen by downloading Reflector, and looking under System.Data.SqlClient.TdsEnums for TIMEOUT_EXPIRED.
Your code would read:
if (ex.Number == -2)
{
//handle timeout
}
Code to demonstrate failure:
try
{
SqlConnection sql = new SqlConnection(@"Network Library=DBMSSOCN;Data Source=YourServer,1433;Initial Catalog=YourDB;Integrated Security=SSPI;");
sql.Open();
SqlCommand cmd = sql.CreateCommand();
cmd.CommandText = "DECLARE @i int WHILE EXISTS (SELECT 1 from sysobjects) BEGIN SELECT @i = 1 END";
cmd.ExecuteNonQuery(); // This line will timeout.
cmd.Dispose();
sql.Close();
}
catch (SqlException ex)
{
if (ex.Number == -2) {
Console.WriteLine ("Timeout occurred");
}
}
A: What's the value of the SqlException.ErrorCode property? Can you work with that?
When having timeouts, it may be worth checking the code for -2146232060.
I would set this up as a static const in your data code.
A: I am not sure, but when there is an execute timeout or command timeout, the client sends an "ABORT" to SQL Server and then simply abandons the query processing. No transaction is rolled back and no locks are released. To solve this problem I removed the transaction from the stored procedure and used a SQL transaction in my .NET code to manage the SqlException.
A: When a client sends an ABORT, no transactions are rolled back. To avoid this behavior we have to use SET XACT_ABORT ON:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-ver15
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
} |
Q: Free Wavetable Synthesizer? I need to implement a wavetable synthesizer in an ARM Cortex-M3 core. I'm looking for any code or tools to help me get started.
I'm aware of this AVR implementation. I actually converted it to a PIC a while back. Now I am looking for something similar, but a little better sounding.
ANSI C code would be great. Any code snippets (C or C++), samples, tools, or just general information would be greatly appreciated.
Thanks.
A: The Synthesis Toolkit (STK) is excellent, but it is C++ only:
http://ccrma.stanford.edu/software/stk/
You may be able to extract the wavetable synthesizer code from the STK though.
A: Two open-source wavetable synthesizers are FluidSynth and TiMidity.
A: Any ARM synth, the best ones, can be changed to wavescanner in less than a day. Scanning the wave from files or generating them mathematically is nearly the same thing audio wise, WT provides massive banks of waveforms at zero processing cost, you need the waves, the WT oscillator code itself is 20 lines. so change your waveform knob from 3 to 100 to indicate which WAV you are reading, use a ramp/counter to read the WAV files(as arrays). WT fixed.
From 7 years of Synth experience, i'd recommend to change 20 lines of the oscillator function of your favorite synth to adapt it to read wave arrays. The WT only uses 20 lines of logic, the rest of the synthesizer is more important: LFO's, Filters, input parameters, preset memory... Use your favorite synth instead and find a WT wave library as WAV files and folders, and replace your fav synth oscillators with WT functions, it will sound almost the same, only lower processing costs.
A synth normally uses Sin, Sqr, Saw, Antialiased OSC functions for the wave...
A wavetable synth uses about 20 lines of code at its base, and 10s/20s/100s of waves, each wave ideally sampled at every octave. If you can get a wavetable sound library, the synth just loops and pitch-shifts the sounds, and pro synths can also keep multiple octave recordings and mix between the octaves.
WTfunction =
*
*load WAV files into N arrays
*change waveform = select waveform array from WAV list
*read waveform array at desired Hz
wavescanner function =
*
*crossfade between 2 waves and assign xfade to LFO, i.e. sine and xfade.
The envelope, filter, amplitude, all other functions are independent from the wave generation function in all synths.
Remember that the most powerful psychoacoustic tool for synthesizers is deviation from the digital tone of the notes - it's called unison detune. The sonic character of synthesizers mostly comes from chorus and unison detune.
WT's are either single periods of waves or, in more advanced synths, longer sections. The single-period stuff is super easy to write into code. The advanced WT's are sampled per octave with waves lasting N periods, even 2-3 seconds, i.e. piano, and that means they change sound quality through the octaves, so the complex WT's are crossfaded every octave with multiple octave recordings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Remote debugging across domains I have two machines in two different domains. On both I have VS 2005 installed. I want to remote debug between them. Without authentication it is possible, but I want to debug managed code. I don't want to debug directly on that machine since it is a really crappy machine.
When I try to attach with debugger I get message "The trust relationship between this workstation and primary domain failed." Any idea how to overcome this ? I tried tricks with adding same local username on both machines but with no luck.
EDIT: I have same local users on both machines. I started both VS2005 and Debugging monitor with RunAs using local users. I turned Windows Auditing on debug machine and I see that local user from VS2005 machine is trying to logon. But he fails with error 0xC000018D (ERROR_TRUSTED_RELATIONSHIP_FAILURE)
A: Gregg Miskelly has a blog post on this. You might get it to work if both local accounts have the same user name and password. You might also try dropping your good box from its domain so that you are going from a workgroup to a domain rather than domain to domain.
A: I seem to remember that I have sometimes found it useful to use RunAs when you run msvcmon (or whatever it's called this week - the remote debugging stub anyway), to force it to start as the user which you have set up to be the same on both machines.
I would guess that on the machine you're running VS on, you will also need to log in as the local user rather than a domain user (or start VS with RunAs).
I have never understood why this needed to be so hard, given that unmanaged debugging is so much easier, and must expose every security hole that managed debugging could.
A: The blog post wasn't totally clear that this would work, but I was able to run Visual Studio as my domain account and still debug a process on a machine that was not on a domain.
I have a physical development machine PHYSICAL on a Active Directory domain DOMAIN. I'm logged in and running Visual Studio as DOMAIN\employee.
I have a virtual machine VIRTUAL that is not attached to an Active Directory domain at all. This is the machine running the process I want to debug.
Like the blog post says, create local accounts PHYSICAL\employee (on PHYSICAL) and VIRTUAL\employee (on VIRTUAL). They both must be Administrators and have the same password as DOMAIN\employee.
The remote debugger and the process to debug must be run on VIRTUAL while logged in as VIRTUAL\employee. Then on PHYSICAL while logged in as DOMAIN\employee I can use "Attach to Process..." and connect to VIRTUAL to get a process list.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I make the manifest of a .net assembly private? What should I do if I want to release a .net assembly but wish to keep its internals detailed in the manifest private (from a utility such as ildasm.exe) ?
A: I think what you're talking about is "obfuscation".
There are lots of articles about it on the net:
http://en.wikipedia.org/wiki/Obfuscation
The "standard" tool for obfuscation on .NET is by Preemptive Solutions:
http://www.preemptive.com/obfuscator.html
They have a community edition that ships with Visual Studio which you can use.
You mentioned ILDasm, have you looked at the .NET Reflector?
http://aisto.com/roeder/dotnet/
It gives you an even better idea as to what people can see if you release a manifest!
A: The CLR cannot directly load modules that contain no manifest. So you can't make an assembly completely private unless you also want to make it unloadable ;)
You can however, as Mark noted above, use obfuscation tools to hide the parts you would like to keep truly internal.
It's too bad the internal keyword doesn't exclude that metadata
EDIT: it looks like this question is highly related
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to aggregate data from SQL Server 2005 I have about 150 000 rows of data written to a database every day. These rows represent outgoing articles, for example. Now I need to show a graph using SSRS that shows the average number of articles per day over time. I also need to have information about the actual number of articles from yesterday.
The idea is to have an aggregated view of all our transactions and have something that can indicate that something is wrong (that we, for example, send out 20% fewer articles than average).
My idea is to have yesterday's data moved into SSAS every night and there store the aggregated value of the number of transactions and the actual number of transactions from yesterday's data. Using SSAS would hopefully speed up the reports.
Do you think this is the right idea? Should I skip SSAS and have reports straight on the raw data? I know how use reporting services on raw data using standard SQL queries but how would this change when querying SSAS? I don't know SSAS - where do I start ..?
A: The neat thing with SSAS is that you can get those indicators that you talk about quite easily either by creating calculated measures or by using KPIs.
I started with Delivering Business Intelligence with Microsoft SQL Server 2005. It had some good introduction, but unfortunately it's too verbose when it comes to the details. But if you want to understand SSAS, OLAP and reporting using this framework it's a good start.
Mosha Pasumansky has a blog on SSAS and MDX with great links.
Other than that I would recommend Microsofts Online books.
A: Are you sure you aren't mixing up SSAS (Analysis Services) and SSIS (integration services)?
SSAS is not an ETL, it is an OLAP tool.
SSIS is an ETL tool.
I agree with everything that Rowan said. I'm just confused by the terms.
A: SSAS is an ETL tool. Basically you get data from somewhere (your outgoing articles), do something to it (aggregate), and put it somewhere else (your aggregates table, data warehouse, etc). Check the link for details.
You probably won't be keeping all of the rows in the DB indefinitely, and if you want to be able to report on longer trends you in any case need to do some kind of aggregation of historical data. So making the reports use this historical data store as their source makes sense. You can then use it to do all kinds of fancy reporting.
TL;DR: Define your aggregated history table with your future reporting needs in mind. Use the SSAS to populate the table and refresh it from the daily updates. Report from that table. Further reading: Star Schemas and data warehousing.
A: @Sergio and @Rowan
Yes, we're not talking about loading and transforming data into the database (like a SSIS tool would do). That's solved using our integration platform.
A: @Riri maybe SSAS is overkill for the situation you presented. If you only need to daily populate sumarization tables, you can accomplish it by creating a regular JOB in SQL Server and doing it in a regular T-SQL script.
I've used this approach for several years in a daily process to calculate business indicators from about 9GB of new data per day. It works, it's fast, it's simple and it uses a technology you're already used to. If your daily process gets more complicated (it needs to read from files, use FTP, send emails) you can move to a SSIS package (or any other ETL tool you like), but I cannot recommend using SSAS unless you need to provide OLAP capabilities to your users.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Suppress NTLM dialog box after unauthorized request In a recent sharepoint project, I implemented an authentication webpart which should replace the NTLM authentication dialog box. It works fine as long as the user provides valid credentials. Whenever the user provides invalid credentials, the NTLM dialog box pops up in Internet Explorer.
My Javascript code which does the authentication via XmlHttpRequest looks like this:
function Login() {
var request = GetRequest(); // retrieves XmlHttpRequest
request.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 401) { // request finished, unauthorized -> invalid credentials
// do something to suppress NTLM dialog box...
// already tried location.reload(); and window.location = <url to authentication form>;
}
}
request.open("GET", "http://myServer", false, "domain\\username", "password");
request.send(null);
}
I don't want the NTLM dialog box to be displayed when the user provides invalid credentials. Instead the postback by the login button in the authentication form should be executed. In other words, the browser should not find out about my unauthorized request.
Is there any way to do this via Javascript?
A: Mark's comment is correct; The NTLM auth prompt is triggered by a 401 response code and the presence of NTLM as the first mechanism offered in the WWW-Authenticate header (Ref: The NTLM Authentication Protocol).
I'm not sure if I understand the question description correctly, but I think you are trying to wrap the NTLM authentication for SharePoint, which means you don't have control over the server-side authentication protocol, correct? If you're not able to manipulate the server side to avoid sending a 401 response on failed credentials, then you will not be able to avoid this problem, because it's part of the (client-side) spec:
The XMLHttpRequest Object
If the UA supports HTTP Authentication [RFC2617] it SHOULD consider requests originating from this object to be part of the protection space that includes the accessed URIs and send Authorization headers and handle 401 Unauthorised requests appropriately. If authentication fails, UAs should prompt the users for credentials.
So the spec actually calls for the browser to prompt the user accordingly if any 401 response is received in an XMLHttpRequest, just as if the user had accessed the URL directly. As far as I can tell the only way to really avoid this would be for you to have control over the server side and cause 401 Unauthorized responses to be avoided, as Mark mentioned.
One last thought is that you may be able to get around this using a proxy, such as a separate server-side script on another webserver. That script then takes a user and pass parameter and checks the authentication, so that the user's browser isn't what's making the original HTTP request and therefore isn't receiving the 401 response that's causing the prompt. If you do it this way you can find out from your "proxy" script if it failed, and if so then prompt the user again until it succeeds. On a successful authentication event, you can simply fetch the HTTP request as you are now, since everything works if the credentials are correctly specified.
A: IIRC, the browser pops the auth dialog when the following comes back in the request stream:
*
*Http status of 401
*WWW-Authenticate header
I would guess that you'd need to suppress one or both of those. The easy way to do that is to have a login method that'll take a Base64 username and password (you are using HTTPS, right?) and return 200 with a valid/invalid status. Once the password has been validated, you can use it with XHR.
A: I was able to get this working for all browsers except firefox. See my blog post below from a few years ago. My post is aimed at IE only but with some small code changes it should work in Chrome and safari.
http://steve.thelineberrys.com/ntlm-login-with-anonymous-fallback-2/
EDIT:
The gist of my post is wrapping your JS XML call in a try/catch statement. In IE, Chrome, and Safari, this will suppress the NTLM dialog box. It does not seem to work as expected in Firefox.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Set ASP.net executionTimeout in code / "refresh" request I'll have an ASP.net page that creates some Excel Sheets and sends them to the user. The problem is, sometimes I get Http timeouts, presumably because the Request runs longer than executionTimeout (110 seconds per default).
I just wonder what my options are to prevent this, without wanting to generally increase the executionTimeout in web.config?
In PHP, set_time_limit exists which can be used in a function to extend its life, but I did not see anything like that in C#/ASP.net?
How do you handle long-running functions in ASP.net?
A: If you want to increase the execution timeout for this one request you can set
HttpContext.Current.Server.ScriptTimeout
But you still may have the problem of the client timing out which you can't reliably solve directly from the server. To get around that you could implement a "processing" page (like Rob suggests) that posts back until the response is ready. Or you might want to look into AJAX to do something similar.
A: I've not really had to face this issue too much yet myself, so please keep that in mind.
Is there not any way you can run the process async and specify a callback method to occur once complete, and then keep the page in a "we are processing your request..." loop cycle? You could then open this up to add some nice UI enhancements as well.
Just kinda thinking out loud. That would probably be the sort of thing I would like to do :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Javadoc template generator I have a large codebase without Javadoc, and I want to run a program to write a skeleton with the basic Javadoc information (e.g., for each method's parameter write @param...), so I just have to fill the gaps left.
Anyone know a good solution for this?
Edit:
JAutodoc is what I was looking for. It has Ant tasks, an Eclipse plugin, and uses Velocity for the template definition.
A: You can configure eclipse to show warnings for things that lack javadoc, or have javadoc that does not have all the information, or has wrong information. It can also insert templates for you to fill out.
Not quite the tool you asked for, but probably better because you won't end up with empty skeletons on methods that you missed.
You can achieve this by editing the preference page under Window > Preferences > Java > Compiler > Javadoc for your workspace.
For further information about the items in this screen please follow the link below:
Java Compiler Javadoc Preferences Help
A: Select the method that you want add Javadoc and alt+Shift+j, creates automatically the javadoc comment.
EXAMPLE:
/**
* @param currDate
* @param index
* @return
*/
public static String getAtoBinary(String currDate, int index){
String HourA = "0";
try{
String[] mydate = currDate.split("/");
HourA = mydate[index].substring(1, 2);
}catch(Exception e){
Log.e(TAG, e.getMessage());
}
return HourA;
}
A: The JAutodoc plugin for eclipse does exactly what you need, but with a package granularity :
right click on a package, select "Add javadoc for members..." and the skeleton will be added.
There are numerous interesting options: templates for javadoc, adding a TODO in the header of every file saying "template javadoc, must be filled...", etc.
A: I think auto-generating empty Javadoc is an anti-pattern and should be discouraged; it gives code the appearance of being documented, but just adds noise to the codebase.
I would recommend instead that you configure your code editor to assist on a per-method and per-class basis to use when you actually write the javadoc (one commenter pointed to Eclipse's feature that does this).
A: If you right-click in the source of a file in Eclipse, it has a Javadoc generation option under the source menu.
A: You can also place your cursor on the line above a method you would like to JavaDoc, then type:
/**
and press Enter. This will generate your JavaDoc stub.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: How to work around unsupported unsigned integer field types in MS SQL? Trying to make a MySQL-based application support MS SQL, I ran into the following issue:
I keep MySQL's auto_increment as unsigned integer fields (of various sizes) in order to make use of the full range, as I know there will never be negative values. MS SQL does not support the unsigned attribute on all integer types, so I have to choose between ditching half the value range or creating some workaround.
One very naive approach would be to put some code in the database abstraction code or in a stored procedure that converts between negative values on the db side and values from the larger portion of the unsigned range. This would mess up sorting of course, and also it would not work with the auto-id feature (or would it some way?).
I can't think of a good workaround right now, is there any? Or am I just being fanatic and should simply forget about half the range?
Edit:
@Mike Woodhouse: Yeah, I guess you're right. There's still a voice in my head saying that maybe I could reduce the field's size if I optimize its utilization. But if there's no easy way to do this, it's probably not worth worrying about it.
A: When is the problem likely to become a real issue?
Given current growth rates, how soon do you expect signed integer overflow to happen in the MS SQL version?
Be pessimistic.
How long do you expect the application to live?
Do you still think the factor of 2 difference is something you should worry about?
(I have no idea what the answers are, but I think we should be sure that we really have a problem before searching any harder for a solution)
A: I would recommend using the BIGINT data type as this goes up to 9,223,372,036,854,775,807.
SQL Server does not support an unsigned attribute on its integer types.
A: I would say this.. "How do we normally deal with differences between components?"
Encapsulate what varies..
You need to create an abstraction layer within your data access layer to get it to the point where it doesn't care whether the database is MySQL or MS SQL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you stop the Designer generating code for public properties on a User Control? How do you stop the designer from auto generating code that sets the value for public properties on a user control?
A: Use the DesignerSerializationVisibilityAttribute on the properties that you want to hide from the designer serialization and set the parameter to Hidden.
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public string Name
{
get;
set;
}
A: Add the following attributes to the property in your control:
[Browsable(false), DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
A: A slight change to Erik's answer I am using VS 2013.
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public new string Name {
get;
set;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: How do I deal with quotes ' in SQL I have a database with names in it such as John Doe etc. Unfortunately some of these names contain quotes like Keiran O'Keefe. Now when I try and search for such names as follows:
SELECT * FROM PEOPLE WHERE SURNAME='O'Keefe'
I (understandably) get an error.
How do I prevent this error from occurring? I am using Oracle and PL/SQL.
A: The escape character is ', so you would need to replace the quote with two quotes.
For example,
SELECT * FROM PEOPLE WHERE SURNAME='O'Keefe'
becomes
SELECT * FROM PEOPLE WHERE SURNAME='O''Keefe'
That said, it's probably incorrect to do this yourself. Your language may have a function to escape strings for use in SQL, but an even better option is to use parameters. Usually this works as follows.
Your SQL command would be :
SELECT * FROM PEOPLE WHERE SURNAME=?
Then, when you execute it, you pass in "O'Keefe" as a parameter.
Because the SQL is parsed before the parameter value is set, there's no way for the parameter value to alter the structure of the SQL (and it's even a little faster if you want to run the same statement several times with different parameters).
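The question doesn't say which client language is in use, so as one concrete illustration, here is how the parameterized form looks from Java/JDBC (the connection URL and credentials are placeholders):
import java.sql.*;

public class FindPerson {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:oracle:thin:@//dbhost:1521/service"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT * FROM PEOPLE WHERE SURNAME = ?")) {
            ps.setString(1, "O'Keefe"); // the driver handles the quote; no escaping needed
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("SURNAME"));
                }
            }
        }
    }
}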
I should also point out that, while your example just causes an error, you open yourself up to a lot of other problems by not escaping strings appropriately. See http://en.wikipedia.org/wiki/SQL_injection for a good starting point, or the classic xkcd comic on the subject (Little Bobby Tables).
A: The Oracle 10 solution is:
SELECT * FROM PEOPLE WHERE SURNAME=q'{O'Keefe}'
A: Parameterized queries are your friend, as suggested by Matt.
Command = SELECT * FROM PEOPLE WHERE SURNAME=?
They will protect you from headaches involved with
*
*Strings with quotes
*Querying using dates
*SQL Injection
A: Use of parameterized SQL has other benefits. It reduces CPU (and other resource) overhead in Oracle by reducing the amount of work Oracle needs to parse the statement. If you do not use parameters (we call them bind variables in Oracle), then "select * from foo where bar='cat'" and "select * from foo where bar='dog'" are treated as separate statements, whereas "select * from foo where bar=:b1" is the same statement, meaning things like syntax and the validity of referenced objects do not need to be checked again. There are occasional problems when using bind variables, usually manifesting as not getting the most efficient SQL execution plan, but there are workarounds, and these problems really depend on the predicates you are using, indexing and data skew.
A: Input filtering is usually done at the language level rather than in the database layer.
PHP and .NET both have their respective libraries for escaping SQL statements. Check your language and see what's available.
If your data is trustworthy, then you can just do a string replace to add another ' in front of the ' to escape it. Usually that is enough if there isn't any risk that the input is malicious.
A: I suppose a good question is: what language are you using?
In PHP you would do something like:
$sql = "SELECT * FROM PEOPLE WHERE SURNAME = '" . mysql_real_escape_string("O'Keefe") . "'";
But since you didn't specify the language, I will suggest that you look into an escape-string function (mysql_real_escape_string or its equivalent) in your language.
A: To deal with quotes when you're using Zend Framework, here is the code:
$db = Zend_Db_Table_Abstract::getDefaultAdapter();
$db->quoteInto('your_query_here = ?','your_value_here');
For example:
//SELECT * FROM PEOPLE WHERE SURNAME='O'Keefe' will become
SELECT * FROM PEOPLE WHERE SURNAME='\'O\'Keefe\''
A: Found in under 30s on Google...
Oracle SQL FAQ
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Can someone point me to some guides for WPF I am having trouble finding good guides for WPF.
I have experience in C# and .NET but I don't know anything about WPF except for the regular marketing-ish description of the technology as a whole.
Can anyone point me to a good beginner's tutorial/guide on WPF.
A: Scott Hanselman has blogged extensively about his experience of learning WPF by creating his 'BabySmash' Windows application. All the source code is on CodePlex and he has many blog articles describing his progress.
Initial BabySmash article
Codeplex source
BabySmash website
A: Ok, in terms of reading material, this is the pick of the books out there: Windows Presentation Foundation Unleashed.
For blogs, there are a lot of blogs and articles on WindowsClient.net, and there's an excellent blog all about data binding in WPF by Beatriz Costa. Also take a look at LearnWPF.com and Ask Dr. WPF.
A: Sacha Barber has a great series of articles on WPF for Beginners over at Codeproject that you can check out.
*
*An Introduction to the WPF Layout System
*An introduction into XAML / code and WPF resources
*An introduction into RoutedEvents / RoutedCommands
*An introduction into WPF Dependency Properties
*An introduction into WPF Styles And Templates
A: I would buy a book - the Adam Nathan WPF book is good.
http://www.amazon.com/Windows-Presentation-Foundation-Unleashed-WPF/dp/0672328917
A: Here are a few "How Do I" videos to get you started:
http://windowsclient.net/learn/videos_wpf.aspx
A: Programming WPF by Chris Sells and Ian Griffiths is an excellent way to learn WPF. 5 star rated on Amazon with 50+ reviews. http://www.amazon.com/Programming-WPF-Chris-Sells/dp/0596510373
A: Have a look at the Guided Tour of WPF by Josh Smith. I also really like Adam Nathan's book Windows Presentation Foundation Unleashed.
A: There are some WPF getting started guides here: http://msdn.microsoft.com/en-us/library/ms742119.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How do I get the (x, y) pixel coordinates of the caret in text boxes? I am using jQuery and trying to find a cross browser way to get the pixel coordinates of the caret in <textarea>s and input boxes such that I can place an absolutely positioned div around this location.
Is there some jQuery plugin? Or JavaScript snippet to do just that?
A: I've looked for a textarea caret-coordinates plugin for meteor-autocomplete, so I've evaluated all eight such plugins on GitHub. The winner is, by far, textarea-caret-position from Component.
Features
*
*pixel precision
*no dependencies whatsoever
*browser compatibility: Chrome, Safari, Firefox (despite two bugs it has), IE9+; may work but not tested in Opera, IE8 or older
*supports any font family and size, as well as text-transforms
*the text area can have arbitrary padding or borders
*not confused by horizontal or vertical scrollbars in the textarea
*supports hard returns, tabs (except on IE) and consecutive spaces in the text
*correct position on lines longer than the columns in the text area
*no "ghost" position in the empty space at the end of a line when wrapping long words
Here's a demo - http://jsfiddle.net/dandv/aFPA7/
How it works
A mirror <div> is created off-screen and styled exactly like the <textarea>. Then, the text of the textarea up to the caret is copied into the div and a <span> is inserted right after it. Then, the text content of the span is set to the remainder of the text in the textarea, in order to faithfully reproduce the wrapping in the faux div.
This is the only method guaranteed to handle all the edge cases pertaining to wrapping long lines. It's also used by GitHub to determine the position of its @ user dropdown.
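For illustration, here is a stripped-down sketch of that mirror technique; it copies only a handful of styles and assumes window.getComputedStyle, so it is nowhere near as robust as the plugin itself:
function caretCoordinates(textarea) {
    // Build an off-screen mirror styled like the textarea.
    var mirror = document.createElement('div');
    var style = window.getComputedStyle(textarea);
    ['fontFamily', 'fontSize', 'fontWeight', 'lineHeight',
     'paddingTop', 'paddingRight', 'paddingBottom', 'paddingLeft',
     'width'].forEach(function (prop) {
        mirror.style[prop] = style[prop];
    });
    mirror.style.position = 'absolute';
    mirror.style.visibility = 'hidden';
    mirror.style.whiteSpace = 'pre-wrap';
    mirror.style.wordWrap = 'break-word';

    // Text up to the caret, then a marker span carrying the remainder
    // so the word wrapping matches the real textarea.
    var pos = textarea.selectionStart;
    mirror.textContent = textarea.value.substring(0, pos);
    var marker = document.createElement('span');
    marker.textContent = textarea.value.substring(pos) || '.';
    mirror.appendChild(marker);

    document.body.appendChild(mirror);
    var coords = { top: marker.offsetTop, left: marker.offsetLeft };
    document.body.removeChild(mirror);
    return coords;
}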
A: Note: this answer describes how to get the character co-ordinates of the text-cursor/caret. To find the pixel-co-ordinates, you'll need to extend this further.
The first thing to remember is that the cursor can be in three states
*
*a regular insertion cursor at a specific position
*a text selection that has a certain bounded area
*not active: the textarea does not have focus and has not been used.
The IE model uses the document.selection object; from this we can get a TextRange object, which gives us access to the selection and thus the cursor position(s).
The FF/Opera model uses the handy variables [input].selectionStart and selectionEnd.
Both models represent a regular active cursor as a zero-width selection, with the left bound being the cursor position.
If the input field does not have focus, you may find that neither is set.
I have had good success with the following code to insert a piece of text at the current cursor location, also replacing the current selection, if present.
Depending on the exact browser, YMMV.
function insertAtCursor(myField, myValue) {
/* selection model - IE */
if (document.selection) {
myField.focus();
sel = document.selection.createRange();
sel.text = myValue;
}
/* field.selectionstart/end firefox */
else if (myField.selectionStart || myField.selectionStart == '0' ) {
var startPos = myField.selectionStart;
var endPos = myField.selectionEnd;
myField.value = myField.value.substring(0, startPos)
+ myValue
+ myField.value.substring(endPos, myField.value.length);
myField.selectionStart = startPos + myValue.length;
myField.selectionEnd = startPos + myValue.length;
myField.focus();
}
// cursor not active/present
else {
myField.value += myValue;
}
}
Selection object: http://msdn.microsoft.com/en-us/library/ms535869(VS.85).aspx
TextRange object: http://msdn.microsoft.com/en-us/library/ms535872(VS.85).aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: What is the best way to sort a data bound combo box? I have done a bit of research into this and it seems that the only way to sort a data bound combo box is to sort the data source itself (a DataTable in a DataSet in this case).
If that is the case then the question becomes what is the best way to sort a DataTable?
The combo box bindings are set in the designer initialize using
myCombo.DataSource = this.typedDataSet;
myCombo.DataMember = "Table1";
myCombo.DisplayMember = "ColumnB";
myCombo.ValueMember = "ColumnA";
I have tried setting
this.typedDataSet.Table1.DefaultView.Sort = "ColumnB DESC";
But that makes no difference, I have tried setting this in the control constructor, before and after a typedDataSet.Merge call.
A: If you're using a DataTable, you can use the Sort property of its DefaultView (a DataView). For greater flexibility you can use the BindingSource component: BindingSource will be the DataSource of your combo box, and you can then change your data source from a DataTable to a List without changing the DataSource of the combo box.
The BindingSource component serves many purposes. First, it simplifies binding controls on a form to data by providing currency management, change notification, and other services between Windows Forms controls and data sources.
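A short sketch of that approach, reusing the binding names from the question:
BindingSource binding = new BindingSource();
binding.DataSource = this.typedDataSet;   // the DataSet from the question
binding.DataMember = "Table1";            // resolves to Table1's DataView
binding.Sort = "ColumnB DESC";            // forwarded to the underlying view

myCombo.DataSource = binding;
myCombo.DisplayMember = "ColumnB";
myCombo.ValueMember = "ColumnA";
Later, binding.DataSource can be swapped for a List without touching the combo box bindings.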
A: You can actually sort the default view on a DataTable:
myDataTable.DefaultView.Sort = "Field1, Field2 DESC";
That'll sort any rows you retrieve directly from the DataTable.
A: Make sure you bind the DefaultView to the Controls Datasource, after you set the Sort property, and not the table:
myCombo.DataSource = this.typedDataSet.Tables["Table1"].DefaultView;
myCombo.DisplayMember = "ColumnB";
myCombo.ValueMember = "ColumnA";
A: Josh Smith has a blog post that answers this question, and does it all in XAML.
A: Does the data need to be in a DataTable?
Using a SortedList and binding that to a combo box would be a simpler way.
If you need to use a DataTable, you can construct a DataView over it and pass in filter and sort expressions (note that DataTable.Select returns an array of DataRows, not a DataView):
DataView dv = new DataView(myDataTable, "filter expression", "sort expression", DataViewRowState.CurrentRows);
A: The simplest way to sort a ComboBox is to use the ComboBox.Sorted property. However, that won't work if you're using data binding. In that case you'll have to sort the data source itself.
You can use either a SortedList or SortedDictionary (both sort by the Key), or a DataView.
The DataView has a Sort property that accepts a sort expression (string) for example:
view.Sort = "State, ZipCode DESC";
In the above example both State and ZipCode are columns in the DataTable used to create the DataView.
A: I realize that you've already chosen your answer to this question, but I would have suggested placing a DataView on your form, binding it to your DataSet/DataTable, and setting the sort on the View in the designer. You then bind your combobox to the DataView, rather than the DataSet/DataTable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Why are stateless session beans single threaded? As per my understanding, stateless session beans are used to code the business logic. They cannot store data in their instance variables because their instances are shared by multiple requests, so they seem to be like singleton classes. However, the difference is that the container creates (or reuses from a pool) a separate instance of a stateless session bean for every request.
After googling, I could find the reasoning that the Java EE specification says they are supposed to be single threaded. But I can't find the reason why they are specified to be SINGLE THREADED.
A: SLSBs are single threaded because a transaction context and security principal are associated with a bean instance when it is called. These beans are pooled and, unless the max pool size is reached, calls are processed in separate threads (vendor dependent).
If SLSBs had been designed to be thread-safe, every call would have looked like a servlet doGet/doPost, with the request info carrying the transaction context, security context and so on. At least this way the code looks clean (developer dependent).
A: The primary reason stateless session beans are single threaded is to make them highly scalable for the container. The container can make a lot of simplifying assumptions about the runtime environment. A second reason is to make life easier for the developer because the developer doesn't have to worry about any synchronization or re-entrancy in his business logic because the bean will never be called in another thread context.
I remember the reasoning being discussed in the reviews of the original EJB 1.0 specification. I would look at the goals section of the specification. See http://java.sun.com/products/ejb/docs.html for the list of specifications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How to select an SQL database? We're living in a golden age of databases, with numerous high quality commercial and free databases. This is great, but the downside is there's not a simple obvious choice for someone who needs a database for his next project.
*
*What are the constraints/criteria you use for selecting a database?
*How well do the various databases you've used meet those constraints/criteria?
*What special features do the databases have?
*Which databases do you feel comfortable recommending to others?
etc...
A: I would think first about what the system requirements are for data access, data security, scalability, performance, disconnected scenarios, data transformation and data sizing.
On the other hand, also consider the experience and background of developers, operators and platform administrators.
You should also think on what constraints you have regarding programming languages, operating systems, memory footprint, network bandwidth, hardware.
Last, but not least, you have to think about business issues like budget for licences, support, operation.
After all those considerations you should end up with just a couple of options and the selection should be easier.
In other words, select the technology that suits the best the constraints and needs of your organization and project.
I certainly think that you are right on saying that it is not an obvious choice given the wide number of alternatives, but this is the only way I think you can narrow them to the ones that are really feasible for your project.
A: My selection criteria (mainly programming centric):
*
*Maintenance: How are updates/hotfixes installed?
*Transaction control: How it is implemented
*Are Stored Procedures supported?
*Can you use exception handling in Stored Procedures?
*Costs
*As a benefit: Can you use recursion on Stored Procedures? (E.g. in SQL Server 2000 the recursion stops after 32 passes IIRC)
A: For most people in a corporate environment the choice comes down to "the one we have".
Since you seem to be fortunate enough to have a choice, I'll take a quick run through the questions and maybe pose a few more at the end.
The biggest criterion may be cost. Do you want/are you prepared to pay for your DBMS platform? If not, then Oracle, MS SQL Server, Sybase and others are probably out, although if you're not building a commercial app then there may be some wiggle room. Also, platform - can you run the software on your hardware?
Other dimensions for consideration might include expected number of concurrent connections, transactional vs mostly reads, size, availability and I guess lots of others.
"Special features" are, in the main, to be avoided - in my cynical world-view they're intended to lock you into a platform. So something like Oracle's PL/SQL is a feature that, while powerful (and likely to mean the need for extra CPU power at more licensing cost) is not portable. If you expect extremely high volumes then partitioning may be useful, I suppose.
I have worked with Oracle, MS SQL Server, MySQL, PostgreSQL, SQLite and Sybase, off the top of my head. I'd happily recommend all but Sybase, about which I have some concerns these days (I could easily be wrong, but personally I think the money could be better spent elsewhere), though not all for the same applications.
Ideally, I like to have the warm feeling that it doesn't really matter what DB platform I'm using because I can port easily. With a good abstraction layer between data and business logic, I should be able to develop locally against, say, the excellent SQLite and implement painlessly on, for example, Postgres. With something like ActiveRecord from Rails coupled with a little awareness of things like differences in reserved words, this is almost completely cost-free.
A: Surely the most compelling factor is the expertise of you or your team... or the pool of resources you are likely to hire in the future. I would tend to go with the grain most of the time, using MySQL in a LAMP team and SQL Server in an MS team, since either of these products is capable of doing everything necessary even in a high-load environment.
The benefits of any other database are going to be marginal compared to the pain of learning how to use it well. The only exception to this, in my opinion, would be in a high-demand environment where:
a. the obvious choice has been tried and is failing
b. the benefits of scaling multiply the marginal benefit to such a degree that it will be worth the cost of using something unexpected.
I would assume the need to hire at least two and preferably three excellent DBAs with long term familiarity with the new database.
And first I would try to hire them for the technology that was failing, because it is more likely to be the way it's used than the technology itself that is causing the problem.
A: The existing answers are great. It's worth bearing in mind that Oracle now has an XE version of its 10g database which is available for free and comes with Application Express, a great web-based development environment.
It is limited: 4 GB of disk, 1 GB of RAM, and it uses only one CPU. This is enough to run smaller systems though, and it can be upgraded easily at a later date if necessary. Oracle can be one of the toughest databases to learn, but it is also one of the best to have on your CV :-)
I think SQL Server from Microsoft also has a 'starter' type database. Don't discount the commercial products - if you are going to bet your company on a database technology I would rather be using a product from Oracle or Microsoft personally. That's not to say there is anything wrong with open source.
Spend a while evaluating them :-)
A: *
*Linux, Web Hosted - MySQL (PostgreSQL maybe)
*Mainstream SME - MS SQL
*Big Iron (banking etc) - Oracle
Thinking about anything other than those three is masturbation - any of the other databases becomes a discussion about niche products to solve particular problems that you probably haven't encountered yet. If you choose anything other than the three above you will -
*
*Struggle to find people to work on the project or keep the database going
*Struggle to motivate your decision without an academic discussion
*Someone will curse you, your ancestors and your lineage a few years down the line - and replace your choice anyway.
Niche databases are not where architectural strides are made - it is technologies like middleware, messaging, cloud services etc where you can afford to (and should) go out on a limb to find good products.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you manage schema upgrades to a production database? This seems to be an overlooked area that could really use some insight. What are your best practices for:
*
*making an upgrade procedure
*backing out in case of errors
*syncing code and database changes
*testing prior to deployment
*mechanics of modifying the table
etc...
A: Liquibase
liquibase.org:
*
*it understands Hibernate definitions.
*it generates better schema-update SQL than Hibernate
*it logs which upgrades have been made to a database
*it handles two-step changes (i.e. delete a column "foo" and then rename a different column to "foo")
*it handles the concept of conditional upgrades
*the developer actually listens to the community (with Hibernate, if you are not in the "in" crowd or you are a newbie, you are basically ignored)
http://www.liquibase.org
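For a flavor of what that looks like, here is a minimal changelog sketch (the ids, author and table names are made up, and a real changelog also declares the XML schema locations):
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <changeSet id="1" author="dev">
    <createTable tableName="person">
      <column name="id" type="bigint" autoIncrement="true">
        <constraints primaryKey="true"/>
      </column>
      <column name="name" type="varchar(255)"/>
    </createTable>
  </changeSet>
  <!-- The two-step change mentioned above: drop "foo", then rename "bar" to "foo". -->
  <changeSet id="2" author="dev">
    <dropColumn tableName="person" columnName="foo"/>
    <renameColumn tableName="person" oldColumnName="bar" newColumnName="foo"/>
  </changeSet>
</databaseChangeLog>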
A: Opinion
The application should never handle a schema update. This is a disaster waiting to happen. Data outlasts applications, and as soon as multiple applications try to work with the same data (the production app plus a reporting app, for example), chances are they will both use the same underlying company libraries... and then both programs decide to do their own DB upgrade... have fun with that mess.
A: In general my rule is: "The application should manage its own schema."
This means schema upgrade scripts are part of any upgrade package for the application and run automatically when the application starts. In case of errors the application fails to start and the upgrade script transaction is not committed. The downside to this is that the application has to have full modification access to the schema (this annoys DBAs).
I've had great success using Hibernate's SchemaUpdate feature to manage the table structures, leaving the upgrade scripts to handle only actual data initialization and the occasional removal of columns (SchemaUpdate doesn't do that).
Regarding testing, since the upgrades are part of the application, testing them becomes part of the test cycle for the application.
Afterthought: Taking on board some of the criticism in other posts here, note the rule says "its own". It only really applies where the application owns the schema, as is generally the case with software sold as a product. If your software shares a database with other software, use other methods.
A: I am a big fan of the Red Gate products that help create SQL packages to update database schemas. The database scripts can be added to source control to help with versioning and rollback.
A: That's a great question. (There is a high chance this is going to end up a normalised-versus-denormalised database debate, which I am not going to start... okay, now for some input.)
Some off-the-top-of-my-head things I have done (I will add more when I have more time or need a break):
Client design - this is where the VB method of inline SQL (even with prepared statements) gets you into trouble. You can spend ages just finding those statements. If you use something like Hibernate and put as much SQL as possible into named queries, you have a single place for most of the SQL (nothing is worse than trying to test SQL that sits inside some IF statement whose "trigger" criteria you never hit in your testing). Prior to using Hibernate (or other ORMs), when I did SQL directly over JDBC or ODBC, I would put all the SQL statements either as public fields of an object (with a naming convention) or in a property file (also with a naming convention for the values, say PREP_STMT_xxxx), and use either reflection or iteration over the values at startup in a) test cases and b) startup of the application. Some RDBMSs allow you to pre-compile prepared statements before execution, so on startup, post login, I would pre-compile the prepared statements to make the application self-testing. Even for hundreds of statements on a good RDBMS that's only a few seconds, and only once. It has saved my butt a lot. On one project the DBAs wouldn't communicate (a different team, in a different country) and the schema seemed to change NIGHTLY, for no reason. And each morning we got a list, on startup, of exactly where it broke the application.
If you need ad hoc functionality, put it in a well-named class (again, a naming convention helps with automated testing) that acts as some sort of factory for your query (i.e. it builds the query). You are going to have to write the equivalent code anyway, so put it somewhere you can test it. You can even write some basic test methods on the same object or in a separate class.
If you can, also try to use stored procedures. They are a bit harder to test, as above. Some databases also don't pre-validate the SQL in stored procs against the schema at compile time, only at run time. Working around that usually involves taking a copy of the schema structure (no data) and creating all the stored procs against this copy (in case the DB team making the changes didn't validate correctly); that way the structure can be checked. As a point of change management, though, stored procs are great: on a change, everyone gets it, especially when the DB changes are a result of business process changes, and all languages (Java, VB, etc.) get the change.
I usually also set up a table called system_setting or similar. In this table we keep a VERSION identifier. This is so that client libraries can connect and validate whether they are valid for this version of the schema. Depending on the changes to your schema, you don't want to allow clients to connect if they can corrupt your schema (i.e. when you don't keep many referential rules in the DB, but on the client instead). It also depends on whether you will have multiple client versions in the wild (which does happen in non-web apps, i.e. someone running the wrong binary), and you may have batch tools etc. as well. Another approach I have used is to define a set of schema-to-operation versions in some sort of property file or, again, in a system_info table. This table is loaded on login and then used by each "manager" (I usually have some sort of client-side API that does most DB work) to validate, for each operation, that it is on the right version. Most operations can thus succeed, but you can also fail (throw some exception) on out-of-date methods, and the failure tells you WHY.
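A sketch of that version handshake (the table layout is illustrative):
CREATE TABLE system_setting (
    name  VARCHAR(64) PRIMARY KEY,
    value VARCHAR(64) NOT NULL
);
INSERT INTO system_setting (name, value) VALUES ('SCHEMA_VERSION', '42');

-- On client login: compare this against the version the client binary expects,
-- and refuse or restrict the connection on a mismatch.
SELECT value FROM system_setting WHERE name = 'SCHEMA_VERSION';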
Managing the change to the schema -> do you update the table or add 1-1 relationships to new tables? I have seen a lot of shops that always access data via a view for this reason; this allows table names, columns etc. to change. I have played with the idea of actually treating views like interfaces in COM: you add a new VIEW for new functionality/versions. Often, what bites you here is that you can have a lot of reports (especially end-user custom reports) that assume table formats. The views let you deploy a new table format while still supporting existing client apps (remember all those pesky ad hoc reports).
Also, you need to write update and rollback scripts. And again: TEST, TEST, TEST...
------------ OKAY - THIS IS A BIT RANDOM DISCUSSION TIME --------------
I actually worked on a large commercial project (i.e. a software shop) where we had the same problem. The architecture was two-tier and they were using a product a bit like PHP, but pre-PHP. Same thing, different name. Anyway, I came in at version 2...
It was costing A LOT OF MONEY to do upgrades. A lot - as in giving away weeks of free consulting time on site.
And it was getting to the point of wanting to either add new features or optimize the code. Some of the existing code used stored procedures, so we had common points where we could manage code, but other areas were this embedded SQL markup in HTML - great for getting to market quickly, but each iteration of new features at least doubled the cost of testing and maintenance. So when we were looking at pulling out the PHP-style code, putting in data layers (this was 2001-2002, pre any ORMs etc.) and adding a lot of new features (customer feedback), we looked at this issue of how to engineer UPGRADES into the system. Which is a big deal, as upgrades cost a lot of money to do correctly. Now, most patterns and all the other stuff people discuss with a degree of energy deal with OO code that is running, but what about the fact that your data has to a) integrate with this logic and b) carry meaning and structure that can change over time? And often, because of the way data works, you end up with a lot of sub-processes/applications in your client's organisation that need that data: ad hoc reporting, complex custom reporting, and batch jobs built for custom data feeds.
With this in mind I started playing with something a bit left of field. It also has a few assumptions: a) data is read much more heavily than it is written; b) updates do happen, but not at bank levels, i.e. one or two a second, say.
The idea was to apply a COM/interface view to how data was accessed by clients over a set of CONCRETE tables (which varied with schema changes). You could create a separate view for each type of operation - update, delete, insert and read. This is important. The views would either map directly to a table, or allow you to trigger off a dummy table that does the real updates or inserts. What I actually wanted was some sort of trappable level of indirection that could still be used by Crystal Reports etc. NOTE - for inserts, updates and deletes you could also use stored procs. And you had a version for each version of the product. That way your version 1.0 had its version of the schema, and if the tables changed, you would still have the version 1.0 VIEWS, but with NEW backend logic to map to the new tables as needed, while version 2.0 views would support new fields etc. This was really just to support ad hoc reporting, which, if you're a BUSINESS person and not a coder, is probably the whole point of why you have the product. (Your product can be crap, but if you have the best reporting in the world you can still win; the reverse is true - your product can be the best feature-wise, but if it's the worst at reporting you can very easily lose.)
Okay, hope some of those ideas help.
A: These are all weighty topics, but here is my recommendation for updating.
You did not specify your platform, but for NANT build environments I use Tarantino. For every database update you are ready to commit, you make a change script (using RedGate or another tool). When you build to production, Tarantino checks if the script has been run on the database (it adds a table to your database to keep track). If not, the script is run. It takes all the manual work (read: human error) out of managing database versions.
A: I've heard good things about iBATIS 3 Schema Migrations System:
User Guide: http://svn.apache.org/repos/asf/ibatis/java/ibatis-3/trunk/doc/en/iBATIS-3-Migrations.pdf
A: As Pat said, use Liquibase - especially when you have several developers with their own dev databases making changes that will become part of the production database.
If there's only one dev, as on one project I'm on now (ha), I just commit the schema changes as SQL text files into a CVS repo, which I check out in batches on the production server when the code changes go in.
But Liquibase is better organized than that!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Get the Directory sizes on a Remote Host with a PHP script I'm looking for something that will show me the size of each folder within my main folder recursively.
This is a LAMP server with a CGI-Bin, so almost any PHP script should work, or anything that will run in the CGI-Bin.
My hosting company does not provide an interface for me to see which folders are consuming the most space. I don't know of any existing tool for this, and a few searches on the Internet came up with no results.
Something implementing graphs (GD/ImageMagick) would be best but not required.
My host supports only Perl in the CGI-BIN.
A: Strange - I came up with many relevant results on Google, and this one is probably the most complete.
The function getDirectorySize will ignore links/shortcuts to files/directories. The function sizeFormat will suffix the size with bytes, KB, MB or GB accordingly.
Code
function getDirectorySize($path)
{
$totalsize = 0;
$totalcount = 0;
$dircount = 0;
if ($handle = opendir ($path))
{
while (false !== ($file = readdir($handle)))
{
$nextpath = $path . '/' . $file;
if ($file != '.' && $file != '..' && !is_link ($nextpath))
{
if (is_dir ($nextpath))
{
$dircount++;
$result = getDirectorySize($nextpath);
$totalsize += $result['size'];
$totalcount += $result['count'];
$dircount += $result['dircount'];
}
elseif (is_file ($nextpath))
{
$totalsize += filesize ($nextpath);
$totalcount++;
}
}
}
closedir ($handle);
}
$total['size'] = $totalsize;
$total['count'] = $totalcount;
$total['dircount'] = $dircount;
return $total;
}
function sizeFormat($size)
{
if($size<1024)
{
return $size." bytes";
}
else if($size<(1024*1024))
{
$size=round($size/1024,1);
return $size." KB";
}
else if($size<(1024*1024*1024))
{
$size=round($size/(1024*1024),1);
return $size." MB";
}
else
{
$size=round($size/(1024*1024*1024),1);
return $size." GB";
}
}
Usage
$path="/httpd/html/pradeep/";
$ar=getDirectorySize($path);
echo "<h4>Details for the path : $path</h4>";
echo "Total size : ".sizeFormat($ar['size'])."<br>";
echo "No. of files : ".$ar['count']."<br>";
echo "No. of directories : ".$ar['dircount']."<br>";
Output
Details for the path : /httpd/html/pradeep/
Total size : 2.9 MB
No. of files : 196
No. of directories : 20
A: If you have shell access you can run the command
$ du -h
or perhaps use this, if PHP is configured to allow execution:
<?php $d = escapeshellcmd(dirname(__FILE__)); echo nl2br(`du -h $d`) ?>
A: number_files_and_size.php
<?php
if (isset($_POST["nivel"])) {
$mostrar_hasta_nivel = $_POST["nivel"];
$comenzar_nivel_inferior = $_POST["comenzar_nivel_inferior"];
// $mostrar_hasta_nivel = 3;
global $nivel_directorio_raiz;
global $nivel_directorio;
$path = dirname(__FILE__);
if ($comenzar_nivel_inferior == "si") {
$path = substr($path, 0, strrpos($path, "/"));
}
$nivel_directorio_raiz = count(explode("/", $path)) - 1;
$numero_fila = 1;
// Start of table
echo "<table border='1' cellpadding='3' cellspacing='0'>";
// Header row
echo "<tr style='font-size: 100%; font-weight: bold;' bgcolor='#e2e2e2'><td></td><td>Path</td><td align='center'>Level</td><td align='right' style='color:#0000ff;'>Files</td><td align='right'>Cum. files</td><td align='right'>Directory</td><td align='right' style='color:#0000ff;'>Size</td><td align='right'>Cum. size</td></tr>";
// Start of data rows
echo "<tr>";
// Function that invokes itself recursively as it walks the root directory ($path)
FileCount($path, $mostrar_hasta_nivel, $nivel_directorio_raiz);
// End of data rows
echo "</tr>";
// End of table
echo "</table>";
echo "<div style='font-size: 120%;'>";
echo "<br>Total ficheros en la ruta <b><em>" . $path . ":</em> " . number_format($count,0,",",".") . "</b><br>";
echo "Tamaño total ficheros: <b>". number_format($acumulado_tamanho, 0,",",".") . " Kb.</b><br>";
echo "</div>";
echo "<div style='min-height: 60px;'></div>";
} else {
?>
<form name="formulario" id="formulario" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>">
<br /><h2>Hosting report by directory (number of files and size)</h2>
<br />Directory levels to show: <input type="text" name="nivel" id="nivel" value="3"><br /><br />
<input type="checkbox" name="comenzar_nivel_inferior" value="si" checked="checked"/> Start one directory level up from the location of this PHP module<br />(<?php echo dirname(__FILE__) ?>)<br /><br />
<input type="submit" name="comenzar" id="comenzar" value="Start process"><br /><br />
</form>
<?php
}
function FileCount($dir, $mostrar_hasta_nivel, $nivel_directorio_raiz){
global $count;
global $count_anterior;
global $suma_tamanho;
global $acumulado_tamanho;
$arr=explode('&',$dir);
foreach($arr as $val){
global $ruta_actual;
if(is_dir($val) && file_exists($val)){
global $total_directorio;
global $numero_fila;
$total_directorio = 0;
$ob=scandir($val);
foreach($ob as $file){
if($file=="."||$file==".."){
continue;
}
$file=$val."/".$file;
if(is_file($file)){
$count++;
$suma_tamanho = $suma_tamanho + filesize($file)/1024;
$acumulado_tamanho = $acumulado_tamanho + filesize($file)/1024;
$total_directorio++;
} elseif(is_dir($file)){
FileCount($file, $mostrar_hasta_nivel, $nivel_directorio_raiz);
}
}
$nivel_directorio = count(explode("/", $val)) - 1;
if ($nivel_directorio > $mostrar_hasta_nivel) {
} else {
$atributo_fila = (($numero_fila%2)==1 ? "background-color:#ffffff;" : "background-color:#f2f2f2;");
echo "<tr style='".$atributo_fila."'><td>".$numero_fila."</td><td>".$val." </td><td align='center'>".$nivel_directorio."</td><td align='right' style='color:#0000ff;'>".number_format(($count - $count_anterior),0,",",".")."</td><td align='right'>".number_format($count,0,",",".")."</td><td align='right'>".number_format($total_directorio,0,",",".")."</td><td align='right' style='color:#0000ff;'>".number_format($suma_tamanho,0,",",".")." Kb.</td><td align='right'>".number_format($acumulado_tamanho,0,",",".")." Kb.</td></tr>";
$count_anterior = $count;
$suma_tamanho = 0;
$numero_fila++;
}
}
}
}
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I make this code to submit a UTF-8 form textarea with jQuery/Ajax work? I am having problems submitting forms which contain UTF-8 strings with Ajax. I am developing a Struts web application which runs in a Tomcat server. This is the environment I set up to work with UTF-8:
*
*I have added the attributes URIEncoding="UTF-8" useBodyEncodingForURI="true" to the Connector tag in Tomcat's conf/server.xml file.
*I have a utf8_general_ci database
*I am using the following filter to ensure my requests and responses are encoded in UTF-8:
package filters;
import java.io.IOException;
import javax.servlet.*;
public class UTF8Filter implements Filter {
public void destroy() {}
public void doFilter(ServletRequest request,ServletResponse response, FilterChain chain)
throws IOException, ServletException {
request.setCharacterEncoding("UTF-8");
response.setContentType("text/html;charset=UTF-8");
chain.doFilter(request, response);
}
public void init(FilterConfig filterConfig) throws ServletException {
}
}
*I use this filter in WEB-INF/web.xml
*I am using the following code for my JSON responses:
public static void populateWithJSON(HttpServletResponse response,JSONObject json)
{
String CONTENT_TYPE="text/x-json;charset=UTF-8";
response.setContentType(CONTENT_TYPE);
response.setHeader("Cache-Control", "no-cache");
try {
response.getWriter().write(json.toString());
} catch (IOException e) {
throw new ApplicationException("Application Exception raised in RetrievedStories", e);
}
}
Everything seems to work fine (content coming from the database is displayed properly, and I am able to submit forms which are stored in UTF-8 in the database). The problem is that I am not able to submit forms with Ajax. I use jQuery, and I thought the problem was the lack of contentType field in the Ajax request. But I was wrong. I have a really simple form to submit comments which contains of an id and a body. The body field can be in different languages such as Spanish, German, or whatever.
If I submit my form with body textarea containing contraseña, Firebug shows me:
Request Headers
*
*Host localhost:8080
*Accept-Charset ISO-8859-1, utf-8;q=0.7;*q=0.7
*Content-Type application/x-www-form-urlencoded; charset UTF-8
If I execute Copy Location with parameters in Firebug, the encoding seems already wrong:
http://localhost:8080/Cerepedia/corporate/postStoryComment.do?&body=contrase%C3%B1a&id=88
This is my jQuery code:
function addComment() {
var comment_body = $("#postCommentForm textarea").val();
var item_id = $("#postCommentForm input:hidden").val();
var url = rooturl+"corporate/postStoryComment.do?";
$.post(url, { id: item_id, body: comment_body } ,
function(data){
/* Do stuff with the answer */
}, "json"); }
A submission of the form with jQuery causes the following error server-side (note that I am using Hibernate):
javax.servlet.ServletException: org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
at org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:520)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:427)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.security.AuthorizationFilter.doFilter(AuthorizationFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.hibernate.HibernateSessionRequestFilter.doFilter(HibernateSessionRequestFilter.java:30)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at filters.UTF8Filter.doFilter(UTF8Filter.java:14)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
Caused by: org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:103)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:91)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:249)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:139)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at com.cerebra.cerepedia.item.dao.ItemDAOHibernate.addComment(ItemDAOHibernate.java:505)
at com.cerebra.cerepedia.item.ItemManagerPOJOImpl.addComment(ItemManagerPOJOImpl.java:164)
at com.cerebra.cerepedia.struts.item.ItemAction.addComment(ItemAction.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:269)
at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:170)
at org.apache.struts.actions.MappingDispatchAction.execute(MappingDispatchAction.java:166)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425)
... 26 more
Caused by: java.sql.BatchUpdateException: Incorrect string value: '\xF1a' for column 'body' at row 1
at com.mysql.jdbc.ServerPreparedStatement.executeBatch(ServerPreparedStatement.java:657)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:1723)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:242)
... 44 more
26-ago-2008 19:54:48 org.apache.catalina.core.StandardWrapperValve invoke
GRAVE: Servlet.service() para servlet action lanzó excepción
java.sql.BatchUpdateException: Incorrect string value: '\xF1a' for column 'body' at row 1
at com.mysql.jdbc.ServerPreparedStatement.executeBatch(ServerPreparedStatement.java:657)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeBatch(NewProxyPreparedStatement.java:1723)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:242)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:139)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at com.cerebra.cerepedia.item.dao.ItemDAOHibernate.addComment(ItemDAOHibernate.java:505)
at com.cerebra.cerepedia.item.ItemManagerPOJOImpl.addComment(ItemManagerPOJOImpl.java:164)
at com.cerebra.cerepedia.struts.item.ItemAction.addComment(ItemAction.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:269)
at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:170)
at org.apache.struts.actions.MappingDispatchAction.execute(MappingDispatchAction.java:166)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.security.AuthorizationFilter.doFilter(AuthorizationFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.hibernate.HibernateSessionRequestFilter.doFilter(HibernateSessionRequestFilter.java:30)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at filters.UTF8Filter.doFilter(UTF8Filter.java:14)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
javax.servlet.ServletException: java.lang.NumberFormatException: null
at org.apache.struts.action.RequestProcessor.processException(RequestProcessor.java:520)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:427)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:449)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.security.AuthorizationFilter.doFilter(AuthorizationFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.hibernate.HibernateSessionRequestFilter.doFilter(HibernateSessionRequestFilter.java:30)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at filters.UTF8Filter.doFilter(UTF8Filter.java:14)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NumberFormatException: null
at java.lang.Long.parseLong(Unknown Source)
at java.lang.Long.valueOf(Unknown Source)
at com.cerebra.cerepedia.struts.item.ItemAction.addComment(ItemAction.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:269)
at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:170)
at org.apache.struts.actions.MappingDispatchAction.execute(MappingDispatchAction.java:166)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425)
... 26 more
26-ago-2008 20:13:25 org.apache.catalina.core.StandardWrapperValve invoke
GRAVE: Servlet.service() para servlet action lanzó excepción
java.lang.NumberFormatException: null
at java.lang.Long.parseLong(Unknown Source)
at java.lang.Long.valueOf(Unknown Source)
at com.cerebra.cerepedia.struts.item.ItemAction.addComment(ItemAction.java:120)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:269)
at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:170)
at org.apache.struts.actions.MappingDispatchAction.execute(MappingDispatchAction.java:166)
at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:425)
at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:228)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:449)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.security.AuthorizationFilter.doFilter(AuthorizationFilter.java:78)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at com.cerebra.cerepedia.hibernate.HibernateSessionRequestFilter.doFilter(HibernateSessionRequestFilter.java:30)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at filters.UTF8Filter.doFilter(UTF8Filter.java:14)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:230)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:261)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:581)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Unknown Source)
A: I'm having the same problem. I noticed that Internet Explorer 8 sends this header:
content-type = application/x-www-form-urlencoded
while Firefox sends this:
content-type = application/x-www-form-urlencoded; charset=UTF-8
My solution was simply to force jQuery to use the Firefox content-type:
$.ajaxSetup({ scriptCharset: "utf-8", contentType: "application/x-www-form-urlencoded; charset=UTF-8" });
A: Have you tried adding the following before the call?
$.ajaxSetup({
    scriptCharset: "utf-8",
    contentType: "application/json; charset=utf-8"
});
The options are explained here.
contentType : When sending data to the server, use this content-type. Default is "application/x-www-form-urlencoded", which is fine for most cases.
scriptCharset : Only for requests with 'jsonp' or 'script' dataType and GET type. Forces the request to be interpreted as a certain charset. Only needed for charset differences between the remote and local content.
A: I had the same problem and fixed it by downgrading to mysql-connector-odbc-3.51.16.
A: I also had the same problem, and I fixed it this way:
In PHP, before storing the data in the database, I used the htmlentities() function. When displaying the data, I used the html_entity_decode() function. This worked, and I strongly hope it will work for you too.
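For illustration, a minimal sketch of that pattern (the variable names are hypothetical, and you would still use parameterized queries for the actual insert):
<?php
// Before storing: turn characters like ñ into entities such as &ntilde;
$safeBody = htmlentities($rawBody, ENT_QUOTES, 'UTF-8');
// ... insert $safeBody into the database here ...

// When displaying: turn the entities back into the original characters
echo html_entity_decode($storedBody, ENT_QUOTES, 'UTF-8');
?>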
A: I see this problem a lot.
The meta tag doesn't always affect your PHP data operations, so just add this at the beginning:
<?php header('Content-type: text/html; charset=utf-8'); ?>
A: As the exception is a JDBC error, your best approach is to capture the input before it is sent to the database.
java.sql.BatchUpdateException: Incorrect string value: '\xF1a' for column 'body' at row 1
A single character is causing the exception.
It may be the case that you will need to override some characters manually. You will find, when working with non-Latin-alphabet languages (as I do), that this is a common pain.
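If you do end up overriding characters by hand, a minimal sketch might look like this (the mappings shown are hypothetical; substitute whichever characters your column's charset rejects):
// Replace characters the target column's charset can't store
static String stripUnsupported(String input) {
    if (input == null) return null;
    return input.replace('\u00F1', 'n')   // ñ -> n (the \xF1 from the exception)
                .replace('\u00D1', 'N');  // Ñ -> N
}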
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Stopping MSI from launching an EXE in the SYSTEM context I've got a problem here with an MSI deployment that I'm working on (using InstallShield). We have a program running in the background that needs to run per-user, and it needs to start automatically without user intervention.
The problem is with Group Policy Object/Active Directory (GPO/AD) deployment the application is started in the SYSTEM context before anyone is logged in rather than as the user who is about to log in. The application can only run once per user, and it seems that the SYSTEM process prevents the USER process from starting. This means the PCs need to be rebooted twice before the software can be deployed to the users. How do we to stop this?
Basically the current workflow is:
*
*Installation/upgrade runs... kill background application
*Install new files
*Startup background application
This works for published applications and interactive MSI installations - it's only 'assigned' applications that seem to have the problem, as step 3 happens in the SYSTEM context rather than the user context :(
Ideally, I'd have the development team patch the EXE file to prevent launching in the SYSTEM context, but that's a release cycle away, and I'm looking for an installer-based solution for the interim.
(I don't know InstallScript... so I'm guessing VBScript is probably the way to go if there's no native InstallShield stuff I can use.)
A: You can use the LogonUser property of Windows Installer as a condition to the action launching the EXE.
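For illustration, here is roughly what that looks like in WiX syntax (the question uses InstallShield, so treat this purely as a sketch; the action and file ids are hypothetical, and it's worth verifying exactly what LogonUser contains during a machine-assigned install):
<CustomAction Id="LaunchBackgroundApp" FileKey="BackgroundProcessExe"
              ExeCommand="" Return="asyncNoWait" />
<InstallExecuteSequence>
  <!-- Skip the launch when no interactive user performed the install -->
  <Custom Action="LaunchBackgroundApp" After="InstallFinalize">LogonUser AND LogonUser &lt;&gt; "SYSTEM"</Custom>
</InstallExecuteSequence>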
A: AHA! I knew there had to be a cleaner solution... the code I was working on was starting to look something like this:
On Error Resume Next
strComputer = "."

' Connect to WMI on the local machine
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

' Find every running instance of the background application
Set colProcessList = objWMIService.ExecQuery _
    ("Select * from Win32_Process Where Name = 'BackgroundProcess.exe'")

' Terminate only the instances owned by SYSTEM; user-context copies survive
For Each objProcess In colProcessList
    intReturn = objProcess.GetOwner(strNameOfUser, strUserDomain)
    If strNameOfUser = "SYSTEM" Then
        objProcess.Terminate()
    End If
Next
A: I wouldn't rely on a Windows installer property to accomplish this. If I understand correctly you want to run an EXE file once per user - probably to set up user defaults? The only time you can guarantee that you are in the right context is when the user actually logs in. With the amount of impersonation going on these days in the average deployment scenario I just don't trust anything but a real user login as the correct stage to run EXE files.
There are too many problem sources: custom permission and privilege lockdowns, terminal server lockdown, virtualization redirects, impersonation run by the deployment system, operating system overrides for registry writes, etc.
Microsoft has a feature called Active Setup which will allow you to run "something runnable" once per user, on logon. This can be anything from a script to an executable. See my answer here for more details: Updating every profile's registry on Windows Server 2003
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Good Git repository viewer for Mac Can anyone recommend a good repository viewer for Git, similar to gitk, that works on Mac OS X Leopard? (I'm not saying gitk doesn't work)
Of course I would like a native Mac application, but as I haven't found any, what are the best options to gitk?
I know about gitview, but I'd like to evaluate as many alternatives as possible.
http://sourceforge.net/projects/gitview
A: There are a couple under development.
*
*GitNub
*Gitty (404, dead project)
I don't know if there are any that have hit 1.0.
A: Try SourceTree. It's currently a free Mac client for Git that also supports Mercurial and SVN.
http://www.sourcetreeapp.com/
It has the cleanest UI interface I've seen out of the handful that I have tried.
A: Gitty is under development right now; basically, I am working on it, and it is in turn built on BazaarX, which is under heavy restructuring. Gitty will essentially be BazaarX with the Bazaar backend ripped out and a Git backend put in instead, plus any UI tweaks needed for Git's differences from Bazaar (i.e. hashes instead of version numbers, etc.). The good news is that as developers on BazaarX we have got our act together and have our respective assignments for which area of BazaarX to work on, and BazaarX is being designed to be VCS-agnostic, which will make my job of integrating Git into it much easier. We also have a bunch more people working on BazaarX now, which makes my job of working on Gitty easier.
Currently Gitty is the only native/Cocoa app for this that I am aware of. I can't say when I'll be done and hit 1.0, but I am happy with the direction I am going in with Gitty and BazaarX.
A: How about SmartGit?
A: There's also GitX; it's progressing well and under active development (multiple commits per day).
A: Git 1.6 comes with Git GUI, which works pretty well on my Mac.
A: As horrible as it looks, the git gui and gitk commands are as good as any.
GitX looks extremely promising, and very Mac-like (things like QuickLook'ing any file in any revision). GitNub is probably the furthest along in development, but it has no concept of branching currently, and is pretty basic (it does far less than gitk).
A: I would recommend using this experimental version of GitX, by Nathan Kinsinger, at http://github.com/brotherbard/gitx/commits/experimental until it gets merged into the main program. I have been using it for some time now and haven't had any problems. You will have to manually clone the repository and build it in Xcode. It has a much better interface than GitX, as well as more features.
A: SourceTree is a great solution for me
A: I'm using git gui and gitk on Leopard with the latest Git from MacPorts. Works great for me.
A: Gity: http://macendeavor.com/gity
A: GitX is great; along with the command line tool, it's what I use.
If you want to make gitk look slightly less ugly (decent fonts and native Mac widgets) it can be done: http://effectif.com/git/making-gitk-look-good-on-mac
A: Check http://git-scm.com/downloads/guis
A: You should check out Sprout (formerly GitMac) at http://sproutmacapp.com. It focuses on making Git easy, and on browsing and committing changes in your projects.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Software Deployment in a Virtual Environment I'm looking for a way to give out preview or demo versions of our software to our customers as easy as possible.
The software we are currently developing is a pretty big project. It consists of a client environment, an application server, various databases, web services host etc.
The project is developed incrementally and we want to ship the bits in intervals of one to two months. The first deliveries will not be used in production; their purpose is to serve as demos that encourage the customers to give feedback.
We don't want to put burden on the customers to install and configure the system. All in all we are looking for a way to ease the deployment, installation and configuration pain.
What I thought of was to use a virtualization technique to preinstall and preconfigure a virtual machine with all the components that are necessary. Our customers would just have to mount the virtual image and run the application.
I would like to hear from folks who use this technique. I suppose there are some difficulties as well. Especially, what about licensing issues with the installed OS?
Perhaps it is possible to have the virtual machine expire after a certain period of time.
Any experiences out there?
A: Since you're looking at an entire application stack, you'll need to virtualize the entire server to provide your customers with a realistic demo experience. Thinstall is great for single apps, but not an entire stack....
Microsoft have licensing schemes for this type of situation; since it's only being used for demonstration purposes and not production use, a TechNet subscription might just cover you. Give your local Microsoft licensing centre a call to discuss; unlike the offshore support teams, they're really helpful and friendly.
For running the 'stack' with the least overhead for your clients, I suggest using VMware. The customers can download the free VMware player, load up the machines (or multiple machines) and get a feel for the system... Microsoft Virtual PC or Virtual Server is going to be a bit more intrusive and not quite the "plug n play" solution that you're looking for.
If you're only looking to ship the application, consider either thinstall or providing Citrix / Terminal services access - customers can remotely login to your own (test) machines and run what they need.
Personally if it's doable, a standalone system would be best - tell your customers install vmware player, then run this app... which launches the various parts of your application stack (maybe off of a DVD) and you've got a fully self contained demo for the marketing guys to pimp out :)
A: You should take a look at Thinstall (it has been bought by VMware and is called ThinApp now); it's an application virtualizer.
A: It seems that you're trying to accomplish several competing goals:
*
*"Give" the customer something.
*Simplify and ease the customer experience.
*Ensure the various components coexist and interact happily.
*Accommodate licensing restrictions, both yours and the OS vendor's.
*Allow incremental and piecewise upgrades.
Can you achieve all of these by hosting the back end (database, web server, etc.) and providing your customers with a CD (or download) that contains the client? This will give them the "download/upgrade experience" that goes along with client software, without dealing with the complexity of administering the back end.
For a near plug-and-play experience, you might consider placing your demo on a live Linux or Windows CD. Note: you need a licensed copy of Windows for the latter.
Perhaps your "serious" customers might be able to request their own demo copies of the back end as well; they'd be more amenable to the additional work on their part.
As far as OS licenses go, if your vendor(s) of choice aren't helpful, you might consider free or open-source alternatives such as FreeDOS or Linux.
A: Depending on whether you can fit all the needed services into a single OS instance or not...
VMware ACE, or whatever they're calling it nowadays, will let you deliver single virtual machines under strict control, with forced updates, expiration and whatnot. But it sounds easier to just set up a demo environment and allow remote access to it.
The issue here I guess is getting several virtual machines to communicate under unknown circumstances - if one is not enough?
An idea then is to ship a physical server preconfigured with virtualisation and however many virtual servers are needed to demonstrate the system.
Using trial versions of the operating system might be good enough for the licensing dilemma - at least Windows Server is testable for 60 days, extendable to 240 when registering.
A:
Thinstall is great for single apps, but not an entire stack....
I haven't tried it yet, but with the new version of Thinstall you are able to let different thinstalled applications communicate.
But I guess you're right: a VMware image would be easier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Visual Studio 2005 Project options I have a solution in Visual Studio 2005 (Professional Edition) which in turn has 8 projects. I am facing a problem: even after I set the Command Arguments in the project settings of the relevant project, it doesn't accept those command-line arguments and shows argc = 1, in spite of my giving more than one argument.
Any pointers?
-Ajit.
A: Hmm... are you sure the specified project is set as the startup project (right-click > Set as StartUp Project)?
Oh, and obviously you need to be in the correct configuration mode ^_^
(Notice it can be changed to Debug | Release | All Configurations)
A: Are you sure you are setting the command arguments on the same configuration (Debug|Release) you are debugging? As far as I remember, command arguments are per-configuration.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Windows XP Default Routes I use my mobile phone for connection to the internet on my laptop; I also have a wired connection to a LAN which doesn't have internet connectivity - it just has our TFS server on it.
The problem is that I can't use the internet (from the phone) with the LAN cable plugged in. Is there a way to set the default route to my phone?
I'm running Windows XP.
A: There's many OS specific ways to force routing over specific interfaces. What OS are you using? XP? Vista? *nix?
The simplest way is to configure your network card with a static IP and NO GATEWAY; the only gateway (i.e. internet access) your laptop will find is then via the mobile.
The disadvantage of this method is that you'll need to access your TFS server by IP address (or netbios name) as all DNS requests will be going out over the internet and not through your private LAN.
EDIT: If you can't use the phone when the LAN is plugged in, that's because you've got it set up for DHCP and the DHCP server is advertising (incorrectly for you) that it will accept and route internet traffic. As previously mentioned, set up with a static IP and no gateway... if you insist on using DHCP you'll need to learn the ROUTE command in DOS, find the IP address of your phone (assuming it's acting as a router), set that as the default route, and remove whatever default route was assigned from the DHCP server.
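For example (192.168.0.1 below is a purely hypothetical address for the phone; check route print and ipconfig to find the real one):
route print
route delete 0.0.0.0
route add 0.0.0.0 mask 0.0.0.0 192.168.0.1 metric 1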
EDIT2: @dan - you can't use the internet from your phone directly (e.g. mobile browser), or you can't make your laptop use your phone for internet when the cable is plugged in (i.e. routing issues)? If it's the former, then your laptop is probably configuring a PAN with your phone and trying to route internet traffic back over the LAN.
EDIT @Jorge - IP routing is the responsibility of the network layer, not the application. Go review the OSI model ;)
A: You can actually configure what you want to be the default gateway globally using the "route" command, as described here: Default Internet connection on Dual LAN Workstation
I admit though, on Windows it's finicky at best, as sometimes that setup will just disappear :(
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do you handle versioning on a Web Application? What are the strategies for versioning a web application/website?
I notice that here in the Beta there is an svn revision number in the footer and that's ideal for an application that uses svn over one repository. But what if you use externals or a different source control application that versions separate files?
It seems easy for a desktop app, but I can't seem to find a suitable way of versioning an ASP.NET web application.
NB I'm not sure that I have been totally clear with my question.
*
*What I want to know is how to build and auto-increment a version number for an ASP.NET application.
*I'm not interested in how to link it with svn.
A: I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work.
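If a full custom MSBuild task is more than you need, the compiler's built-in wildcard versioning may be enough. A minimal sketch, in AssemblyInfo.cs:
// The asterisk makes the compiler generate the build and revision
// numbers automatically on each compile
[assembly: AssemblyVersion("1.0.*")]

// At runtime, e.g. in the page footer:
string version = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.ToString();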
A: For my big apps I just use an incrementing version number (1.0, 1.1, ...) that I store in a comment in the main file (usually index.php).
For plain websites I usually just have a revision number (1, 2, 3, ...).
A: I have a tendency to stick with basic integers at first (1,2,3), moving onto rational numbers (2.1, 3.13) when things get bigger...
Tried using fruit at one point, that works well for a small office. Oh, the 'banana' release? looks over in the corner "yeah... that's getting pretty old now..."
Unfortunately, confusion started to set in when the development team grew, is it an Orange, or Mandarin, or Tangelo? It looks ok. What do you mean "rotten on the inside?"
... but in all honesty: set up a separate repository as a master; development goes on in various repositories. For every scheduled release everything is checked into the master repository so that you can quickly roll back when something goes wrong.
(I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....)
A: I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the "metarepository" could possibly be used for versioning the complete system.
In another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method.
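As a sketch of that keyword approach in C# (it assumes svn:keywords is set to "Revision" on the file; all names are illustrative):
public static class RepositoryVersion
{
    // SVN rewrites this literal to e.g. "$Revision: 1234 $" on commit/update
    private const string Raw = "$Revision$";

    // Strip the keyword decoration to get the bare revision number
    public static string Revision
    {
        get { return Raw.Trim('$', ' ').Replace("Revision:", "").Trim(); }
    }
}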
An inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2).
A: During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates ("the January 2009 update").
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How to change Instantiated Objects Font Colour in Visual Studio I know the colours are changed in Environment > Fonts and Colors but I haven't found out which Display Item the object is.
Can someone please tell how I can colour the following code:
lblMessage.Text = "You have successfully answered my question!"
I have the string coloured pink; I would like lblMessage purple and .Text a light green.
For me, the darker the colour, the less it will "change". I won't often change object names; I'll more often change properties, and I'm always changing strings - although I have another question about this that I'll post later.
Thanks
A: Go to Environment > Fonts and Colors > Display Items and change
*
*Identifier
*String
I was hoping that there is a way I can be more specific with the colours - if there isn't, then that's an acceptable answer - just disappointing for me.
Yeah, I don't think you can do that. :)
A: This is possible if you use a third-party add-in like Visual Assist. It lets you assign different colors to classes, variables, macros and functions (among other features).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is version control (ie. Subversion) applicable in document tracking? I am in charge of 100+ documents (Word documents, not source code) that need revision by different people in my department. Currently all the documents are in a shared folder where they retrieve, revise and save them back into the folder.
What I am doing now is looking up the "date modified" in the shared folder, opening up recently modified documents and using the "Track Changes" function in MS Word to apply the changes. I find this a bit tedious.
So would it be better and easier if I committed these to a version control database?
Basically I want to keep different versions of a file.
What I have learned from the answers:
*
*Use Time Machine to save different versions (or Shadow Copy in Vista)
*There is a difference between text and binary documents when you use a version control app. (I didn't know that)
*Diff won't work on binary files
*A notification system (i.e. email) for revisions is great
*Google Docs' revision feature.
Update:
I played around with Google Docs' revision feature and feel that it is almost right for me. I'm just a bit annoyed by the overly frequent versioning (autosaving).
But what feels right for me doesn't mean it feels right for my dept. Will they be okay with saving all these documents with Google?
A: I've worked with Word documents in SVN. With TortoiseSVN, you can easily diff Word documents (between working copy and repository, or between two repository revisions). It's really slick and definitely recommended.
The other thing to do if you're using Word documents in SVN is to add the svn:needs-lock property to the Word documents. This will prevent two people from trying to edit the same document at the same time, since unfortunately there's no good way to merge Word documents.
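Setting the property is a one-liner per file, and auto-props can apply it to every new Word document automatically (the filename is hypothetical):
svn propset svn:needs-lock yes Spec.doc
svn commit -m "Require a lock before editing Word documents"

# In ~/.subversion/config (with enable-auto-props = yes under [miscellany]):
[auto-props]
*.doc = svn:needs-lock=yes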
With the above two things, handling revision controlled Word documents is at least tolerable. It certainly beats the alternative of using a shared folder and track-changes.
A: Sharepoint also does a good (ok decent) job of versioning MS-specific documents.
A: How about trying Git? It seems Git can support Word .doc and OpenDocument .odf files if you configure them in a .gitattributes file.
Here is a reference; scroll down to "diffing binary files".
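If memory serves, the setup looks roughly like this; it assumes a doc-to-text converter such as catdoc is installed, and textconv needs a reasonably recent Git:
# In .gitattributes:
*.doc diff=word

# Then tell Git how to turn a .doc into diffable text:
git config diff.word.textconv catdoc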
A: For what it's worth, there is also Google Docs. I guess it's not a perfect fit, but its versioning is very convenient.
A: What on Earth are you all Word-is-binary-so-no-diff people talking about? TortoiseSVN, for example, integrates right out of the box with Word and enables you to use Word's built-in diff and merge functionality. It works just fine.
I have worked on projects that store documents in version control. It has worked out pretty well, although if people are unfamiliar with version control, they are probably going to have conceptual difficulties with things like "working copy" and "merge" and "conflict". Don't overestimate the users' capabilities when you plan your document management system.
I believe there exist big and powerful commercial solutions for all of this, as well. I'm sure if you have enough kilodollars, you can get something that fits your needs perfectly. Document management systems are a big business for big enterprise.
A: ClearCase integrates with Word for revision tracking. I believe Telelogic DOORS does as well.
A: I use Mercurial with the TortoiseHg overlay. I can right-click a changeset, choose "Visual Diff", then choose the "docdiff" tool (comes bundled), which launches the document in Word with the Track Changes.
A: I guess one thing that nobody seems to have asked is whether you have a legal requirement to store the history of changes to the docs.
Whether you do or don't is going to have an impact on what solutions you can consider.
A notification mechanism for out-of-date copies is also a bundle of fun. If engineer A has a copy of a document and engineer B then edits it and commits the changes, you want engineer A to be notified that his copy is out of date.
Document control can become a real can of worms quite easily.
Maybe keep the docs under CVS or SVN and set it up so that emails are generated to whoever has checked out a copy when updates for the same doc are checked in to the repository?
Edit: I forgot to add: don't forget to use the binary switch (e.g. -kb for CVS) when adding the new doc. Otherwise, any sequences of data that happen to match the ASCII for keyword strings will have the relevant config management data appended, thereby corrupting your document data.
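For example (hypothetical filename):
cvs add -kb Spec.doc     # new file: store as binary, no keyword expansion
cvs admin -kb Spec.doc   # fix a file that was already added as text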
A: Thinking out of the box, would migrating to a Wiki be out of the question?
Since you consider it feasible to force your users into Subversion (or something similar), a larger change seem acceptable.
Another migration target could be to use some kind of structured XML document format (DocBook comes to mind). This would enable you to indeed use diffs and source control, while getting all sorts of document formats for free.
A: You can, but you will always be comparing the document versions with Word itself.
I haven't heard of a version control database that can track changes in Word documents.
However there are some tools which can compare Word documents, so if you set up your version control client to use these tools for comparison, you can have some fun.
A: Not necessarily. It depends on how often the new files are committed to the repo. If the files are edited several times before a commit, then you're precisely where you are now. The biggest benefit is if the file becomes corrupted.
You can version any file; this is how Time Machine in Mac OS X Leopard works, for example, and there is an interesting article by someone who committed his entire computing environment into CVS and then just maintained working copies on his home and work machines.
But "better" and "easier" are specific to your situation, and I'm not sure I completely understand your problem as things stand.
A: Subversion, CVS and all other source control systems are not good for Word documents and other office files (such as Excel spread sheets), since the files themselves are stored in a binary format. That means that you can never go back and annotate (or blame, or whatever you want to call it), or do diffs between documents.
There are revision control systems for Word documents out there, unfortunately I do not know any good ones. We use such control systems for Excel at my work, and unfortunately they all cost money.
The good thing is that they make life a lot easier, especially if you ever have to do an audit or due diligence.
A: If you use WinMerge it has added support for merging Word and Excel binary files.
A: Have a look at SharePoint. If cost is an issue, SharePoint Portal Services can also work for you. Read this for more info
A: Just wanted to clarify an answer someone gave, but I don't have enough points yet.
diff will work on binary files, but it is only going to say something not very useful, like "Binary files toto1 and toto2 differ".
A: You could use something like the Revisionator, which is like Google Docs but with built-in revision control, including diffs, forks, and 3-way merges. http://revisionator.com
UPDATE: It also fixes the problem of too frequent autosaving that you mention with Google Docs. It'll still autosave to prevent data loss, but it will only create a new version in the revision history and share with other users when you explicitly "release" your changes.
A: You could do that, but if the files are binary you should always put a lock on them before editing. That way you won't get a conflict (which would be unresolvable).
A: Many of the new version control projects are better suited to entire directories, and not so much for single files.
Convincing someone that they need to get an entire project, when they only want to update an individual file, can be a "fun" way to spend an afternoon.
A: Another option you have is a piece of software and cloud-computing magic called Dropbox. Or, you could ditch the Word documents and make a locally shared MediaWiki instead.
DropBox: getdropbox.com
MediaWiki: mediawiki.org
A: YES, it's applicable! I totally agree that the combo SVN+TortoiseSVN suits tracking MS Office documents well. You can lock a document for editing, write-protect all unlocked files to avoid conflicts (i.e. parallel modifications), diff two versions of the same file, see the history of all the modifications and of course roll back to an older revision.
I tried to describe all of those tips in a dedicated blog post. (disclaimer: I'm the blog owner)
All of this could even be accessible from the web with an SVN web client! (might need some software development)
But if you're not accustomed to version control systems in another context, this may not be the obvious choice. The work needed for good integration with docs gives dedicated tools an advantage: "electronic document management" systems are made just for that. A VCS like SVN may remain a good alternative for cost reasons :-)
Did you test the online service Simul? It looks promising; I personally like the GitHub-like orientation. Note that I'm not affiliated with Simul!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
} |