Q: Is there any way to prevent find from digging down recursively into subdirectories? When I do:
$ find /
It searches the entire system.
How do I prevent that?
(This question comes from an "answer" to another question.)
A: G'day,
Just wanted to expand on the suggestion from Jon to use -prune. It isn't the easiest of find options to use; for example, to search just the current directory, the find command looks like:
find . \( -type d ! -name . -prune \) -o \( <the bit you want to look for> \)
this will stop find from descending into sub-directories within this directory.
Basically, it says, "prune anything that is a directory whose name isn't '.', i.e. the current dir."
The find command evaluates left to right for each item found in the current directory, so after completion of the first element, i.e. the prune segment, it will continue on with the matched item in your second, -o (OR'd), expression.
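For example, filling in the placeholder with a hypothetical search for C source files, the whole command becomes:
find . \( -type d ! -name . -prune \) -o \( -name '*.c' -print \)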
HTH.
cheers,
Rob
A: You might be better off using wildcards. For instance, if you want to find all ksh scripts in the current directory:
$ ls *.ksh
A: Consider:
-maxdepth n
True if the depth of the current file into the tree is less than
or equal to n.
-mindepth n
True if the depth of the current file into the tree is greater
than or equal to n.
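With GNU find, that means a non-recursive search can be written as, for example (the pattern is illustrative):
find . -maxdepth 1 -name '*.ksh'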
A: Use the -prune option.
A: You may even do
echo /specific/dir/*.jpg
as it's your shell that expands the wildcard. Typing
ls *.jpg
is equivalent to typing
ls foo.jpg bar.jpg
given foo.jpg and bar.jpg are all the files that end with ".jpg" in the current directory.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: IE6 security login (debugging via VirtualPC) I am debugging my ASP.NET application on my Windows XP box with a virtual directory set up in IIS (5.1).
I am also running VirtualPC with XP and IE6 for testing purposes. When I connect to my real machine from the virtual machine, I enter the URL: http://machinename/projectname.
I get a security popup to connect to my machine (which I expect), but the User name field is disabled. I cannot change it from machinename\Guest to machinename\username in order to connect.
How do I get this field enabled so I can enter the correct credentials?
A: It sounds like you need to allow anonymous users. Can you connect from other machines on your network?
Try changing your virtual directory's authentication settings, so that anonymous users are allowed (MyVirtualDir > Properties > Authentication).
A: Uncheck "allow anonymous" on the virtual dir and check Integrated Authentication. Also make sure you are not logging into the virtual XP box as Guest or as a user without a password.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: The Google Calculator Glitch, could float vs. double be a possible reason? I did this just for kicks (so, not exactly a question; I can see the downmodding happening already) but, in light of Google's newfound inability to do math correctly (check it! according to Google, 500,000,000,000,002 - 500,000,000,000,001 = 0), I figured I'd try the following in C to test a little theory.
#include <stdio.h>
#include <stdlib.h> /* for atof() */

int main()
{
    char* a = "399999999999999";
    char* b = "399999999999998";
    float da = atof(a);
    float db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);
    a = "500000000000002";
    b = "500000000000001";
    da = atof(a);
    db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}
When you run this program, you get the following
399999999999999 - 399999999999998 = 0.000000
500000000000002 - 500000000000001 = 0.000000
It would seem like Google is using simple 32-bit floating-point precision (the error here); if you switch float for double in the above code, you fix the issue! Could this be it?
/mp
A: For more of this kind of silliness see this nice article pertaining to Windows calculator.
When you change the insides, nobody notices
The innards of Calc - the arithmetic engine - was completely thrown away and rewritten from scratch. The standard IEEE floating point library was replaced with an arbitrary-precision arithmetic library. This was done after people kept writing ha-ha articles about how Calc couldn't do decimal arithmetic correctly, that for example computing 10.21 - 10.2 resulted in 0.0100000000000016.
A:
It would seem like Google is using simple 32-bit floating-point precision (the error here); if you switch float for double in the above code, you fix the issue! Could this be it?
No, you just defer the issue. doubles still exhibit the same issue, just with larger numbers.
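For instance (a quick sketch to illustrate the point): a double has a 53-bit significand, so the same subtraction trick starts failing just past 2^53:
#include <stdio.h>

int main()
{
    double a = 9007199254740993.0; /* 2^53 + 1, not representable; rounds to 2^53 */
    double b = 9007199254740992.0; /* 2^53 */
    printf("%f\n", a - b);         /* prints 0.000000 */
    return 0;
}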
A: In C#, try (double.MaxValue == (double.MaxValue - 100)); you'll get true...
But that's what it is supposed to be:
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems
Thinking about it: you have 64 bits representing a number greater than 2^64 (double.MaxValue), so inaccuracy is expected.
A: @ebel
Thinking about it: you have 64 bits representing a number greater than 2^64 (double.MaxValue), so inaccuracy is expected.
2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308.
Inaccuracy with floating point numbers doesn't come from representing values larger than Double.MaxValue (which is impossible, excluding Double.PositiveInfinity). It comes from the fact that the desired range of values is simply too large to fit into the datatype. So we give up precision in exchange for a larger effective range. In essence, we are dropping significant digits in return for a larger exponent range.
@DrPizza
Not even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. As such, there are 2^52 NaNs for doubles, 2^23 NaNs for singles.
True. I didn't account for duplicate encodings. There are actually 2^52 - 1 NaNs for doubles and 2^23 - 1 NaNs for singles, though. :p
A:
2^64 is not the maximum value of a double. 2^64 is the number of unique values that a double (or any other 64-bit type) can hold. Double.MaxValue is equal to 1.79769313486232e308.
Not even; the IEEE encodings use multiple encodings for the same values. Specifically, NaN is represented by an exponent of all-bits-1, and then any non-zero value for the mantissa. As such, there are 2^52 NaNs for doubles, 2^23 NaNs for singles.
A:
True. I didn't account for duplicate encodings. There are actually 2^52 - 1 NaNs for doubles and 2^23 - 1 NaNs for singles, though. :p
Doh, forgot to subtract the infinities.
A: The rough estimate version of this issue that I learned is that 32-bit floats give you 5 digits of precision and 64-bit floats give you 15 digits of precision. This will of course vary depending on how the floats are encoded, but it's a pretty good starting point.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Perfmon File Analysis Tools I have a bunch of perfmon files that have captured information over a period of time. What's the best tool to crunch this information? Ideally I'd like to be able to see average stats per hour for the object counters that have been monitored.
A: From my experience, even just Excel makes a pretty good tool for quickly whipping up graphs of perfmon if you relog the data to CSV or TSV. You can just plot a rolling average & see the progression. Excel isn't fancy, but if you don't have more than 30-40 megs of data it can do a pretty quick job. I've found that Excel 2007 tends to get unstable when using tables & over 50 megs of data: at one point an 'undo' caused it to consume 100% cpu & 1.3 GB of RAM.
Addendum - relog isn't the best known tool but it is very useful. I don't know of any GUI front ends, so you just have to run it from the command line. The two most common cases I've used it for are
*
*Removing unnecessary counters from logs that a different sysadmin gave me, e.g. the entire Process & Memory objects.
*Converting the binary perfmon logs to .csv or .tsv files.
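From memory, those two cases look roughly like this on the command line (the file names are hypothetical):
rem Convert a binary perfmon log to CSV
relog server01.blg -f CSV -o server01.csv
rem Keep only the counters listed in counters.txt
relog server01.blg -cf counters.txt -o trimmed.blg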
A: Perhaps look into using LogParser.
It depends on how the info was logged (Perfmon doesn't lack flexibility)
If they're CSV you can even use the ODBC Text drivers and run queries against them!
(performance would be 'intriguing')
And here's the obligatory link to a CodingHorror article on the topic ;-)
A: This is a free tool provided on CodePlex. It provides charting capabilities and built-in thresholds for different server roles, which can also be modified. It generates HTML reports.
http://www.codeplex.com/PAL/Release/ProjectReleases.aspx?ReleaseId=21261
A: Take a look at SmartMon (www.perfmonanalysis.com). It analyzes Perfmon data in CSV and SQL Server databases.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Charting library for Java and .Net Can anyone recommend a library for chart generation (bar charts, pie charts etc.) which runs on both Java and .Net?
A: ChartDirector is fantastic and supports more than just Java and .NET.
A: Have you looked into using JFreeChart? I have used it on a few Java projects and it's very configurable. It's free, but I think you can purchase the developer guide for $50. It's good for quick simple charts too. However, performance for real-time data is not quite up to par (check out the FAQ).
They also have a port to .NET; however, I have never used it.
Hope that helps.
A: Dundas Charts was about the easiest thing ever to get up and producing amazing looking charts.
A: Flash Charts.
http://www.fusioncharts.com/free/Gallery.asp
A: You could also try Open Flash Charts
A: ChartFX (http://www.softwarefx.com) has been a leader in charting for years. I personally have used several different versions for over 8 years and it is rock solid.
I have re-evaluated charting options periodically, and ChartFX has won in my environment based almost purely on feature set. It is not free or cheap, but it is well worth the price they charge.
-Geoffrey
A: Here is a belated answer:
Use the Google Chart API. It will allow you to create charts in a programming language and platform agnostic way -- assuming your app will have an Internet connection at all times. Use it in combination with .Net and Java wrapper APIs that you can find here.
I wrote one: charts4j.
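To give a feel for the API, a chart is just an image URL. This example is reproduced from memory, so treat the exact parameters as approximate; it draws a small "Hello | World" pie chart:
http://chart.apis.google.com/chart?chs=250x100&chd=t:60,40&cht=p3&chl=Hello|World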
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: iPhone app that accesses the Core Location framework over the web I was wondering if I could access the iPhone's Core Location framework from a website?
My goal is to build a webapp/website that the iPhone would browse to, then upload its current GPS location. This would be a simple site primary for friends/family so we could locate each other. I can have them manually enter lng/lat but its not the easiest thing to find. If the iPhone could display or upload this automatically it would be great.
I don't own a Mac yet (waiting for the new Mac Book Pro) but would like something a little more automatic right now. Once I have the mac I could download the SDK and build a better version later. For now a webapp version would be great if possible. Thanks.
A: Why not simply use W3C GeoLocation API available in mobile Safari? This will work on ipod touch as well (suburb precision).
It's literally 10 lines of code and the javascript will work without change on Firefox 3.5. Far easier than scrape some third party website.
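A minimal sketch of what that looks like (the upload URL here is hypothetical):
<script type="text/javascript">
navigator.geolocation.getCurrentPosition(function (position) {
    var lat = position.coords.latitude;
    var lng = position.coords.longitude;
    // Send the coordinates back to your site.
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/update-location", true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("lat=" + lat + "&lng=" + lng);
});
</script>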
A: I'm pretty sure you can't do what you want directly.
The best idea I can come up with is to "reuse" an iPhone app that records location and makes it accessible on the web. Take Twitter for example. If I'm not mistaken, Tapulous' app Twinkle will grab your location and post it to your Twitter.com user profile.
From your webapp, you could then scrape the user page for each person whose location you're interested in. It's a pain in the butt, but like I said, this is the best I could come up with.
Again, if you don't want to mess with Twitter, there may be other apps out there that do this as well, but I don't personally know of any. Good luck.
A: http://www.instamapper.com/iphone
iPhone App store
While this may not directly answer your question, there are quite a few iPhone apps that already do this kind of thing with GPS. Instamapper is the first one I pulled up from the app store, but I'm sure you could find something to fit your needs.
A: We built a really thin iphone client app that simply calls a predefined .js file on our site. Works like a charm.
See arisgames.org for the project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: merge rss feeds I want to merge multiple rss feeds into a single feed, removing any duplicates. Specifically, I'm interested in merging the feeds for the tags I'm interested in.
[A quick search turned up some promising links, which I don't have time to visit at the moment]
Broadly speaking, the ideal would be a reader that would list all the available tags on the site and toggle them on and off, allowing me to explore what's available, keep track of questions I've visited, new answers on interesting feeds, etc., etc. . . . though I don't suppose such a thing exists right now.
As I randomly explore the site and see questions I think are interesting, I inevitably find "oh yes, that one looked interesting a couple days ago when I read it the first time, and hasn't been updated since". It would be much nicer if my machine would keep track of such details for me :)
Update: You can now use "and", "or", and "not" to combine multiple tags into a single feed: Tags AND Tags OR Tags
Update: You can now use Filters to watch tags across one or multiple sites: Improved Tag Sets
A: I created the stackoverflow tag feeds pipe. You can list your tags of choice in the text box and it will combine them into a single feed with all the unique posts. It escapes '#' and '+' characters for you.
Alternatively, you can use the pipe's RSS feed by appending your HTML-encoded tags separated by '+'s:
http://pipes.yahoo.com/pipes/pipe.run?_id=uP22vN923RG_c71O1ZzWFw&_render=rss&tags=.net+c%23+powershell
Unfortunately, though, this seems to strip out the content of the posts. The content is visible in the debug view, but the output only contains the post title.
[Thanks to everyone for suggesting Yahoo Pipes! Had heard of it before, but never tried it until now :-]
A: Here is an article on Merge Multiple RSS Feeds Into One with Yahoo! Pipes + FeedBurner.
Another option is Feed Rinse, but they have a paid version as well as the free version.
Additionally:
I have heard good things about AideRss
A: SimplePie is a PHP library that supports merging RSS feeds into one combined feed. I don't believe it does dupe checking out-of-the-box, but I found it trivial to write a little function to eliminate duplicate content via their GUIDs.
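For illustration, a rough sketch of that approach (the feed URLs are placeholders, and the de-duplication helper is my own, not part of SimplePie):
<?php
require_once 'simplepie.inc';

$feed = new SimplePie();
$feed->set_feed_url(array(
    'http://example.com/feeds/tag/php',
    'http://example.com/feeds/tag/css',
));
$feed->init();

$seen = array();
foreach ($feed->get_items() as $item) {
    $guid = $item->get_id();      // use the GUID to spot duplicates
    if (isset($seen[$guid])) continue;
    $seen[$guid] = true;
    echo $item->get_title() . "\n";
}
?>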
A: Have you heard of Yahoo's Pipes?
It's an interactive feed aggregator and manipulator, with a list of 'hot pipes' to subscribe to and the ability to create your own (Yahoo account required).
I played with it during the beta back in the day, and I had a blast. It's really fun and easy to aggregate different feeds, and you can add logic or filters to the "pipes". You can even do more than just RSS, like importing images from Flickr.
A: Yahoo Pipes?
23 minutes later:
Aww, I got answer-sniped by @Bernie Perez. Oh well :)
A: In the latest Podcast, Jeff and Joel talked about the RSS feeds for tags, and Joel noted that there is only the current ability to do AND on tags, not OR.
Jeff suggested that this would be included at some stage in the future.
I think that you should request this on uservoice, or vote for it if it is already there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Development resources for Mono on PS3 I have been considering taking the plunge and installing Linux on my Playstation 3. As C# is my current language of choice, the next logical step would be to install Mono.
I have done a little research and found that http://psubuntu.com/wiki/InstallationInstructions has instructions on installing Ubuntu and links to download an ISO containing a PS3-specific version of Ubuntu. There is also this cool project at http://code.google.com/p/celldotnet/ that has developed some code to utilize the 6 additional SPU cores of the CPU, not just the general-purpose one that you have access to by default.
The problem is that the project documentation seems a little thin. Has anyone set up a PS3 to develop .NET code? If so, what problems did you encounter? Does anyone have any code samples of how to even partially load up one of these monster processors?
Update:
I do realize that basic .NET/Mono programming will come into play here. The part I am fuzzy on is what sort of data structures you pass to a specialty core. If I am reading this right, the 6 SPU cores have 128 registers at 128 bits each. I haven't seen any discussion of how to go about coding effectively for this.
Update 2:
IBM has announced that further development on the Cell processor has been cancelled. While this pretty much kills any desire I might have to develop on the platform, hopefully someone else might add some useful info.
A: Just found this posting from Miguel de Icaza's blog. Promising that as recently as Feb 2008 he was looking into this. As he is a member of the SO community now, I hope he can shed some further light on the topic.
A: The PS3 features a PPC general purpose CPU.
You can try to cross compile mono to ppc and go from there.
Mono from svn has received a lot of attention regarding the ppc port, so I would advise using it instead of the 2.0 release.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Can fogbugz track case dependencies? Can fogbugz track case dependencies?
A: Yes and no. Cases can be linked to each other, but if you're looking for a tree of cases (prerequisites and such), you need FogBugz 7 or later.
If you're using FogBugz 7.3 or greater, you can now use the Case Dependency Plugin, which was released in April 2011.
A: You didn't define what you mean by dependencies exactly, but if you mean that the resolution of one case requires the resolution of others - formally the answer is no. However, you can refer to other cases from a base case and FogBugz will track the cross references. For example, if you say "see case 2031" in the text of one case, the 2031 portion will turn into a hyperlink and both cases will now report that they refer to each other (both forwards and backwards). It's a pretty cool feature actually.
A: FogBugz 7 now supports sub-cases. This may or may not solve your problem, depending on how you want to handle it.
A: FogBugz has long supported case "relation", which creates an ad hoc link between cases simply through adding "case 1234" to any note. Downside: these are not removable, and this persists into FogBugz 7. (We tried to figure out how to do it right, but just ran out of time, so we left current behavior.)
FogBugz 7, newly released, has added parent-child hierarchy, to allow you to split up a master case into its constituent parts, or to aggregate similar requests under one umbrella case.
FogBugz 7 also offers milestone dependencies, where one milestone cannot be started before another is complete. This only applies to the scheduling features of the software. We don't actually prevent anyone from working on cases in the dependent milestone.
We feel these features represent the real world of dependencies as they exist among different parts of a project.
We intentionally did not implement any sort of Bugzilla-style blocking, for several reasons. First, it can be horrendously inefficient, allowing people to ignore work that they could easily do if it was in front of them. Second, it can cause a morass of interdependencies. Third, it also allows the use of the software as a social bludgeon, ("I can't start stubbing out functions until Jeff has finished his mock-ups.") which is something we try to avoid. We make social software... in that we prefer to let social problems be solved socially and software problems be solved with software. The intentional omission of blocking or hard dependency between cases is part of this philosophy.
That said, FogBugz 7 is highly extensible, with plug-ins, tags, custom fields, and lots of other goodies. If blocking is what you want, I'm sure someone will be able to cobble something together.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What are the benefits of using partitions with the Enterprise edition of SQL 2005 I'm comparing between two techniques to create partitioned tables in SQL 2005.
*
*Use partitioned views with a standard version of SQL 2005 (described here)
*Use the built in partition in the Enterprise edition of SQL 2005 (described here)
Given that the Enterprise edition is much more expensive, I would like to know what the main benefits of the newer Enterprise built-in implementation are. Is it just a time saver for the implementation itself, or will I gain real performance on large DBs?
I know I can adjust the constraints in the first option to keep a sliding window into the partitions. Can I do it with the built-in version?
A: searchdotnet rulz! check this out:
http://www.eggheadcafe.com/forumarchives/SQLServerdatawarehouse/Dec2005/post25052042.asp
Updated: that link is dead. So here's a better one
http://msdn.microsoft.com/en-us/library/ms345146(SQL.90).aspx#sql2k5parti_topic6
From above:
Some of the performance and manageability benefits (of partitioned tables) are:
*
*Simplify the design and implementation of large tables that need to be partitioned for performance or manageability purposes.
*Load data into a new partition of an existing partitioned table with minimal disruption in data access in the remaining partitions.
*Load data into a new partition of an existing partitioned table with performance equal to loading the same data into a new, empty table.
*Archive and/or remove a portion of a partitioned table while minimally impacting access to the remainder of the table.
*Allow partitions to be maintained by switching partitions in and out of the partitioned table.
*Allow better scaling and parallelism for extremely large operations over multiple related tables.
*Improve performance over all partitions.
*Improve query optimization time because each partition does not need to be optimized separately.
A: When using the partitioned tables you can more easily move data from partition to partition. You can also partition the indexes as well.
You can also move data from one partition to another table as needed with a single ALTER TABLE command.
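For example, the sliding-window maintenance mentioned in the question comes down to single metadata-only statements. A sketch with hypothetical table names (the target table must have a matching schema on the same filegroup):
-- Move the oldest partition of the partitioned table into an archive/staging table
ALTER TABLE SalesFacts SWITCH PARTITION 1 TO SalesFactsArchive;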
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Keeping key value pairs together in HTML with jQuery? Given a select with multiple options in jQuery.
$select = $("<select></select>");
$select.append("<option>Jason</option>") //Key = 1
.append("<option>John</option>") //Key = 32
.append("<option>Paul</option>") //Key = 423
How should the key be stored and retrieved?
The ID may be an OK place, but it would not be guaranteed unique if I had multiple selects sharing values (and other scenarios).
Thanks
and in the spirit of TMTOWTDI.
$option = $("<option></option>");
$select = $("<select></select>");
$select.addOption = function(value,text){
$(this).append($("<option/>").val(value).text(text));
};
$select.append($option.val(1).text("Jason").clone())
.append("<option value=32>John</option>")
.append($("<option/>").val(423).text("Paul"))
.addOption("321","Lenny");
A: The HTML <option> tag has an attribute called "value", where you can store your key.
e.g.:
<option value=1>Jason</option>
I don't know how this will play with jQuery (I don't use it), but I hope this is helpful nonetheless.
A: If you are using HTML5, you can use a custom data attribute. It would look like this:
$select = $("<select></select>");
$select.append("<option data-key=\"1\">Jason</option>") //Key = 1
.append("<option data-key=\"32\">John</option>") //Key = 32
.append("<option data-key=\"423\">Paul</option>") //Key = 423
Then to get the selected key you could do:
var key = $('select option:selected').attr('data-key');
Or if you are using XHTML, then you can create a custom namespace.
Since you say the keys can repeat, using the value attribute is probably not an option since then you wouldn't be able to tell which of the different options with the same value was selected on the form post.
A: Like lucas said, the value attribute is what you need. Using your code, it would look something like this (I added an id attribute to the select to make it fit):
$select = $('<select id="mySelect"></select>');
$select.append('<option value="1">Jason</option>') //Key = 1
.append('<option value="32">John</option>') //Key = 32
.append('<option value="423">Paul</option>') //Key = 423
jQuery lets you get the value using the val() method. Calling it on the select tag gets the current selected option's value.
$( '#mySelect' ).val(); //Gets the value for the current selected option
$( '#mySelect > option' ).each( function( index, option ) {
    $( option ).val(); //The value for each individual option; each() passes a DOM element, so wrap it in $()
} );
Just in case: the .each method loops through every element the query matched.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How to convert std::string to LPCWSTR in C++ (Unicode) I'm looking for a method, or a code snippet for converting std::string to LPCWSTR
A: I prefer using standard converters:
#include <codecvt>
std::string s = "Hi";
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> converter;
std::wstring wide = converter.from_bytes(s);
LPCWSTR result = wide.c_str();
Please find more details in this answer: https://stackoverflow.com/a/18597384/592651
Update 12/21/2020: My answer was commented on by @Andreas H. I thought his comment was valuable, so I updated my answer accordingly:
*
*codecvt_utf8_utf16 is deprecated in C++17.
*Also the code implies that source encoding is UTF-8 which it usually isn't.
*In C++20 there is a separate type std::u8string for UTF-8 because of that.
But it worked for me because I am still using an old version of C++, and it happened that my source encoding was UTF-8.
A: The solution is actually a lot easier than any of the other suggestions:
std::wstring stemp = std::wstring(s.begin(), s.end());
LPCWSTR sw = stemp.c_str();
Best of all, it's platform independent.
A: If you are in an ATL/MFC environment, you can use the ATL conversion macro:
#include <atlbase.h>
#include <atlconv.h>
. . .
string myStr("My string");
CA2W unicodeStr(myStr);
You can then use unicodeStr as an LPCWSTR. The memory for the Unicode string is created on the stack and released when the destructor for unicodeStr executes.
A: *
*Knowing that the C++11 standard has rules for the .c_str() method (and maybe .data() too) that allow us to use const_cast here
(normally, using const_cast may be dangerous),
*
*we can safely take a "const" input parameter,
*and then finally use "const_cast" instead of any unnecessary allocation and deletion.
std::wstring s2ws(const std::string &s, bool isUtf8 = true)
{
int len;
int slength = (int)s.length() + 1;
len = MultiByteToWideChar(isUtf8 ? CP_UTF8 : CP_ACP, 0, s.c_str(), slength, 0, 0);
std::wstring buf;
buf.resize(len);
MultiByteToWideChar(isUtf8 ? CP_UTF8 : CP_ACP, 0, s.c_str(), slength,
const_cast<wchar_t *>(buf.c_str()), len);
return buf;
}
std::wstring wrapper = s2ws(u8"My UTF8 string!");
LPCWSTR result = wrapper.c_str();
Note that we should use CP_UTF8 for C++'s string-literal, but in some cases you may need to instead use CP_ACP (by setting second parameter to false).
A: If you are using QT then you can convert to QString and then myqstring.toStdWString() will do the trick.
A: It's so easy, no need to apply any custom method. Try this:
string s = "So Easy Bro";
CA2T wide_string(s.c_str()); // keep the converter object alive while the pointer is in use; requires ATL's atlconv.h
LPCWSTR result = wide_string; // note: CA2T only yields a wide string in a Unicode build, where TCHAR is wchar_t
I think it will work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "141"
} |
Q: DOM manipulation in PHP I am looking for good methods of manipulating HTML in PHP. For example, the problem I currently have is dealing with malformed HTML.
I am getting input that looks something like this:
<div>This is some <b>text
As you noticed, the HTML is missing closing tags. I could use regex or an XML Parser to solve this problem. However, it is likely that I will have to do other DOM manipulation in the future. I wonder if there are any good PHP libraries that handle DOM manipulation similar to how Javascript deals with DOM manipulation.
A: I've found PHP Simple HTML DOM to be the most useful and straightforward library yet. Better than PECL, I would say.
I've written an article on how to use it to scrape myspace artist tour dates (just an example.) Here's a link to the php simple html dom parser.
A: The DOM library which is now built in can solve this problem easily. The loadHTML method will accept malformed markup, while the load method will not.
$d = new DOMDocument;
$d->loadHTML('<div>This is some <b>text');
echo $d->saveHTML();
The output will be:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html>
<body>
<div>This is some <b>text</b></div>
</body>
</html>
A: PHP has a PECL extension that gives you access to the features of HTML Tidy. Tidy is a pretty powerful library that should be able to take code like that and close tags in an intelligent manner.
I use it to clean up malformed XML and HTML sent to me by a classified ad system prior to import.
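A minimal sketch of what that looks like with the Tidy extension (the option names are from memory; check the Tidy documentation):
<?php
$html = '<div>This is some <b>text';
$tidy = tidy_parse_string($html, array('show-body-only' => true), 'utf8');
$tidy->cleanRepair();
echo $tidy; // <div>This is some <b>text</b></div>
?>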
A: For manipulating the DOM, I think what you're looking for is this. I've used it to parse HTML documents from the web and it worked fine for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Why aren't Enumerations Iterable? In Java 5 and above you have the foreach loop, which works magically on anything that implements Iterable:
for (Object o : list) {
doStuff(o);
}
However, Enumeration still does not implement Iterable, meaning that to iterate over an Enumeration you must do the following:
for(; e.hasMoreElements() ;) {
doStuff(e.nextElement());
}
Does anyone know if there is a reason why Enumeration still does not implement Iterable?
Edit: As a clarification, I'm not talking about the language concept of an enum, I'm talking a Java-specific class in the Java API called 'Enumeration'.
A: As an easy and clean way of using an Enumeration with the enhanced for loop, convert to an ArrayList with java.util.Collections.list.
for (TableColumn col : Collections.list(columnModel.getColumns())) {
(javax.swing.table.TableColumnModel.getColumns returns Enumeration.)
Note, this may be very slightly less efficient.
A: AFAIK Enumeration is kinda "deprecated":
Iterator takes the place of Enumeration in the Java collections framework
I hope they'll change the Servlet API with JSR 315 to use Iterator instead of Enumeration.
A: It doesn't make sense for Enumeration to implement Iterable. Iterable is a factory method for Iterator. Enumeration is analogous to Iterator, and only maintains state for a single enumeration.
So, be careful trying to wrap an Enumeration as an Iterable. If someone passes me an Iterable, I will assume that I can call iterator() on it repeatedly, creating as many Iterator instances as I want, and iterating independently on each. A wrapped Enumeration will not fulfill this contract; don't let your wrapped Enumeration escape from your own code. (As an aside, I noticed that Java 7's DirectoryStream violates expectations in just this way, and shouldn't be allowed to "escape" either.)
Enumeration is like an Iterator, not an Iterable. A Collection is Iterable. An Iterator is not.
You can't do this:
Vector<X> list = …
Iterator<X> i = list.iterator();
for (X x : i) {
x.doStuff();
}
So it wouldn't make sense to do this:
Vector<X> list = …
Enumeration<X> i = list.elements();
for (X x : i) {
x.doStuff();
}
There is no Enumerable equivalent to Iterable. It could be added without breaking anything to work in for loops, but what would be the point? If you are able to implement this new Enumerable interface, why not just implement Iterable instead?
A: Enumeration hasn't been modified to support Iterable because it's an interface, not a concrete class (like Vector, which was modified to support the Collections interface).
If Enumeration was changed to support Iterable it would break a bunch of people's code.
A: If you would just like it to be syntactically a little cleaner, you can use:
while(e.hasMoreElements()) {
doStuff(e.nextElement());
}
A: It is possible to create an Iterable from any object with a method that returns an Enumeration, using a lambda as an adapter. In Java 8, use Guava's static Iterators.forEnumeration method, and in Java 9+ use the Enumeration instance method asIterator.
Consider the Servlet API's HttpSession.getAttributeNames(), which returns an Enumeration<String> rather than an Iterator<String>.
Java 8 using Guava
Iterable<String> iterable = () -> Iterators.forEnumeration(session.getAttributeNames());
Java 9+
Iterable<String> iterable = () -> session.getAttributeNames().asIterator();
Note that these lambdas are truly Iterable; they return a fresh Iterator each time they are invoked. You can use them exactly like any other Iterable in an enhanced for loop, StreamSupport.stream(iterable.spliterator(), false), and iterable.forEach().
The same trick works on classes that provide an Iterator but don't implement Iterable. Iterable<Something> iterable = notIterable::createIterator;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: Where can I learn jQuery? Is it worth it? I've had a lot of good experiences learning about web development on w3schools.com. It's hit or miss, I know, but the PHP and CSS sections specifically have proven very useful for reference.
Anyway, I was wondering if there was a similar site for jQuery. I'm interested in learning, but I need it to be online/searchable, so I can refer back to it easily when I need the information in the future.
Also, as a brief aside, is jQuery worth learning? Or should I look at different JavaScript libraries? I know Jeff uses jQuery on Stack Overflow and it seems to be working well.
Thanks!
Edit: jQuery's website has a pretty big list of tutorials, and a seemingly comprehensive documentation page. I haven't had time to go through it all yet, has anyone else had experience with it?
Edit 2: It seems Google is now hosting the jQuery libraries. That should give jQuery a pretty big advantage in terms of publicity.
Also, if everyone uses a single unified jQuery library hosted at the same place, it should get cached for most Internet users early on and therefore not impact the download footprint of your site should you decide to use it.
2 Months Later...
Edit 3: I started using jQuery on a project at work recently and it is great to work with! Just wanted to let everyone know that I have concluded it is ABSOLUTELY worth it to learn and use jQuery.
Also, I learned almost entirely from the Official jQuery documentation and tutorials. It's very straightforward.
10 Months Later...
jQuery is a part of just about every web app I've made since I initially wrote this post. It makes progressive enhancement a breeze, and helps make the code maintainable.
Also, all the jQuery plug-ins are an invaluable resource!
3 Years Later...
Still using jQuery just about every day. I now author jQuery plug-ins and consult full time. I'm primarily a Djangonaut but I've done several javascript only contracts with jQuery. It's a life saver.
From one jQuery user to another... You should look at templating with jQuery (or underscore -- see below).
Other things I've found valuable in addition to jQuery (with estimated portion of projects I use it on):
*
*jQuery Form Plugin (95%)
*jQuery Form Example Plugin (75%)
*jQuery UI (70%)
*Underscore.js (80%)
*CoffeeScript (30%)
*Backbone.js (10%)
A: I used Prototype for about six months before I decided to learn jQuery. To me, it was like a night and day difference. For example, in Prototype you will loop over a set of elements checking if one exists and then setting something in it; in jQuery you just say $('div.class').find('[name=thing]') or whatever and set it.
It's so much easier to use and feels a lot more powerful. The plugin support is also great. For almost any common js pattern, there's a plugin that does what you want. With prototype, you'll be googling for blogs that have the snippet of code you need.
A: It is very much worth it. jQuery really makes JavaScript fun again. It's as if all of JavaScript best practices were wrapped up into a single library.
I learned it through jQuery in Action (Manning), which I whipped through over a weekend. It's a little bit behind the current state of affairs, especially in regard to plug-ins, but it's a great introduction.
A: Rick Strahl and Matt Berseth's blogs both tipped me into jQuery, and man am I glad they did. jQuery completely changes a) your client programming perspective, b) the grief it causes you, and c) how much fun it can be!
http://www.west-wind.com/weblog/
http://mattberseth.com/
I used the book jQuery in Action
http://www.amazon.com/jQuery-Action-Bear-Bibeault/dp/1933988355/ref=sr_1_1?ie=UTF8&s=books&qid=1219716122&sr=1-1 (I bought it used at Amazon for about $22). It has been a big help into bootstrapping me into jQuery. The documentation at jquery.com are also very helpful.
A place where jQuery falls a little flat is with its UI components. Those don't seem to be quite ready for primetime just yet.
It could be that Prototype or MooTools or ExtJS are as good as jQuery. But for me, jQuery seems to have a little more momentum behind it right now and that counts for something for me.
Check jQuery out. It is very cool!
A: There are numerous JavaScript libraries that are worth at least a cursory review to see if they suit your particular need. First, come up with a short list of criteria to guide your selection and evaluation process.
Then, check out a high level framework comparison/reviews somewhere like Wikipedia, select a few that fit your criteria and interest you. Test them out to see how they work for you. Most, if not all, of these libraries have websites w/ reference documentation and user group type support.
To put some names out there, Prototype, script.aculo.us, Jquery, Dojo, YUI...those all seem to have active users and contributors, so they are probably worth reading up on to see if they meet your needs.
Jquery is good, but with a little extra effort, maybe you'll find that something else works better for you.
Good luck.
A: There are a number of resources to learn jQuery (which is completely worth it IMHO). Start here http://docs.jquery.com/Main_Page to read the jQuery documentation. This is a great site for seeing visually what it has to offer:
http://visualjquery.com/1.1.2.html. Manning publications also has a great book which is highly recommended called jQuery in Action. As far as JavaScript libraries are concerned, this one and Prototype are probably the most popular if you're looking to compare jQuery to something else.
A: I found this series of tutorials (“jQuery for Absolute Beginners” Video Series) by Jeffrey Way VERY HELPFUL.
It targets developers who are new to jQuery. He shows how to create lots of cool stuff with jQuery, like animation, creating and removing elements, and more.
I learned a lot from it. He shows how easy it is to use jQuery.
Now I love it and I can read and understand any jQuery script even if it's complex.
Here is one example I like "Resizing Text"
1- jQuery:
<script language="javascript" type="text/javascript">
$(function() {
    $('a').click(function() {
        var originalSize = $('p').css('font-size'); // Get the font size, e.g. "11px".
        var number = parseFloat(originalSize, 10); // parseFloat strips the trailing unit
                                                   // from "originalSize", leaving just the number.
        var unitOfMeasure = originalSize.slice(-2); // Store the unit of measure, e.g. "px" or "in".
        $('p').css('font-size', number / 1.2 + unitOfMeasure);
        if (this.id == 'larger') {
            $('p').css('font-size', number * 1.2 + unitOfMeasure);
        } // this.id tells us which anchor triggered the click.
    });
});
</script>
2- CSS Styling:
<style type="text/css" >
body{
margin-left:300px;text-align:center;
width:700px;
background-color:#666666;}
.box {
width:500px;
text-align:justify;
padding:5px;
font-family:verdana;
font-size:11px;
color:#0033FF;
background-color:#FFFFCC;}
</style>
2- HTML:
<div class="box">
<a href="#" id="larger">Larger</a> |
<a href="#" id="Smaller">Smaller</a>
<p>
In today’s video tutorial, I’ll show you how to resize text every
time an associated anchor tag is clicked. We’ll be examining
the “slice”, “parseFloat”, and “CSS” Javascript/jQuery methods.
</p>
</div>
I highly recommend these tutorials:
http://blog.themeforest.net/screencasts/jquery-for-absolute-beginners-video-series/
A: I started learning by looking at jQuery extensions to see how other developers work with the jQuery language. It not only helped me to learn jQuery syntax but also taught me how to develop my own extensions.
A: jQuery is worth learning!!! I recommend reading "Learning jQuery" and "jQuery in Action". Both books are great, with explanations and examples. The next step is to actually use it to do something. You will find the official http://docs.jquery.com documentation very useful. I use it as a reference, and google it all the time :)
Also, the "Learning jQuery" blog mentioned by Sean is very useful, and jQuery HowTo has a great collection of jQuery code snippets.
A: I haven't seen JQ-Fundamentals - by Rebecca Murphey mentioned anywhere here.
It is a very good book. It also explains the fundamentals of JavaScript required to understand the basics of JQuery.
A: A great resource for learning jQuery is: Learning jQuery. The author, Karl Swedberg, also co-wrote the book titled... ready? Yup, Learning jQuery. Remy Sharp also has great info geared towards the visual aspects of jQuery on his blog.
--SEAN O
A: Jquery.com is well organized and has many great examples. You don't need to buy a book. I found it easy to pick up on the fly by just referencing the website's documentation. If you're someone who learns best by doing, I'd suggest this approach.
And yes, it's absolutely worth learning. It'll save you a lot of time and you'll actually look forward to doing JavaScript work!
A: I use Prototype, which I like. I'm afraid I don't know jQuery, so I can't compare them, but I think Prototype is worth checking out. Their API docs are generally pretty good, in my experience (which certainly helps with learnability).
A: Hey, I am biased in that I now work with these guys, but Carsonified offers some great resources for people learning and improving their jQuery skill set.
Just next Monday there is an online conference on jQuery featuring John Resig himself - http://carsonified.com/online-conferences/jquery/
Also, they now offer video tutorials via their membership scheme on the Think Vitamin blog.
I know there's a lot of free resource out there; I guess the difference here is the quality of the content you get. Hope it's useful!
A: The links below may be helpful for you if you know SQL (CSS selectors only):
http://karticles.com/2011/06/learning-jquery-with-sql-basic-selectors
http://karticles.com/2011/06/learning-jquery-with-sql-attribute-selectors
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "126"
} |
Q: NHibernate 1.2 to 2.0 migration What kinds of considerations are there for migrating an application from NHibernate 1.2 to 2.0? What are breaking changes vs. recommended changes?
Are there mapping issues?
A: Breaking changes in NHibernate 2.0
If you have good test coverage it's busywork.
Edit: We upgraded this morning. There is nothing major. You have to Flush() the session after you delete. The Expression namespace got renamed to Criterion. All these are covered in the link above. Mappings need no change. It's quite transparent. Oh, and transactions everywhere, but you were probably doing that already.
By the way, here's an interesting look at the changes: http://codebetter.com/blogs/patricksmacchia/archive/2008/08/26/nhibernate-2-0-changes-overview.aspx
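To make the flavor of the changes concrete, here is a small before/after sketch (the entity and variable names are made up):
// NHibernate 1.2:
// using NHibernate.Expression;
session.Delete(customer);

// NHibernate 2.0:
// using NHibernate.Criterion;  (namespace renamed)
using (ITransaction tx = session.BeginTransaction())
{
    session.Delete(customer);
    session.Flush();  // the delete is not sent to the database without a flush/commit
    tx.Commit();
}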
A: I found the answer here:
http://blog.domaindotnet.com/2008/08/24/nhibernate-20-gold-released-must-wait-for-linq-to-nhibernate/
gold release 2.0.0.GA
BREAKING CHANGES from NH1.2.1GA to NH2.0.0
Infrastructure
*
*.NET 1.1 is no longer supported
*Nullables.NHibernate is no longer supported (use nullable types of .NET 2.0)
*Contrib moved. New location: http://sourceforge.net/projects/nhcontrib
Compile time
*
*NHibernate.Expression namespace was renamed to NHibernate.Criterion
*IInterceptor has additional methods (IsUnsaved was renamed IsTransient)
*INamingStrategy
*IType
*IEntityPersister
*IVersionType
*IBatcher
*IUserCollectionType
*IEnhancedUserType
*IPropertyAccessor
*ValueTypeType renamed to PrimitiveType
Possible breaking changes for external frameworks
*
*Various classes were moved between namespaces
*Various classes have been renamed (to match Hibernate 3.2 names)
*ISession interface has additional methods
*ICacheProvider
*ICriterion
*CriteriaQueryTranslator
Initialization time
*
*The <nhibernate> section in App.config is no longer supported and will be ignored. The configuration schema for the configuration file and App.config is now identical, and the App.config section name is <hibernate-configuration>
*<hibernate-configuration> has a different schema and all property names are checked
*Configuration properties are no longer prefixed by "hibernate."; if before you would specify "hibernate.dialect", now you specify just "dialect"
*All named queries will be validated at initialization time; an exception will be thrown if any is not valid (can be disabled if needed)
*Stricter checks for proxying classes (all public methods must be virtual)
Run time
*
*SaveOrUpdateCopy() returns a new instance of the entity without changing the original
*AutoFlush will not occur outside a transaction - database transactions are never optional; all communication with the database must occur inside a transaction, whether you read or write data
*NHibernate will return long for count(*) queries on SQL Server
*<formula> must contain parentheses when needed
*These HQL function names may cause conflicts; the reserved names are: substring, locate, trim, length, bit_length, coalesce, nullif, abs, mod, sqrt, upper, lower, cast, extract, concat, current_timestamp, sysdate, second, minute, hour, day, month, year, str
*<any>: when meta-type="class", the persistent type is a string containing the Class.FullName; in order to set a parameter in a query you must use SetParameter("paraName", typeof(YourClass).FullName, NHibernateUtil.ClassMetaType)
Mapping
*
*<any>: the default meta-type is "string" (was "class")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Could you recommend some guides about Epoll on Linux I need to learn about epoll on Linux.
Could you recommend manuals or guides about the epoll facility?
More detailed guides would be better; ideally they would have some examples.
Please help me, and thank you for reading.
A: Here's an introduction to Epoll, a pretty basic tutorial: http://blog.kovyrin.net/2006/04/13/epoll-asynchronous-network-programming/
A more complete example can be found here: https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/
Also, see the man pages, epoll(7) in particular.
A: #include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/epoll.h>
#define PORT 1500
#define MAX_CON (1200)
static struct epoll_event *events;
int main(int argc, char *argv[])
{
    struct sockaddr_in serveraddr;
    struct sockaddr_in clientaddr;
    int listener;
    int newfd;
    char buf[1024];
    int nbytes;
    socklen_t addrlen;
    int yes = 1; /* must be initialized before setsockopt() */
    int epfd = -1;
    int res = -1;
    struct epoll_event ev;
    int index = 0;
    int client_fd = -1;
    if((listener = socket(AF_INET, SOCK_STREAM, 0)) == -1)
    {
        perror("Server-socket() error lol!");
        exit(1);
    }
    if(setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int)) == -1)
    {
        perror("Server-setsockopt() error lol!");
        exit(1);
    }
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_addr.s_addr = INADDR_ANY;
    serveraddr.sin_port = htons(PORT);
    memset(&(serveraddr.sin_zero), '\0', 8);
    if(bind(listener, (struct sockaddr *)&serveraddr, sizeof(serveraddr)) == -1)
    {
        perror("Server-bind() error lol!");
        exit(1);
    }
    if(listen(listener, 10) == -1)
    {
        perror("Server-listen() error lol!");
        exit(1);
    }
    events = calloc(MAX_CON, sizeof(struct epoll_event));
    if ((epfd = epoll_create(MAX_CON)) == -1) {
        perror("epoll_create");
        exit(1);
    }
    /* register the listening socket for read events */
    ev.events = EPOLLIN;
    ev.data.fd = listener;
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listener, &ev) < 0) {
        perror("epoll_ctl");
        exit(1);
    }
    for(;;)
    {
        /* block until at least one descriptor is ready; res is the count */
        res = epoll_wait(epfd, events, MAX_CON, -1);
        for (index = 0; index < res; index++) {
            client_fd = events[index].data.fd;
            if(client_fd == listener)
            {
                /* new incoming connection: accept it and register it */
                addrlen = sizeof(clientaddr);
                if((newfd = accept(listener, (struct sockaddr *)&clientaddr, &addrlen)) == -1)
                {
                    perror("Server-accept() error lol!");
                }
                else
                {
                    ev.events = EPOLLIN;
                    ev.data.fd = newfd;
                    if (epoll_ctl(epfd, EPOLL_CTL_ADD, newfd, &ev) < 0) {
                        perror("epoll_ctl");
                        exit(1);
                    }
                }
            }
            else if (events[index].events & EPOLLHUP)
            {
                /* peer hung up: unregister and close */
                if (epoll_ctl(epfd, EPOLL_CTL_DEL, client_fd, &ev) < 0) {
                    perror("epoll_ctl");
                }
                close(client_fd);
            }
            else if (events[index].events & EPOLLIN)
            {
                if((nbytes = recv(client_fd, buf, sizeof(buf), 0)) <= 0)
                {
                    /* zero means the peer closed the socket; negative is an error */
                    if(nbytes < 0) {
                        printf("recv() error lol! %d", client_fd);
                        perror("");
                    }
                    if (epoll_ctl(epfd, EPOLL_CTL_DEL, client_fd, &ev) < 0) {
                        perror("epoll_ctl");
                    }
                    close(client_fd);
                }
                else
                {
                    /* echo the data back to the client */
                    if(send(client_fd, buf, nbytes, 0) == -1)
                        perror("send() error lol!");
                }
            }
        }
    }
    return 0;
}
I wrote this program for testing, and I was able to connect more than 80k connections and found the average system load to be only 0.27%.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Is there a tool for reformatting C# code? I am looking for a (preferably) command-line tool that can reformat the C# source code on a directory tree. Ideally, I should be able to customize the formatting. Bonus points if the tool can be run on Mono (or Linux).
A: You could also try NArrange to reformat your code. The formatting options it supports are still pretty limited, but it can process an entire directory and is a command-line tool. Also, NArrange runs under Mono.
A: You could give Artistic Style a try. It requires Perl to be installed though.
It's got a decent list of formatting options, and supports C and Java as well.
A: This isn't command-line, Mono or Linux, but it's something: I've been using ReSharper (made by JetBrains) and it's rather good. It's a Visual Studio plugin, so I'm guessing it's not your cup of tea.
A: Take a look at Polystyle
A: I use Emacs and csharp-mode. One keystroke and the module is reformatted according to my desires.
Before:
After:
A: See our SD C# Formatter. Uses a full C# parser and prettyprinter; it will not break your code.
EDIT: September, 2013: Now runs on Windows and Linux. Covers C# v5.
A: For completeness, check out http://uncrustify.sourceforge.net/
A: Check out astyle. I am sure the KDE guys use it, but the website said that it supports C#.
A: I am going to second the ReSharper suggestion. I can't live without it.
The built-in reformatting is under ReSharper → Tools → Cleanup Code menu and is bound to Ctrl + E, Ctrl + C by default.
A: Maybe you could take a look at this free Addin for Visual Studio 2010/2012 i recently wrote :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: ASP.NET JavaScript Callbacks Without Full PostBacks? I'm about to start a fairly Ajax heavy feature in my company's application. What I need to do is make an Ajax callback every few minutes a user has been on the page.
*
*I don't need to do any DOM updates before, after, or during the callbacks.
*I don't need any information from the page, just from a site cookie which should always be sent with requests anyway, and an ID value.
What I'm curious to find out is whether there is any clean and simple way to make a JavaScript Ajax callback to an ASP.NET page without posting back the rest of the information on the page. I'd like to not have to do that if possible.
I really just want to be able to call a single method on the page, nothing else.
Also, I'm restricted to ASP.NET 2.0 so I can't use any of the new 3.5 framework ASP AJAX features, although I can use the ASP AJAX extensions for the 2.0 framework.
UPDATE
I've decided to accept DanP's answer as it seems to be exactly what I'm looking for. Our site already uses jQuery for some things so I'll probably use jQuery for making requests since in my experience it seems to perform much better than ASP's AJAX framework does.
What do you think would be the best method of transferring data to the IHttpHandler? Should I add variables to the query string or POST the data I need to send?
The only thing I think I have to send is a single ID, but I can't decide what the best method is to send the ID and have the IHttpHandler handle it. I'd like to come up with a solution that would prevent a person with basic computer skills from accidentally or intentionally accessing the page directly or repeating requests. Is this possible?
A: If you don't want to create a blank page, you could call an IHttpHandler (.ashx) file:
public class RSSHandler : IHttpHandler
{
public void ProcessRequest (HttpContext context)
{
context.Response.ContentType = "text/xml";
string sXml = BuildXMLString(); //not showing this function,
//but it creates the XML string
context.Response.Write( sXml );
}
public bool IsReusable
{
get { return true; }
}
}
A: You should use ASP.Net Callbacks which were introduced in Asp.Net 2.0. Here is an article that should get you set to go:
Implementing Client Callbacks Programmatically Without Postbacks in ASP.NET Web Pages
Edit: Also look at this:
ICallback & JSON Based JavaScript Serialization
A:
What do you think would be the best method of transferring data to the IHttpHandler? Should I add variables to the query string or POST the data I need to send? The only thing I think I have to send is a single ID, but I can't decide what the best method is to send the ID and have the IHttpHandler handle it. I'd like to come up with a solution that would prevent a person with basic computer skills from accidentally or intentionally accessing the page directly
Considering the callback is buried in the client code, it would take someone with equal determination to get either the querystring or the POST request. I.e., if they have Firebug, you're equally screwed.
So, in that case, do whatever is easiest to you (Hint: I'd just use the querystring).
To handle repeating requests/direct access, I'd generate a key that is sent with each request. Perhaps a hash of the current time (Fuzzy, I'd go down to minutes, but not seconds due to network latency) + the client IP.
Then in the HTTPHandler, perform the same hash, and only run if they match.
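A rough sketch of that idea in C# (the salt and scheme here are made up for illustration):
// Generate a token the client must echo back with each request.
private static string MakeToken(string clientIp)
{
    // Hash of the current minute + client IP + a server-side secret.
    string raw = DateTime.UtcNow.ToString("yyyyMMddHHmm") + clientIp + "my-secret-salt";
    using (System.Security.Cryptography.MD5 md5 = System.Security.Cryptography.MD5.Create())
    {
        byte[] hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(raw));
        return Convert.ToBase64String(hash);
    }
}
// In the IHttpHandler's ProcessRequest, recompute MakeToken(context.Request.UserHostAddress)
// and only proceed if it matches the token sent with the request.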
A: You are not just restricted to ASP.NET AJAX but can use any 3rd party library like jQuery, YUI etc to do the same thing. You can then just make a request to a blank page on your site which should return the headers that contain the cookies.
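For example, a minimal jQuery polling sketch could look like this (the handler URL, the five-minute interval, and the currentId variable are all placeholders):
// Fire a lightweight GET every five minutes; the site cookie rides along
// with the request automatically.
setInterval(function () {
    $.get("KeepAlive.ashx", { id: currentId });
}, 5 * 60 * 1000);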
A: My vote is with the HTTPHandler suggestion as well. I utilize this often. Because it does not invoke an instance of the page class, it is very lightweight.
All of the ASP.NET AJAX framework tricks actually instantiate and create the entire page again on the backend per call, so they are huge resource hogs.
Hence, my typical style of XmlHttpRequest back to a HttpHandler.
A: Since you are using only ASP.NET 2.0 I would recommend AjaxPro, which will create the .ashx file. All you have to do is pull the AjaxPro.dll into your web site. I developed an entire application with AjaxPro and found it worked very well. It uses serialization to pass objects back and forth.
This is just a sample on how to simply use it.
namespace MyDemo
{
public class Default
{
protected void Page_Load(object sender, EventArgs e)
{
AjaxPro.Utility.RegisterTypeForAjax(typeof(Default));
}
[AjaxPro.AjaxMethod]
public DateTime GetServerTime()
{
return DateTime.Now;
}
}
}
To call it via JavaScript it is as simple as
function getServerTime()
{
MyDemo._Default.GetServerTime(getServerTime_callback); // asynchronous call
}
// This method will be called after the method has been executed
// and the result has been sent to the client.
function getServerTime_callback(res)
{
alert(res.value);
}
EDIT
You also have to register the AjaxPro handler in your web.config. AjaxPro also works well side by side with ASP.NET AJAX, and you can pass C# objects from client to server if the class is marked as [Serializable]
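The registration typically looks something like this (the exact verb/path may vary by AjaxPro version):
<configuration>
  <system.web>
    <httpHandlers>
      <add verb="POST,GET" path="ajaxpro/*.ashx"
           type="AjaxPro.AjaxHandlerFactory, AjaxPro" />
    </httpHandlers>
  </system.web>
</configuration>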
A: Just to offer a different perspective, you could also use a PageMethod on your main page. Dave Ward has a nice post that illustrates this. Essentially you use jQuery ajax post to call the method, as illustrated in Dave's post:
$.ajax({
type: "POST",
url: "Default.aspx/GetFeedburnerItems",
// Pass the "Count" parameter, via JSON object.
data: "{'Count':'7'}",
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(msg) {
BuildTable(msg.d);
}
});
No need for Asp.Net Ajax extensions at all.
A: You can also use WebMethods, which are built into the ASP.NET AJAX library. You simply create a static method in the page's code-behind and call that from your Ajax.
There's a pretty basic example of how to do it here
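For reference, the shape of such a page method is roughly this (the method name and body are placeholders, and the page's ScriptManager needs EnablePageMethods="true" so the client-side PageMethods proxy is generated):
// In the page's code-behind: a public static method exposed to client script.
[System.Web.Services.WebMethod]
public static string KeepAlive(string id)
{
    // Look up the ID, touch a timestamp, etc.
    return "ok";
}
It can then be called from JavaScript with PageMethods.KeepAlive(currentId, onSuccess).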
A: You should use a web service (.asmx). With Microsoft's ASP.NET AJAX you can even auto-generate the stubs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Avoid traffic shaping by using ssh on port 443 I heard that if you use port 443 (the port usually used for https) for ssh, the encrypted packets look the same to your isp.
Could this be a way to avoid traffic shaping/throttling?
A: I'm not sure it's true that any given ssh packet "looks" the same as any given https packet.
However, over their lifetimes they don't behave the same way. The session setup and teardown don't look alike (SSH offers a plain-text banner during the initial connect, for one thing). Also, wouldn't an https session typically be short-lived? Connect, get your data, disconnect, whereas ssh would connect and persist for long periods of time? I think perhaps using 443 instead of 22 might get past naive filters, but I don't think it would fool someone specifically looking for active attempts to bypass their filters.
Is throttling ssh a common occurrence? I've experienced people blocking it, but I don't think I've experienced throttling. Heck, I usually use ssh tunnels to bypass other blocks since people don't usually care about it.
A: 443, when used for HTTPS, relies on SSL (not SSH) for its encryption. SSH looks different than SSL, so it would depend on what your ISP was actually looking for, but it is entirely possible that they could detect the difference. In my experience, though, you'd be more likely to see some personal firewall software block that sort of behavior since it's nonstandard. Fortunately, it's pretty easy to write an SSL tunnel using a SecureSocket of some type.
In general, they can see how much bandwidth you are using, whether or not the traffic is encrypted. They'll still know the endpoints of the connection, how long it's been open, and how many packets have been sent, so if they base their shaping metrics on this sort of data, there's really nothing you can do to prevent them from throttling your connection.
A: Your ISP is probably more likely to traffic shape port 443 over 22, seeing as 22 requires more real-time responsiveness.
Not really a programming question, though; maybe you'll get a more accurate response somewhere else.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Delete Amazon S3 buckets? I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
A: With s3cmd:
Create a new empty directory
s3cmd sync --delete-removed empty_directory s3://yourbucket
A: This may be a bug in S3Fox, because it is generally able to delete items recursively. However, I'm not sure if I've ever tried to delete a whole bucket and its contents at once.
The JetS3t project, as mentioned by Stu, includes a Java GUI applet you can easily run in a browser to manage your S3 buckets: Cockpit. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will help you deal with your troublesome bucket. Though it will require you to delete the objects first, then the bucket.
Disclaimer: I'm the author of JetS3t and Cockpit
A: SpaceBlock also makes it simple to delete s3 buckets - right click bucket, delete, wait for job to complete in transfers view, done.
This is the free and open source windows s3 front-end that I maintain, so shameless plug alert etc.
A: I've implemented bucket-destroy, a multi threaded utility that does everything it takes to delete a bucket. I handle non empty buckets, as well as version enabled bucket keys.
You can read the blog post here http://bytecoded.blogspot.com/2011/01/recursive-delete-utility-for-version.html and the instructions here http://code.google.com/p/bucket-destroy/
I've successfully deleted with it a bucket that contains double '//' in the key name, versioned keys, and DeleteMarker keys. Currently I'm running it on a bucket that contains ~40,000,000 keys; so far I've been able to delete 1,200,000 in several hours on an m1.large. Note that the utility is multi threaded but does not (yet) implement shuffling (which would allow horizontal scaling, launching the utility on several machines).
A: If you use Amazon's console and need to clear out a bucket on a one-time basis: browse to your bucket, select the top key, scroll to the bottom, hold Shift, and click the bottom key. That selects everything in between, and you can then right-click and delete.
A: If you have ruby (and rubygems) installed, install aws-s3 gem with
gem install aws-s3
or
sudo gem install aws-s3
create a file delete_bucket.rb:
require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
:access_key_id => 'access_key_id',
:secret_access_key => 'secret_access_key')
AWS::S3::Bucket.delete("bucket_name", :force => true)
and run it:
ruby delete_bucket.rb
Since Bucket#delete returned timeout exceptions a lot for me, I have expanded the script:
require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
:access_key_id => 'access_key_id',
:secret_access_key => 'secret_access_key')
while AWS::S3::Bucket.find("bucket_name")
begin
AWS::S3::Bucket.delete("bucket_name", :force => true)
rescue
end
end
A: I guess the easiest way would be to use S3fm, a free online file manager for Amazon S3. No applications to install, no 3rd party web sites registrations. Runs directly from Amazon S3, secure and convenient.
Just select your bucket and hit delete.
A: One technique that can be used to avoid this problem is putting all objects in a "folder" in the bucket, allowing you to just delete the folder then go along and delete the bucket. Additionally, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:
s3cmd rb --force s3://bucket-name
A: Remember that S3 buckets need to be empty before they can be deleted. The good news is that most 3rd party tools automate this process. If you are running into problems with S3Fox, I recommend trying S3FM for a GUI or S3Sync for the command line. Amazon has a great article describing how to use S3Sync. After setting up your variables, the key command is
./s3cmd.rb deleteall <your bucket name>
Deleting buckets with lots of individual files tends to crash a lot of S3 tools because they try to display a list of all files in the directory. You need to find a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes files in a S3 bucket in 1000 file chunks and does not crash when trying to open large buckets like s3Fox and S3FM.
I've also found a few scripts that you can use for this purpose. I haven't tried these scripts yet but they look pretty straightforward.
RUBY
require 'aws/s3'
AWS::S3::Base.establish_connection!(
:access_key_id => 'your access key',
:secret_access_key => 'your secret key'
)
bucket = AWS::S3::Bucket.find('the bucket name')
while(!bucket.empty?)
begin
puts "Deleting objects in bucket"
bucket.objects.each do |object|
object.delete
puts "There are #{bucket.objects.size} objects left in the bucket"
end
puts "Done deleting objects"
rescue SocketError
puts "Had socket error"
end
end
PERL
#!/usr/bin/perl
use Net::Amazon::S3;
my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });
my $bucket = $s3->bucket($bucket_name);
print "Incrementally deleting the contents of $bucket_name\n";
my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
print "Loading up to $increment keys...\n";
$response = $bucket->list({'max-keys' => $increment, }) or die $s3->err . ": " . $s3->errstr . "\n";
$deleted = scalar(@{ $response->{keys} }) ;
$total_deleted += $deleted;
print "Deleting $deleted keys($total_deleted total)...\n";
foreach my $key ( @{ $response->{keys} } ) {
my $key_name = $key->{key};
$bucket->delete_key($key->{key}) or die $s3->err . ": " . $s3->errstr . "\n";
}
}
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
SOURCE: Tarkblog
Hope this helps!
A: recent versions of s3cmd have --recursive
e.g.,
~/$ s3cmd rb --recursive s3://bucketwithfiles
http://s3tools.org/kb/item5.htm
A: It is finally possible to delete all the files in one go using the new Lifecycle (expiration) rules feature. You can even do it from the AWS console.
Simply right click on the bucket name in AWS console, select "Properties" and then in the row of tabs at the bottom of the page select "lifecycle" and "add rule". Create a lifecycle rule with the "Prefix" field set blank (blank means all files in the bucket, or you could set it to "a" to delete all files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old they should all get deleted, then you can delete the bucket.
I only just tried this for the first time so I'm still waiting to see how quickly the files get deleted (it wasn't instant but presumably should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed!
A: I hacked together a script for doing it from Python; it successfully removed my 9000 objects. See this page:
https://efod.se/blog/archive/2009/08/09/delete-s3-bucket
A: One more shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and completes in a fraction of the time:
http://github.com/sfeley/s3nuke/
This is one that works much faster in Ruby 1.9 because of the way threads are handled.
A: This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes all of the things I've determined can go wrong in a comment at the top. Here's the current version of the script (if I change it, I'll put a new version at the URL but probably not here).
#!/usr/bin/perl
# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.
# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $
# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
#
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.
use threads;
use strict;
use warnings;
# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that.
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;
my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key
--bucket=name [--processes=#] [--wait=#] [--nodelete]
Specify --processes to indicate how many deletes to perform in
parallel. You're limited by RAM (to hold the parallel threads) and
bandwidth for the S3 delete requests.
Specify --wait to indicate seconds to require the bucket to be verified
empty. This is necessary if you create a huge number of objects and then
try to delete the bucket before they've all propagated to all the S3
servers (I've seen a huge backlog of newly created objects take *hours* to
propagate everywhere). See the comment at the top of the script for more
information about this issue.
Specify --nodelete to empty the bucket without actually deleting it.\n";
my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;
die if (! GetOptions(
"help" => sub { print $usage; exit; },
"access-key-id=s" => \$aws_access_key_id,
"secret-access-key=s" => \$aws_secret_access_key,
"bucket=s" => \$bucket_name,
"processess=i" => \$procs,
"wait=i" => \$wait,
"delete!" => \$delete,
));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));
my $increment = 0;
print "Incrementally deleting the contents of $bucket_name\n";
$| = 1;
my(@procs, $current);
for (1..$procs) {
my($read_from_parent, $write_to_child);
my($read_from_child, $write_to_parent);
pipe($read_from_parent, $write_to_child) or die;
pipe($read_from_child, $write_to_parent) or die;
threads->create(sub {
close($read_from_child);
close($write_to_child);
my $old_select = select $write_to_parent;
$| = 1;
select $old_select;
&child($read_from_parent, $write_to_parent);
}) or die;
close($read_from_parent);
close($write_to_parent);
my $old_select = select $write_to_child;
$| = 1;
select $old_select;
push(@procs, [$read_from_child, $write_to_child]);
}
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);
while ($deleted > 0) {
$start = time;
print "\nLoading ", ($increment ? "up to $increment" :
"as many as possible")," keys...\n";
my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()})
or die $s3->err . ": " . $s3->errstr . "\n";
$deleted = scalar(@{ $response->{keys} }) ;
if (! $deleted) {
if ($wait and ! $waited) {
my $delta = $wait - ($start - $last_start);
if ($delta > 0) {
print "Waiting $delta second(s) to confirm bucket is empty\n";
sleep($delta);
$waited = 1;
$deleted = 1;
next;
}
else {
last;
}
}
else {
last;
}
}
else {
$waited = undef;
}
$total_deleted += $deleted;
print "\nDeleting $deleted keys($total_deleted total)...\n";
$current = 0;
foreach my $key ( @{ $response->{keys} } ) {
my $key_name = $key->{key};
while (! &send(escape($key_name) . "\n")) {
print "Thread $current died\n";
die "No threads left\n" if (@procs == 1);
if ($current == @procs-1) {
pop @procs;
$current = 0;
}
else {
$procs[$current] = pop @procs;
}
}
$current = ($current + 1) % @procs;
threads->yield();
}
print "Sending sync message\n";
for ($current = 0; $current < @procs; $current++) {
if (! &send("\n")) {
print "Thread $current died sending sync\n";
if ($current == @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
print "Reading sync response\n";
for ($current = 0; $current < @procs; $current++) {
if (! &receive()) {
print "Thread $current died reading sync\n";
if ($current == @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
}
continue {
$last_start = $start;
}
if ($delete) {
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
}
sub send {
my($str) = @_;
my $fh = $procs[$current]->[1];
print($fh $str);
}
sub receive {
my $fh = $procs[$current]->[0];
scalar <$fh>;
}
sub child {
my($read, $write) = @_;
threads->detach();
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
while (my $key = <$read>) {
if ($key eq "\n") {
print($write "\n") or die;
next;
}
chomp $key;
$key = unescape($key);
if ($key =~ /[\r\n]/) {
my(@parts) = split(/\r\n|\r|\n/, $key, -1);
my(@guesses) = shift @parts;
foreach my $part (@parts) {
@guesses = (map(($_ . "\r\n" . $part,
$_ . "\r" . $part,
$_ . "\n" . $part), @guesses));
}
foreach my $guess (@guesses) {
if ($bucket->get_key($guess)) {
$key = $guess;
last;
}
}
}
$bucket->delete_key($key) or
die $s3->err . ": " . $s3->errstr . "\n";
print ".";
threads->yield();
}
return;
}
A: I am one of the developers on the Bucket Explorer team. We provide different options to delete a bucket, depending on the user's choice...
1) Quick Delete - this option deletes your data from the bucket in chunks of 1000.
2) Permanent Delete - this option deletes objects in a queue.
How to delete Amazon S3 files and bucket?
A: Amazon recently added a new feature, "Multi-Object Delete", which allows up to 1,000 objects to be deleted at a time with a single API request. This should allow simplification of the process of deleting huge numbers of files from a bucket.
The documentation for the new feature is available here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/DeletingMultipleObjects.html
A: I've always ended up using their C# API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken within it at the moment. I'm sure that many of the other S3 tools can do it as well, though.
A: Delete all of the objects in the bucket first. Then you can delete the bucket itself.
Apparently, one cannot delete a bucket with objects in it and S3Fox does not do this for you.
I've had other little issues with S3Fox myself, like this, and now use a Java based tool, jets3t which is more forthcoming about error conditions. There must be others, too.
A: You must make sure you have the correct write permission set for the bucket, and that the bucket contains no objects.
Some useful tools that can assist your deletion: CrossFTP, which lets you view and delete buckets like an FTP client, and the jets3t tool mentioned above.
A: I'll have to have a look at some of these alternative file managers. I've used (and like) BucketExplorer, which you can get from - surprisingly - http://www.bucketexplorer.com/.
It's a 30 day free trial, then (currently) costing US$49.99 per licence (US$49.95 on the purchase cover page).
A: Try https://s3explorer.appspot.com/ to manage your S3 account.
A: This is what I use. Just simple ruby code.
case bucket.size
when 0
puts "Nothing left to delete"
when 1..1000
bucket.objects.each do |item|
item.delete
puts "Deleting - #{bucket.size} left"
end
end
A: Use the Amazon web management console, with Google Chrome for speed. It deleted the objects a lot faster than Firefox (about 10 times faster). I had 60,000 objects to delete.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
} |
Q: How do I save a datagrid to excel in vb.net? I know that this should be easy, but how do I export/save a DataGridView to Excel?
A: You can use this library for more detailed formatting
http://www.carlosag.net/Tools/ExcelXmlWriter/
There are samples in the page.
A: Does it need to be a native XLS file? Your best bet is probably just to export the data to a CSV file, which is plain text and reasonably easy to generate. CSVs open in Excel by default for most users so they won't know the difference.
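For instance, a minimal VB.NET sketch (the grid name and output path are placeholders; it quotes every field and doubles embedded quotes, which covers most data):
' Write the grid out as CSV; Excel opens .csv files by default.
Using writer As New System.IO.StreamWriter("C:\export.csv")
    For Each row As DataGridViewRow In grid.Rows
        If row.IsNewRow Then Continue For
        Dim fields(row.Cells.Count - 1) As String
        For c As Integer = 0 To row.Cells.Count - 1
            Dim value As String = ""
            If row.Cells(c).Value IsNot Nothing Then
                value = row.Cells(c).Value.ToString()
            End If
            fields(c) = """" & value.Replace("""", """""") & """"
        Next
        writer.WriteLine(String.Join(",", fields))
    Next
End Using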
A: I'd warn against doing a double for loop to pull out each data cell's value and writing it out individually to an Excel cell. Instead, use a 2D object array: loop through your datagrid saving all your data there, and you'll then be able to set an Excel range equal to that 2D object array.
This will be several orders of magnitude faster than writing excel cell by cell. Some reports that I've been working on that used to take two hours simply to export have been cut down to under a minute.
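A rough sketch of that approach with the Excel interop (a DataGridView named grid and an already-open worksheet xlSheet are assumed to exist):
' Stage everything in a 2D object array first.
Dim rowCount As Integer = grid.Rows.Count
Dim colCount As Integer = grid.Columns.Count
Dim data(rowCount - 1, colCount - 1) As Object

For r As Integer = 0 To rowCount - 1
    For c As Integer = 0 To colCount - 1
        data(r, c) = grid.Rows(r).Cells(c).Value
    Next
Next

' One assignment pushes the whole block across the COM boundary,
' instead of rowCount * colCount separate calls.
Dim target As Excel.Range = xlSheet.Range( _
    xlSheet.Cells(1, 1), xlSheet.Cells(rowCount, colCount))
target.Value2 = data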
A: I setup the gridview and then used the html text writer object to spit it out to a .xls file, like so:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
'get the select command of the gridview
sqlGridview.SelectCommand = Session("strSql")
gvCompaniesExport.DataBind()
lblTemp.Text = Session("strSql")
'do the export
doExport()
'close the window
Dim closeScript As String = "<script language='javascript'> window.close() </scri"
closeScript = closeScript & "pt>"
'split the ending script tag across a concatenate to keep it from causing problems
'this will write it to the asp.net page and fire it off, closing the window
Page.RegisterStartupScript("closeScript", closeScript)
End Sub
Public Sub doExport()
Response.AddHeader("content-disposition", "attachment;filename=IndianaCompanies.xls")
Response.ContentType = "application/vnd.ms-excel"
Response.Charset = ""
Me.EnableViewState = False
Dim objStrWriter As New System.IO.StringWriter
Dim objHtmlTextWriter As New System.Web.UI.HtmlTextWriter(objStrWriter)
'Get the gridview HTML from the control
gvCompaniesExport.RenderControl(objHtmlTextWriter)
'writes the dg info
Response.Write(objStrWriter.ToString())
Response.End()
End Sub
A: I use this all the time:
public static class GridViewExtensions
{
public static void ExportToExcel(this GridView gridView, string fileName, IEnumerable<string> excludeColumnNames)
{
//Prepare Response
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.AddHeader("content-disposition",
string.Format("attachment; filename={0}", fileName));
HttpContext.Current.Response.ContentType = "application/ms-excel";
using (StringWriter sw = new StringWriter())
{
using (HtmlTextWriter htw = new HtmlTextWriter(sw))
{
// Create a table to contain the grid
Table table = new Table();
// include the gridline settings
table.GridLines = gridView.GridLines;
// add the header row to the table
if (gridView.HeaderRow != null)
{
PrepareControlForExport(gridView.HeaderRow);
table.Rows.Add(gridView.HeaderRow);
}
// add each of the data rows to the table
foreach (GridViewRow row in gridView.Rows)
{
PrepareControlForExport(row);
table.Rows.Add(row);
}
// add the footer row to the table
if (gridView.FooterRow != null)
{
PrepareControlForExport(gridView.FooterRow);
table.Rows.Add(gridView.FooterRow);
}
// Remove unwanted columns (header text listed in removeColumnList arraylist)
foreach (DataControlField column in gridView.Columns)
{
if (excludeColumnNames != null && excludeColumnNames.Contains(column.HeaderText))
{
column.Visible = false;
}
}
// render the table into the htmlwriter
table.RenderControl(htw);
// render the htmlwriter into the response
HttpContext.Current.Response.Write(sw.ToString());
HttpContext.Current.Response.End();
}
}
}
/// <summary>
/// Replace any of the contained controls with literals
/// </summary>
/// <param name="control"></param>
private static void PrepareControlForExport(Control control)
{
for (int i = 0; i < control.Controls.Count; i++)
{
Control current = control.Controls[i];
if (current is LinkButton)
{
control.Controls.Remove(current);
control.Controls.AddAt(i, new LiteralControl((current as LinkButton).Text));
}
else if (current is ImageButton)
{
control.Controls.Remove(current);
control.Controls.AddAt(i, new LiteralControl((current as ImageButton).AlternateText));
}
else if (current is HyperLink)
{
control.Controls.Remove(current);
control.Controls.AddAt(i, new LiteralControl((current as HyperLink).Text));
}
else if (current is DropDownList)
{
control.Controls.Remove(current);
control.Controls.AddAt(i, new LiteralControl((current as DropDownList).SelectedItem.Text));
}
else if (current is CheckBox)
{
control.Controls.Remove(current);
control.Controls.AddAt(i, new LiteralControl((current as CheckBox).Checked ? "True" : "False"));
}
if (current.HasControls())
{
PrepareControlForExport(current);
}
}
}
}
A: Try this out; it's a touch simpler than Brendan's but not as 'feature rich':
Protected Sub btnExport_Click(ByVal sender As Object, ByVal e As System.EventArgs)
'Export to excel
Response.Clear()
Response.Buffer = True
Response.ContentType = "application/vnd.ms-excel"
Response.Charset = ""
Me.EnableViewState = False
Dim oStringWriter As System.IO.StringWriter = New System.IO.StringWriter
Dim oHtmlTextWriter As System.Web.UI.HtmlTextWriter = New System.Web.UI.HtmlTextWriter(oStringWriter)
Me.ClearControls(gvSearchTerms)
gvSearchTerms.RenderControl(oHtmlTextWriter)
Response.Write(oStringWriter.ToString)
Response.End()
End Sub
Private Sub ClearControls(ByVal control As Control)
Dim i As Integer = (control.Controls.Count - 1)
Do While (i >= 0)
ClearControls(control.Controls(i))
i = (i - 1)
Loop
If Not (TypeOf control Is TableCell) Then
If (Not (control.GetType.GetProperty("SelectedItem")) Is Nothing) Then
Dim literal As LiteralControl = New LiteralControl
control.Parent.Controls.Add(literal)
Try
literal.Text = CType(control.GetType.GetProperty("SelectedItem").GetValue(control, Nothing), String)
Catch ex As System.Exception
End Try
control.Parent.Controls.Remove(control)
ElseIf (Not (control.GetType.GetProperty("Text")) Is Nothing) Then
Dim literal As LiteralControl = New LiteralControl
control.Parent.Controls.Add(literal)
literal.Text = CType(control.GetType.GetProperty("Text").GetValue(control, Nothing), String)
control.Parent.Controls.Remove(control)
End If
End If
Return
End Sub
Public Overrides Sub VerifyRenderingInServerForm(ByVal control As Control)
Return
End Sub
A: You could use Crystal Reports since it is built into VS. Predefine a Crystal report with the appropriate columns and then you can use any data source you would use for a datagrid or gridview.
Dim report_source As New CrystalDecisions.Web.CrystalReportSource
report_source.ReportDocument.SetDataSource(dt) 'DT IS A DATATABLE
report_source.Report.FileName = "test.rpt"
report_source.ReportDocument.Refresh()
report_source.ReportDocument.ExportToDisk(CrystalDecisions.Shared.ExportFormatType.Excel, "c:\test.xls")
A: First Import COM library Microsoft Excel Object
Sample Code:
Public Sub exportOfficePCandWorkstation(ByRef mainForm As Form1, ByVal Location As String, ByVal WorksheetName As String)
Dim xlApp As New Excel.Application
Dim xlWorkBook As Excel.Workbook
Dim xlWorkSheet As Excel.Worksheet
Dim misValue As Object = System.Reflection.Missing.Value
Dim Header(23) As String
Dim HeaderCell(23) As String
Header = {"No.", "PC Name", "User", "E-mail", "Department/Location", "CPU Model", "CPU Processor", "CPU Speed", "CPU HDD#1", "CPU HDD#2", "CPU Memory", "CPU OS", "CPU Asset Tag", "CPU MAC Address", "Monitor 1 Model", "Monitor Serial Number", "Monitor2 Model", "Monitor2 Serial Number", "Office", "Wi-LAN", "KVM Switch", "Attachment", "Remarks", "Date and Time"}
HeaderCell = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X"}
xlWorkBook = xlApp.Workbooks.Add
xlWorkSheet = xlWorkBook.Sheets("Sheet1")
xlWorkSheet.Name = WorksheetName
xlApp.Visible = True
xlWorkSheet.Application.ActiveWindow.SplitRow = 1
xlWorkSheet.Application.ActiveWindow.SplitColumn = 3
xlWorkSheet.Application.ActiveWindow.FreezePanes = True
With xlWorkSheet
For count As Integer = 0 To 23
.Range(HeaderCell(count) & 1).Value = Header(count)
Next
With .Range("A1:X1")
.Interior.Color = 1
With .Font
.Size = 16
.ColorIndex = 2
.Name = "Times New Roman"
End With
End With
For i As Integer = 0 To mainForm.DataGridView1.RowCount - 1
For j As Integer = 0 To mainForm.DataGridView1.ColumnCount - 1
If mainForm.DataGridView1(j, i).Value.ToString = "System.Byte[]" Then
xlWorkSheet.Cells(i + 2, j + 2) = "Attached"
Else
xlWorkSheet.Cells(i + 2, j + 2) = mainForm.DataGridView1(j, i).Value.ToString()
End If
Next
.Range("A" & i + 2).Value = (i + 1).ToString
Next
With .Range("A:Z")
.EntireColumn.AutoFit()
End With
With .Range("B2:X" & mainForm.DataGridView1.RowCount + 1)
.HorizontalAlignment = Excel.XlVAlign.xlVAlignJustify
End With
With .Range("A1:A" & mainForm.DataGridView1.RowCount + 1)
.HorizontalAlignment = Excel.XlVAlign.xlVAlignCenter
End With
'-----------------------------------Insert Border Lines--------------------------------------
With .Range("A1:X" & mainForm.DataGridView1.RowCount + 1)
With .Borders(Excel.XlBordersIndex.xlEdgeLeft)
.LineStyle = Excel.XlLineStyle.xlDouble
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
With .Borders(Excel.XlBordersIndex.xlEdgeTop)
.LineStyle = Excel.XlLineStyle.xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
With .Borders(Excel.XlBordersIndex.xlEdgeBottom)
.LineStyle = Excel.XlLineStyle.xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
With .Borders(Excel.XlBordersIndex.xlEdgeRight)
.LineStyle = Excel.XlLineStyle.xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
With .Borders(Excel.XlBordersIndex.xlInsideVertical)
.LineStyle = Excel.XlLineStyle.xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
With .Borders(Excel.XlBordersIndex.xlInsideHorizontal)
.LineStyle = Excel.XlLineStyle.xlContinuous
.ColorIndex = 0
.TintAndShade = 0
.Weight = Excel.XlBorderWeight.xlThin
End With
End With
End With
xlWorkSheet.SaveAs(Location)
xlWorkBook.Close()
xlApp.Quit()
MsgBox("Export Record successful", MsgBoxStyle.Information, "Export to Excel")
End Sub
I use SaveFileDialog to create the Excel file in the specified location.
A: Here's some code we use to do this across lots of our apps. We have a special method to clean up "not exportable" columns. Additionally, we don't export columns without headers, but you can adjust that logic to your needs.
Edit: The code formatter doesn't love vb.net - you can copy/paste into visual studio and it will be fine.
Public Overloads Shared Function BuildExcel(ByVal gView As System.Web.UI.WebControls.GridView) As String
PrepareGridViewForExport(gView)
Dim excelDoc As New StringBuilder
' The XML literals here are the standard SpreadsheetML (XML Spreadsheet 2003) markup.
Dim startExcelXML As String = "<?xml version=""1.0""?>" & vbCrLf & _
"<Workbook xmlns=""urn:schemas-microsoft-com:office:spreadsheet""" & vbCrLf & _
" xmlns:o=""urn:schemas-microsoft-com:office:office""" & vbCrLf & _
" xmlns:x=""urn:schemas-microsoft-com:office:excel""" & vbCrLf & _
" xmlns:ss=""urn:schemas-microsoft-com:office:spreadsheet"">" & vbCrLf & _
"<Styles>" & vbCrLf & _
"<Style ss:ID=""Default"" ss:Name=""Normal"">" & vbCrLf & _
"<Alignment ss:Vertical=""Bottom""/>" & vbCrLf & _
"<Borders/><Font/><Interior/><NumberFormat/><Protection/>" & vbCrLf & _
"</Style>" & vbCrLf & _
"</Styles>"
Dim endExcelXML As String = "</Workbook>"
Dim rowCount As Int64 = 0
Dim sheetCount As Int16 = 1
excelDoc.Append(startExcelXML)
excelDoc.Append("<Worksheet ss:Name=""Sheet1"">")
excelDoc.Append("<Table>")
' write out column headers
excelDoc.Append("<Row>")
For x As Int32 = 0 To gView.Columns.Count - 1
'Only write out columns that have column headers.
If Not gView.Columns(x).HeaderText = String.Empty Then
excelDoc.Append("<Cell><Data ss:Type=""String"">")
excelDoc.Append(gView.Columns(x).HeaderText.ToString)
excelDoc.Append("</Data></Cell>")
End If
Next
excelDoc.Append("</Row>")
For r As Int32 = 0 To gView.Rows.Count - 1
rowCount += 1
' The XML Spreadsheet format tops out before 65536 rows, so roll to a new sheet.
If rowCount = 64000 Then
rowCount = 0
sheetCount += 1
excelDoc.Append("</Table>")
excelDoc.Append("</Worksheet>")
excelDoc.Append("<Worksheet ss:Name=""Sheet" & sheetCount & """>")
excelDoc.Append("<Table>")
End If
excelDoc.Append("<Row>")
For c As Int32 = 0 To gView.Rows(r).Cells.Count - 1
'Don't write out a column without a column header.
If Not gView.Columns(c).HeaderText = String.Empty Then
Dim XMLstring As String = gView.Rows(r).Cells(c).Text
XMLstring = XMLstring.Trim()
XMLstring = XMLstring.Replace("&", "&amp;")
XMLstring = XMLstring.Replace(">", "&gt;")
XMLstring = XMLstring.Replace("<", "&lt;")
excelDoc.Append("<Cell><Data ss:Type=""String"">")
excelDoc.Append(XMLstring)
excelDoc.Append("</Data></Cell>")
End If
Next
excelDoc.Append("</Row>")
Next
excelDoc.Append("</Table>")
excelDoc.Append("</Worksheet>")
excelDoc.Append(endExcelXML)
Return excelDoc.ToString
End Function
Shared Sub PrepareGridViewForExport(ByVal gview As System.Web.UI.Control)
' Cleans up grid for exporting. Takes links and visual elements and turns them into text.
Dim lb As New System.Web.UI.WebControls.LinkButton
Dim l As New System.Web.UI.WebControls.Literal
Dim name As String = String.Empty
For i As Int32 = 0 To gview.Controls.Count - 1
If TypeOf gview.Controls(i) Is System.Web.UI.WebControls.LinkButton Then
l.Text = CType(gview.Controls(i), System.Web.UI.WebControls.LinkButton).Text
gview.Controls.Remove(gview.Controls(i))
gview.Controls.AddAt(i, l)
ElseIf TypeOf gview.Controls(i) Is System.Web.UI.WebControls.DropDownList Then
l.Text = CType(gview.Controls(i), System.Web.UI.WebControls.DropDownList).SelectedItem.Text
gview.Controls.Remove(gview.Controls(i))
gview.Controls.AddAt(i, l)
ElseIf TypeOf gview.Controls(i) Is System.Web.UI.WebControls.CheckBox Then
l.Text = CType(gview.Controls(i), System.Web.UI.WebControls.CheckBox).Checked.ToString
gview.Controls.Remove(gview.Controls(i))
gview.Controls.AddAt(i, l)
End If
If gview.Controls(i).HasControls() Then
PrepareGridViewForExport(gview.Controls(i))
End If
Next
End Sub
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is the best solution for maintaining backup and revision control on live websites? What is the best solution for maintaining backup and revision control on live websites?
As part of my job I work with several live websites. We need an efficient means of maintaining backups of the live folders over time. Additionally, updating these sites can be a pain, especially if a change happens to break in the live environment for whatever reason.
What would be ideal would be hassle-free source control. I implemented SVN for a while which was great as a semi-solution for backup as well as revision control (easy reversion of temporary or breaking changes) etc.
Unfortunately SVN places .SVN hidden directories everywhere which cause problems, especially when other developers make folder structure changes or copy/move website directories. I've heard the argument that this is a matter of education etc. but the approach taken by SVN is simply not a practical solution for us.
I am thinking that maybe an incremental backup solution may be better.
Other possibilities include:
*
*SVK, which is command-line only which becomes a problem. Besides, I am unsure on how appropriate this would be.
*Mercurial, perhaps with some triggers to hide the distributed component which is not required in this case and would be unnecessarily complicated for other developers.
I experimented briefly with Mercurial but couldn't find a nice way to have the repository separate and kept constantly in sync with the live folder working copy. Maybe as a source control solution (making the repository and live folder the same place) combined with another backup solution this could be the way to go.
One downside of Mercurial is that it doesn't place empty folders under source control which is problematic for websites which often have empty folders as placeholder locations for file uploads etc.
*Rsync, which I haven't really investigated.
I'd really appreciate your advice on the best way to maintain backups of live websites, ideally with an easy means of retrieving past versions quickly.
Answer replies:
*
*@Kibbee:
*
*It's not so much about education as no familiarity with anything but VSS and a lack of time/effort to learn anything else.
*The xcopy/7-zip approach sounds reasonable I guess but it would quickly take up a lot of room right?
*As far as source control, I think I'd like the source control to just say that "this is the state of the folder now, I'll deal with that and if I can't match stuff up that's your fault, I'll just start new histories" rather than fail hard.
*@Steve M:
*
*Yeah that's a nicer way of doing it but would require a significant cultural change. Having said that I very much like this approach.
*@mk:
*
*Nice, I didn't think about using Rsync to deploy. Does this only upload the differences? Overwriting the entire live directory every time we make a change would be problematic due to site downtime.
I am still curious to see if there are any more traditional options
A: You can still use SVN, but instead of doing a checkout on your live environment, do an export; that way no .svn directories will be created. The downside, of course, is that no code changes on your live environment can take place. This is a good thing.
As a general rule, code changes on production systems should never be allowed. The change should be made and tested in a development/test/UAT environment, then once confirmed as OK, you can tag that code in SVN with something like RELEASE-x-x-x. Then, on the live system, export the code with that tag.
A: We use option 3. Rsync. I wrote a bash script to do this along with some extra checking, but here are the basics of what it does.
*
*Make a tag for pushing to live.
*Run svn export on that tag.
*rsync to live.
So far it has been working out. We don't have to worry about user conflicts or have a separate user for running svn up on the production machine.
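The basics of such a script might look like this (repository URL, paths, and host are placeholders):
#!/bin/bash
TAG="RELEASE-$(date +%Y%m%d-%H%M)"
REPO="http://svn.example.com/project"

# 1. Tag trunk for this push.
svn copy "$REPO/trunk" "$REPO/tags/$TAG" -m "Tagging $TAG for live push"

# 2. Export the tag; an export has no .svn directories.
svn export "$REPO/tags/$TAG" "/tmp/$TAG"

# 3. rsync the export to the live box; only differences go over the wire.
rsync -az --delete "/tmp/$TAG/" user@livehost:/var/www/site/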
A: Any source control solution you pick is going to have problems if people are moving, deleting, or adding files and not telling the source control system about it. I'm not aware of any source control system that could solve this problem.
In the case where you just can't educate the people working on the project[1], you may just have to go with daily snapshots. Something as simple as a batch file using xcopy to a network drive, possibly with 7-zip on the command line to compress it so it doesn't take up too much space, would probably be the simplest solution.
[1] I would highly disbelieve this, probably just more a case of people being too stubborn and not willing to learn, or do "extra work". Nevermind how much time source control could save them when they have to go back to previous versions, or 2 people have edited the same file.
A: rsync will only upload the differences. I haven't personally used it, but Mark Pilgrim wrote a long time ago about how it even handles binary diffs brilliantly.
svn+rsync sounds like a fantastic solution. I'll have to try that in the future.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Abstract Factory Design Pattern I'm working on an internal project for my company, and part of the project is to be able to parse various "Tasks" from an XML file into a collection of tasks to be ran later.
Because each type of Task has a multitude of different associated fields, I decided it would be best to represent each type of Task with a seperate class.
To do this, I constructed an abstract base class:
public abstract class Task
{
public enum TaskType
{
// Types of Tasks
}
public abstract TaskType Type
{
get;
}
public abstract void LoadFromXml(XmlElement task);
public abstract XmlElement CreateXml(XmlDocument currentDoc);
}
Each task inherited from this base class, and included the code necessary to create itself from the passed in XmlElement, as well as serialize itself back out to an XmlElement.
A basic example:
public class MergeTask : Task
{
public override TaskType Type
{
get { return TaskType.Merge; }
}
// Lots of Properties / Methods for this Task
public MergeTask (XmlElement elem)
{
this.LoadFromXml(elem);
}
public override void LoadFromXml(XmlElement task)
{
// Populates this Task from the Xml.
}
public override XmlElement CreateXml(XmlDocument currentDoc)
{
// Serializes this class back to xml.
}
}
The parser would then use code similar to this to create a task collection:
XmlNode taskNode = parent.SelectNode("tasks");
TaskFactory tf = new TaskFactory();
foreach (XmlNode task in taskNode.ChildNodes)
{
// Since XmlComments etc will show up
if (task is XmlElement)
{
tasks.Add(tf.CreateTask(task as XmlElement));
}
}
All of this works wonderfully, and allows me to pass tasks around using the base class, while retaining the structure of having individual classes for each task.
However, I am not happy with my code for TaskFactory.CreateTask. This method accepts an XmlElement, and then returns an instance of the appropriate Task class:
public Task CreateTask(XmlElement elem)
{
if (elem != null)
{
switch(elem.Name)
{
case "merge":
return new MergeTask(elem);
default:
throw new ArgumentException("Invalid Task");
}
}
return null;
}
Because I have to parse the XMLElement, I'm using a huge (10-15 cases in the real code) switch to pick which child class to instantiate. I'm hoping there is some sort of polymorphic trick I can do here to clean up this method.
Any advice?
A: Create a "prototype" instance of each class and put them in a hashtable inside the factory, with the string you expect in the XML as the key.
CreateTask then just finds the right prototype object by get()ing it from the hashtable and calls LoadFromXml on it.
You have to pre-load the classes into the hashtable. If you want it more automatic, you can make the classes "self-registering" by calling a static register method on the factory: put the register calls (with constructors) in static blocks on the Task subclasses.
Then all you need to do is "mention" the classes to get the static blocks run. A static array of Task subclasses would suffice, or you can use reflection to mention the classes.
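A minimal C# sketch of that registry, where Clone() is a hypothetical per-type copy hook you would add to Task:
public class TaskFactory
{
    // One prototype per XML element name, filled by the subclasses'
    // static blocks via Register().
    private static readonly Dictionary<string, Task> _prototypes =
        new Dictionary<string, Task>();

    public static void Register(string xmlName, Task prototype)
    {
        _prototypes[xmlName] = prototype;
    }

    public Task CreateTask(XmlElement elem)
    {
        if (elem == null)
            throw new ArgumentException("Invalid Task");

        Task prototype;
        if (!_prototypes.TryGetValue(elem.Name, out prototype))
            throw new ArgumentException("Invalid Task: " + elem.Name);

        Task task = prototype.Clone();   // hypothetical copy method
        task.LoadFromXml(elem);
        return task;
    }
}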
A: How do you feel about Dependency Injection? I use Ninject and the contextual binding support in it would be perfect for this situation. Look at this blog post on how you can use contextual binding with creating controllers with the IControllerFactory when they are requested. This should be a good resource on how to use it for your situation.
A: @jholland
I don't think the Type enum is needed, because I can always do something like this:
Enum?
I admit that it feels hacky. Reflection feels dirty at first, but once you tame the beast you will enjoy what it allows you to do. (Remember recursion: it feels dirty, but it's good.)
The trick is to realize, you are analyzing meta data, in this case a string provided from xml, and turning it into run-time behavior. That is what reflection is the best at.
BTW: the is operator is reflection, too.
http://en.wikipedia.org/wiki/Reflection_(computer_science)#Uses
A: @Tim, I ended up using a simplified version of your approach and ChanChans, Here is the code:
public class TaskFactory
{
private Dictionary<String, Type> _taskTypes = new Dictionary<String, Type>();
public TaskFactory()
{
// Preload the Task Types into a dictionary so we can look them up later
foreach (Type type in typeof(TaskFactory).Assembly.GetTypes())
{
if (type.IsSubclassOf(typeof(CCTask)))
{
_taskTypes[type.Name.ToLower()] = type;
}
}
}
public CCTask CreateTask(XmlElement task)
{
if (task != null)
{
string taskName = task.Name;
taskName = taskName.ToLower() + "task";
// If the Type information is in our Dictionary, instantiate a new instance of that task
Type taskType;
if (_taskTypes.TryGetValue(taskName, out taskType))
{
return (CCTask)Activator.CreateInstance(taskType, task);
}
else
{
throw new ArgumentException("Unrecognized Task:" + task.Name);
}
}
else
{
return null;
}
}
}
A: I use reflection to do this.
You can make a factory that basically expands without you having to add any extra code.
make sure you have "using System.Reflection", place the following code in your instantiation method.
public Task CreateTask(XmlElement elem)
{
if (elem != null)
{
try
{
Assembly a = typeof(Task).Assembly;
string type = string.Format("{0}.{1}Task",typeof(Task).Namespace,elem.Name);
//this is only here, so that if that type doesn't exist, this method
//throws an exception
Type t = a.GetType(type, true, true);
return a.CreateInstance(type, true) as Task;
}
catch(System.Exception)
{
throw new ArgumentException("Invalid Task");
}
}
return null;
}
Another observation is that you can make this method static and hang it off of the Task class, so that you don't have to new up the TaskFactory, and you also get to save yourself a moving piece to maintain.
A: @ChanChan
I like the idea of reflection, yet at the same time I've always been shy to use reflection. It's always struck me as a "hack" to work around something that should be easier. I did consider that approach, and then figured a switch statement would be faster for the same amount of code smell.
You did get me thinking, I don't think the Type enum is needed, because I can always do something like this:
if (CurrentTask is MergeTask)
{
// Do Something Specific to MergeTask
}
Perhaps I should crack open my GoF Design Patterns book again, but I really thought there was a way to polymorphically instantiate the right class.
A:
Enum?
I was referring to the Type property and enum in my abstract class.
Reflection it is then! I'll mark your answer as accepted in about 30 minutes, just to give time for anyone else to weigh in. It's a fun topic.
A: Thanks for leaving it open; I won't complain. It is a fun topic, and I wish you could polymorphically instantiate.
Even ruby (and its superior meta-programming) has to use its reflection mechanism for this.
A: @Dale
I have not inspected nInject closely, but from my high level understanding of dependency injection, I believe it would be accomplishing the same thing as ChanChans suggestion, only with more layers of cruft (er abstraction).
In a one-off situation where I just need it here, I think using some hand-rolled reflection code is a better approach than having an additional library to link against and only calling it in one place...
But maybe I don't understand the advantage nInject would give me here.
A: Some frameworks may rely on reflection where needed, but most of the time you use a bootstrapper, if you will, to set up what to do when an instance of an object is needed. This is usually stored in a generic dictionary. I used my own up until recently, when I started using Ninject.
With Ninject, the main thing I liked about it is that where it would otherwise need to use reflection, it doesn't; instead it takes advantage of the code generation features of .NET, which make it incredibly fast. If you feel reflection would be faster in the context you are using, it also allows you to set it up that way.
I know this maybe overkill for what you need at the moment, but I just wanted to point out dependency injection and give you some food for thought for the future. Visit the dojo for a lesson.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Databind RenderTransform Scaling in Silverlight 2 Beta 2 Anyone know if it's possible to databind the ScaleX and ScaleY of a render transform in Silverlight 2 Beta 2? Binding transforms is possible in WPF - But I'm getting an error when setting up my binding in Silverlight through XAML. Perhaps it's possible to do it through code?
<Image Height="60" HorizontalAlignment="Right"
Margin="0,122,11,0" VerticalAlignment="Top" Width="60"
Source="Images/Fish128x128.png" Stretch="Fill"
RenderTransformOrigin="0.5,0.5" x:Name="fishImage">
<Image.RenderTransform>
<TransformGroup>
<ScaleTransform ScaleX="1" ScaleY="1"/>
<SkewTransform/>
<RotateTransform/>
<TranslateTransform/>
</TransformGroup>
</Image.RenderTransform>
</Image>
I want to bind the ScaleX and ScaleY of the ScaleTransform element.
I'm getting a runtime error when I try to bind against a double property on my data context:
Message="AG_E_PARSER_BAD_PROPERTY_VALUE [Line: 1570 Position: 108]"
My binding looks like this:
<ScaleTransform ScaleX="{Binding Path=SelectedDive.Visibility}"
ScaleY="{Binding Path=SelectedDive.Visibility}"/>
I have triple verified that the binding path is correct - I'm binding a slidebar against the same value and that works just fine...
Visibility is of type double and is a number between 0.0 and 30.0. I have a value converter that scales that number down to 0.5 and 1 - I want to scale the size of the fish depending on the clarity of the water. So I don't think it's a problem with the type I'm binding against...
A: Is it a runtime error or compile-time, Jonas? Looking at the documentation, ScaleX and ScaleY are dependency properties, so you should be able to write
<ScaleTransform ScaleX="{Binding Foo}" ScaleY="{Binding Bar}" />
... where Foo and Bar are of the appropriate type.
Edit: Of course, that's the WPF documentation. I suppose it's possible that they've changed ScaleX and ScaleY to be standard properties rather than dependency properties in Silverlight. I'd love to hear more about the error you're seeing.
A: ScaleTransform doesn't have a data context, so most likely the binding is looking for SelectedDive.Visibility off of itself and not finding it. There is much in Silverlight XAML and databinding that is different from WPF...
Anyway, to solve this you will want to set up the binding in code**, or manually listen for the PropertyChanged event of your data object and set the scale in code-behind.
I would choose the latter if you wanted to do an animation/storyboard for the scale change.
** I need to check, but you may not be able to bind to it. As I recall, if the RenderTransform is not part of an animation it gets turned into a MatrixTransform and all bets are off.
A: Ah I think I see your problem. You're attempting to bind a property of type Visibility (SelectedDive.Visibility) to a property of type Double (ScaleTransform.ScaleX). WPF/Silverlight can't convert between those two types.
What are you trying to accomplish? Maybe I can help you with the XAML. What is "SelectedDive" and what do you want to happen when its Visibility changes?
A: Sorry - was looking for the answer count to go up so I didn't realise you'd edited the question with more information.
OK, so Visibility is of type Double, so the binding should work in that regard.
As a workaround, could you try binding your ScaleX and ScaleY values directly to the slider control that SelectedDive.Visibility is bound to? Something like:
<ScaleTransform ScaleX="{Binding ElementName=slider1,Path=Value}" ... />
If that works then it'll at least get you going.
Edit: Ah, I just remembered that I read once that Silverlight doesn't support the ElementName syntax in bindings, so that might not work.
A: Yeah maybe the embedded render transforms aren't inheriting the DataContext from the object they apply to. Can you force the DataContext into them? For example, give the transform a name:
<ScaleTransform x:Name="myScaler" ... />
... and then in your code-behind:
myScaler.DataContext = fishImage.DataContext;
... so that the scaler definitely shares its DataContext with the Image.
A: Ok, is the Image itself picking up the DataContext properly?
Try adding this:
<Image Tooltip="{Binding SelectedDive.Visibility}" ... />
If that compiles and runs, hover over the image and see if it displays the right value.
A: I was hoping to solve this through XAML, but it turns out Brian's suggestion was the way to go. I used Matt's suggestion to give the scale transform a name so that I can access it from code. Then I hooked the value-changed event of the slider and manually updated the ScaleX and ScaleY properties. I kept my value converter to convert from the visibility range (0-30m) to scale (0.5 to 1). The code looks like this:
private ScaleConverter converter;
public DiveLog()
{
InitializeComponent();
converter = new ScaleConverter();
visibilitySlider.ValueChanged += new
RoutedPropertyChangedEventHandler<double>(visibilitySlider_ValueChanged);
}
private void visibilitySlider_ValueChanged(object sender,
RoutedPropertyChangedEventArgs<double> e)
{
fishScale.ScaleX = (double)converter.Convert(e.NewValue,
typeof(double), null, CultureInfo.CurrentCulture);
fishScale.ScaleY = fishScale.ScaleX;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I install the php_gd2 extension in MAMP on a Mac? I'm running MAMP 1.7.2 on a Mac and I'd like to install the extension php_gd2. How do I do this? I know that on Windows using WAMP I'd simply select the php_gd2 entry in the extensions menu to activate it. How is it done when using MAMP? I know that I can do it using MacPorts but I'd prefer not to make any changes to my default OS X PHP installation.
A: You shouldn't need to install the extension. I have 1.7.2 installed and running right now and it has GD bundled (2.0.34 compatible).
From the MAMP start page, click on phpinfo and you should see a GD section.
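You can also confirm it from code; dropping a one-liner like this into any page served by MAMP will do:
<?php var_dump(extension_loaded('gd'), function_exists('imagecreatetruecolor')); ?>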
A: php.ini for MAMP 1.7.2 is located:
if using php5, here: /Applications/MAMP/conf/php5/php.ini
if php4, here: /Applications/MAMP/conf/php4/php.ini
hope that helps...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: RSync only if filesystem is mounted I want to setup a cron job to rsync a remote system to a backup partition, something like:
bash -c 'rsync -avz --delete --exclude=proc --exclude=sys root@remote1:/ /mnt/remote1/'
I would like to be able to "set it and forget it" but what if /mnt/remote1 becomes unmounted? (After a reboot or something) I'd like to error out if /mnt/remote1 isn't mounted, rather than filling up the local filesystem.
Edit:
Here is what I came up with for a script; cleanup improvements appreciated (especially for the empty then ... else branches, since I couldn't leave them empty without bash erroring out)
#!/bin/bash
DATA=data
ERROR="0"
if cut -d' ' -f2 /proc/mounts | grep -q "^/mnt/$1\$"; then
ERROR=0
else
if mount /dev/vg/$1 /mnt/$1; then
ERROR=0
else
ERROR=$?
echo "Can't backup $1, /mnt/$1 could not be mounted: $ERROR"
fi
fi
if [ "$ERROR" = "0" ]; then
if cut -d' ' -f2 /proc/mounts | grep -q "^/mnt/$1/$DATA\$"; then
ERROR=0
else
if mount /dev/vg/$1$DATA /mnt/$1/data; then
ERROR=0
else
ERROR=$?
echo "Can't backup $1, /mnt/$1/data could not be mounted."
fi
fi
fi
if [ "$ERROR" = "0" ]; then
rsync -aqz --delete --numeric-ids --exclude=proc --exclude=sys \
root@$1.domain:/ /mnt/$1/
RETVAL=$?
echo "Backup of $1 completed, return value of rsync: $RETVAL"
fi
A: if cut -d' ' -f2 /proc/mounts | grep '^/mnt/remote1$' >/dev/null; then
rsync -avz ...
fi
Get the list of mounted partitions from /proc/mounts, only match /mnt/remote1 (and if it is mounted, send grep's output to /dev/null), then run your rsync job.
Recent greps have a -q option that you can use instead of sending the output to /dev/null.
A: A quick google led me to this bash script that can check if a filesystem is mounted. It seems that grepping the output of df or mount is the way to go:
if df |grep -q '/mnt/mountpoint$'
then
echo "Found mount point, running task"
# Do some stuff
else
echo "Aborted because the disk is not mounted"
# Do some error correcting stuff
exit -1
fi
A: mountpoint seems to be the best solution to this: it exits with status 0 if a path is a mount point, so you can test its exit status directly (note that with -q it prints nothing, so wrapping it in [[ `...` ]] would always be false):
#!/bin/bash
if mountpoint -q /path; then
echo "filesystem mounted"
else
echo "filesystem not mounted"
fi
Found at LinuxQuestions.
A: *
*Copy and paste the script below to a file (e.g. backup.sh).
*Make the script executable (e.g. chmod +x backup.sh)
*Call the script as root with the format backup.sh [username (for rsync)] [backup source device] [backup source location] [backup target device] [backup target location]
!!!ATTENTION!!! Don't execute the script as root user without understanding the code!
I think there's nothing to explain. The code is straightforward and well documented.
#!/bin/bash
##
## COMMAND USAGE: backup.sh [username] [backup source device] [backup source location] [backup target device] [backup target location]
##
## for example: sudo /home/manu/bin/backup.sh "manu" "/media/disk1" "/media/disk1/." "/media/disk2" "/media/disk2"
##
##
## VARIABLES
##
# execute as user
USER="$1"
# Set source location
BACKUP_SOURCE_DEV="$2"
BACKUP_SOURCE="$3"
# Set target location
BACKUP_TARGET_DEV="$4"
BACKUP_TARGET="$5"
# Log file
LOG_FILE="/var/log/backup_script.log"
##
## SCRIPT
##
function end() {
echo -e "###########################################################################\
#########################################################################\n\n" >> "$LOG_FILE"
exit $1
}
# Check that the log file exists
if [ ! -e "$LOG_FILE" ]; then
touch "$LOG_FILE"
chown $USER "$LOG_FILE"
fi
# Check if backup source device is mounted
if ! mountpoint "$BACKUP_SOURCE_DEV"; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Backup source device is not mounted!" >> "$LOG_FILE"
end 1
fi
# Check that source dir exists and is readable.
if [ ! -r "$BACKUP_SOURCE" ]; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to read source dir." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
# Check that target dir exists and is writable.
if [ ! -w "$BACKUP_TARGET" ]; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to write to target dir." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
# Check if the drive is mounted
if ! mountpoint "$BACKUP_TARGET_DEV"; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device needs mounting!" >> "$LOG_FILE"
# If not, mount the drive
if mount "$BACKUP_TARGET_DEV" > /dev/null 2>&1 || /bin/false; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device mounted." >> "$LOG_FILE"
else
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to mount backup device." >> "$LOG_FILE"
echo "$(date "+%Y-%m-%d %k:%M:%S") - ERROR: Unable to sync." >> "$LOG_FILE"
end 1
fi
fi
# Start entry in the log
echo "$(date "+%Y-%m-%d %k:%M:%S") - Sync started." >> "$LOG_FILE"
# Start sync
su -c "rsync -ayhEAX --progress --delete-after --inplace --compress-level=0 --log-file=\"$LOG_FILE\" \"$BACKUP_SOURCE\" \"$BACKUP_TARGET\"" $USER
echo "" >> "$LOG_FILE"
# Unmount the drive so it does not accidentally get damaged or wiped
if umount "$BACKUP_TARGET_DEV" > /dev/null 2>&1 || /bin/false; then
echo "$(date "+%Y-%m-%d %k:%M:%S") - Backup device unmounted." >> "$LOG_FILE"
else
echo "$(date "+%Y-%m-%d %k:%M:%S") - WARNING: Backup device could not be unmounted." >> "$LOG_FILE"
fi
# Exit successfully
end 0
A: I am only skimming this, but I would think you would rather use rsync -e ssh and set up the keys to accept the account.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Should websites expand on window resize? I'm asking this question purely from a usability standpoint!
Should a website expand/stretch to fill the viewing area when you resize a browser window?
I know for sure there are the obvious cons:
*
*Wide columns of text are hard to read.
*Writing html/css using percents can be a pain.
*It makes you vulnerable to having your design stretched past it's limits if an image is too wide, or a block of text is added that is too long. (see it's a pain to code the html/css).
The only Pro I can think of is that users who use the font-resizing that is built into their browser won't have to deal with columns that are only a few words long, with a body of white-space on either side.
However, I think that may be a browser problem more than anything else (Firefox 3 allows you to zoom everything instead of just the text, which comes in handy all the time)
edit: I noticed stack overflow is fixed width, but coding horror resizes. It seems Jeff doesn't have a strong preference either way.
A: In terms of web site scaling I like fixed sized web sites that scales nicely using the browsers "zoom" function. I don't want a really wide page with tiny fonts on my 1920 res monitor. I don't know if the web designer has to do anything to make it scale nicely when zoomed, but the zoom in FF3 is awesome, the one in IE7 is useless...
A: The design should be fluid within sensible bounds.
Use the CSS min-width and max-width properties (which work in every browser, including IE7+) to prevent the design from stretching too much.
A: The important thing is never to have a block of text stretch too wide. If a window is expanded, no block of text should indefinitely stretch to match because reading becomes a difficulty.
A: Like people have said, it really depends on what information the site is displaying. Two good examples are StackOverflow, and Google Images..
If stackoverflow stretched to fit the screen, longer answers would be annoying to read, because the eye finds it difficult to scan over long lines - this is exactly why newspapers use columns for everything, and why books are the all the same sort of width.
With Google Images, where the content is basically a bunch of 200px wide images, it stretches to fit the browser width and is still perfectly readable.
Basically, bear in mind the eye hates reading long lines of text, and base your design on that. You can design your site so when you increase the font size, all the layout scales nicely with it (The only site I can think of that does this is www.geektechnique.org - press Ctrl+-/= or Ctrl+scrollwheel, and the layout changes width with the font size)
A: Raw HTML does just that. Are you changing your data so that it doesn't render well in random-sized windows?
In the olden days, everyone had VGA screens. Now, that resolution is most uncommon. Who knows what resolutions are going to be common in the future? And why expect a certain minimum width or height?
From a usability viewpoint, demanding a certain resolution from your users is just going to create a degraded experience for anyone not using that resolution. Another thing that comes from this is what is fixed width? I've seen plenty of fixed size windows (popups) that just don't render right because my fonts are different from the designer's.
A: This is a matter of styling preference. Both can be equally usable depending on implementation. Columns can also be used, if the screen gets wide enough. Personally, I find it annoying when there is a single, narrow column of text going down the screen.
Edit for 2012: Yes, your website should respond to the size of the window it is being displayed in.
There are many places to read more about this, including:
*
*http://johnpolacek.github.com/scrolldeck.js/decks/responsive/
*http://www.abookapart.com/products/responsive-web-design
*http://en.wikipedia.org/wiki/Responsive_Web_Design
A: I guess like a lot of things: it depends. I usually do both. Some content stays fixed width to look good, or if it can't benefit from more space. Other stuff is set to 100% if it seems like it'd be useful.
A: This should be decided by how complicated the design of your website is. How complicated it is - graphically or component-wise (the number of content divs) - determines how well your website will scale. Generally you will find most graphic designers' websites will not scale, because they are graphically intensive. However, informational websites will scale to make the best use of readable space on the screen, and are kept uncomplicated for ease of use. It's a matter of preference really.
A: I think it depends on the content of the site. Sites like SOFlow, Forums, and other sites have an emphasis on reading lots of details, so having more real estate to do so is a big benefit in my mind. The less vertical scroll, the better.
However, for sites a little less demanding on the reading level, even blogs or retail sites where you're simply displaying an individual product, having a fixed width allows you to keep things more concise.
A: Note: if you use the zoom functionality in your browser, a fixed layout squashes the text, whereas a fluid layout allows it to take up the whole screen.
Maybe this is just a browser problem, but it's definitely an argument in favor of fluid.
A: I'm a big fan of fully-fluid designs. As to the usability complaints about lines of text that are too long... if they're too long because of the size of my browser window, then I can just as easily make the window narrower as I can make it wider.
A: Paragraph widths larger than your display make a web site completely unusable. You have to jiggle the horizontal scrollbar back and forth for every single line you read. I'm doing a web design subject at university and the textbook calls the designs which adapt to your screen width fluid layout.
I'm designing my big class project using fluid layout, it's a bit more trouble than fixed width. I suspect none of the other students will use it, the markers won't notice and none of the professional sites we're imitating are fluid either.
A: There's probably a compromise design between fixed and fluid designs. You can design a site fluid-like but set the CSS property max-width to 1024 (or whatever). This means you get a fluid layout when the window width is less than 1024 and a fixed width when it is greater.
Then narrow screen users (like my 800 pixel eee 701) don't have to twiddle the horizontal scrollbar to read every single line and wide screen users (who don't know how to resize their browser window) don't get 500 character wide, 1 character high paragraphs.
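For instance, a minimal sketch of that compromise in CSS (the selector name and pixel values are just examples):
#content {
  min-width: 800px;  /* stop squashing below this */
  max-width: 1024px; /* stop stretching beyond this */
  margin: 0 auto;    /* centre the column in wider windows */
}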
A: I'd say fluid all the way. The user can always go back to a smaller size window if he doesn't like the result of enlarging it, but he can't do anything about a fixed layout.
If you really, really hate the idea of your site looking ugly because of something a user with a large screen does, then for the sake of all that is true and beautiful, at least never use pixel-based fixed layouts! CSS has these neat text-relative size units like "em" that allow parts of your page to scale with the font size while others (like images) stay in their "natural" size.
Why not use them and make your page scale well without relying on the less flexible "scale everything" of FF3 that's really just a workaround for sites that use a dumb pixel-based fixed layout?
A: A lot of people are saying things like "this is a matter of taste" or "I don't like big fonts on my high-pixel display." Number of pixels has nothing to do with it, and it's not a matter of taste. It's a matter of DPI, which is directly related to display resolution and font size. If your layout scales along with the DPI of the fonts (by being specified in ems for instance, and using SVG), then you end up with very beautiful, very crisp websites that work optimally with any display.
http://www.boutell.com/newfaq/creating/anyresolution.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is the fastest way to multiply a 16-bit integer with a double? On an 8-bit micro controller I would like to do the following:
16bit_integer = another_16bit_integer * 0.997;
with the least possible number of instructions.
A: How about integer arithmetic in 32 bits?
16bit_integer = (int16_t) (another_16bit_integer * (int32_t) 997 / 1000);
32 bits will be enough to store (INT16_MAX × 997), do the sum on values 1000 times larger then divide back to your 16 bit scale.
A: Bit shifts are usually very fast:
y = 0xFF3B * (int32_t) x >> 16;
This is probably better written as:
y = (0.997 * 0x10000) * (int32_t)x >> 16;
A good compiler will generate equivalent output.
If your integers are signed, the constants should be changed to 0x8000 and 15.
A: You probably meant to have some rounding in there, rather than truncating the result to an integer, otherwise the purpose of the operation is really limited.
But since you asked the question with that specific formula, it brought to mind that your result set is really coarse. For the first 333 numbers, the result is: another_16bit_integer-1. You can approximate it (maybe even exactly, when not performed in my head) with something like:
16bit_integer = another_16bit_integer - 1 - (another_16bit_integer/334);
edit: unsigned int, and you handle 0 on your own.
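If you want to know exactly how coarse, a quick brute-force comparison (a sketch in C, meant to run on a PC rather than on the micro) will count the mismatches against the floating-point version:
#include <stdio.h>

int main(void)
{
    unsigned long mismatches = 0;
    unsigned long x;
    for (x = 1; x <= 65535; x++) {
        unsigned long approx = x - 1 - x / 334;
        unsigned long exact = (unsigned long)(x * 0.997);
        if (approx != exact)
            mismatches++;
    }
    printf("%lu mismatches out of 65535\n", mismatches);
    return 0;
}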
A: On my platform ( Atmel AVR 8-bit micro-controller, running gcc )
16bit_integer = another_16bit_integer * 0.997;
Takes about 26 instructions.
16bit_integer = (int16_t) (another_16bit_integer * (int32_t) 997 / 1000);
Takes about 25 instructions.
A: Here is a very fast way to do this operation:
a = b * 0.99609375;
It's similar to what you want, but it's much faster.
a = b;
a -= b>>8;
Or even faster using a trick that only works on little endian systems, like the PIC.
a = b;
a -= *(((int8*)&b)+1); /* subtract the high byte (little endian) */
Off the top of my head, this comes down to the following assembler on a PIC18:
; a = b
MOVFF 0xc4, 0xc2
NOP
MOVFF 0xc5, 0xc3
NOP
; a -= *((int8*)((&b)+1));
MOVF 0xc5, w
SUBWF 0xc2, f
BTFSC STATUS, C
DECF 0xc
A: Precomputed lookup table:
16bit_integer = products[another_16bit_integer];
A:
Precomputed lookup table:
16bit_integer = products[another_16bit_integer];
That's not going to work so well on the AVR - the 16-bit address space is going to be exhausted.
A: Since you are using an 8 bit processor, you can probably only handle 16 bit results, not 32 bit results. To reduce 16 bit overflow issues I would restate the formula like this:
result16 = operand16 - (operand16 * 3)/1000
This would give accurate results for unsigned integers up to 21845, or signed integers up to 10922. I am assuming the processor can do 16-bit integer division. If it cannot, then you need to do the division the hard way. Multiplying by 3 can be done by simple shifts & adds, if no multiply instruction exists or if multiplication only works with 8-bit operands.
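For instance, a sketch of the shift-and-add version in C (the function name is just for illustration, and <stdint.h> types are assumed):
#include <stdint.h>

uint16_t scale_0997(uint16_t x)
{
    /* x * 3 via one shift and one add; unsigned arithmetic keeps the
       intermediate well defined even on targets with a 16-bit int */
    uint16_t x3 = (uint16_t)(((unsigned)x << 1) + x);
    return (uint16_t)(x - x3 / 1000); /* still needs a 16-bit divide */
}
As noted above, the x * 3 intermediate only stays within 16 bits for inputs up to 21845.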
Without knowing the exact microprocessor it is impossible to determine how long such a calculation would take.
A:
On my platform ( Atmel AVR 8-bit micro-controller, running gcc )
16bit_integer = another_16bit_integer * 0.997;
Takes about 26 instructions.
16bit_integer = (int16_t) (another_16bit_integer * (int32_t) 997 / 1000);
Takes about 25 instructions.
The Atmel AVR is a RISC chip, so counting instructions is a valid comparison.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What does "VM Size" mean in the Windows Task Manager? Virtual memory from a computer size perspective is
[a way to make the program] think it
has a large range of contiguous
addresses; but in reality the parts it
is currently using are scattered
around RAM, and the inactive parts are
saved in a disk file. (Wikipedia)
I would interpret VM Size in the Windows Task manager as either the total addressable virtual memory space or the amount of memory the process is currently using in the virtual memory space.
But in the Task Manager the VM Size is in many cases less than Mem Usage, which should be the amount of RAM the process is using. Therefore I guess that VM Size means something else?
A: It's the total of all private (not shared) bytes allocated by this process, whether currently in physical memory or not.
See also An introductory guide to Windows Memory Management or Commit Charge Wikipedia article
For a developer watching process state like this I would recommend to install SysInternals Process Explorer and to use it instead of the default Task Manager. This value is called "Private Bytes" in it.
A: The amount of memory mapped into that process' address space. This can include shared memory mappings.
In a process there will be sections of the memory space for each shared object (DLL) that is part of it, as well as some memory for stack, and areas allocated by the process itself.
For example looking at the memory map of a cat command on my system I can see its memory mappings. In this case I use cat /proc/self/maps to investigate the cat process itself. Mapped into its virtual memory is the binary itself, some heap, locale information, libc (with various permission flags), ld.so (the dynamic linker), stack, vdso and vsyscall sections and some anonymous mappings (mapped pages with no backing file).
00400000-00408000 r-xp /bin/cat
00607000-00608000 rw-p /bin/cat
008ac000-008cd000 rw-p [heap]
7fbd54175000-7fbd543cf000 r--p /usr/lib/locale/locale-archive
7fbd543cf000-7fbd54519000 r-xp /lib/libc-2.7.so
7fbd54519000-7fbd54718000 ---p /lib/libc-2.7.so
7fbd54718000-7fbd5471b000 r--p /lib/libc-2.7.so
7fbd5471b000-7fbd5471d000 rw-p /lib/libc-2.7.so
7fbd5471d000-7fbd54722000 rw-p
7fbd54722000-7fbd5473e000 r-xp /lib/ld-2.7.so
7fbd5491d000-7fbd5491f000 rw-p
7fbd5493a000-7fbd5493d000 rw-p
7fbd5493d000-7fbd5493f000 rw-p /lib/ld-2.7.so
7fff5c929000-7fff5c93e000 rw-p [stack]
7fff5c9fe000-7fff5c9ff000 r-xp [vdso]
ffffffffff600000-ffffffffff601000 r-xp [vsyscall]
For each mapping, subtract the start address from the end address to determine its size, for example the [stack] line: 0x7fff5c9ff000 - 0x7fff5c9fe000 = 0x1000. In decimal, 4096 bytes - a 4 kiB stack.
If you add up all these figures, you'll get the process' virtual memory (VM) size.
VM size is not a reliable way to determine how much memory a process is using. For instance there will only be one copy of each of the read-only /lib/libc-2.7.so maps in physical memory, regardless of how many processes use it.
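As a sanity check, a short Linux-only script (Python here, purely for illustration) that sums the mappings of its own process the same way:
total = 0
with open("/proc/self/maps") as maps:
    for line in maps:
        start, end = line.split()[0].split("-")
        total += int(end, 16) - int(start, 16)
print("VM size: %d kB" % (total // 1024))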
A: What's the correct answer about VM Size?
*
*In Coding Horror
How much of the processes' less frequently used memory has been paged to disk.
*In Comment of Coding Horror
You're wrong on VM Size. It's the total of all private (not shared) bytes allocated by this process, whether currently in physical memory or not. It's a better value for tracking whether you have a memory leak than 'Mem Usage'. The same value is available in Performance Monitor as 'Process: Private Bytes'.
*In MSDN
Virtual Memory Size :
The amount of virtual memory, or address space, committed to a process.
I am confused about which one is correct.
A: I can't see VM size in the Windows Task Manager. WhatsUp Gold has a VM size in its task manager - do you mean that? In that case I believe it relates to the total amount available to the VM.
A: How about a coding horror post to answer this: http://www.codinghorror.com/blog/archives/000393.html
"VM Size: How much of the processes' less frequently used memory has been paged to disk."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: MySQL vs PostgreSQL for Web Applications I am working on a web application using Python (Django) and would like to know whether MySQL or PostgreSQL would be more suitable when deploying for production.
In one podcast Joel said that he had some problems with MySQL and the data wasn't consistent.
I would like to know whether someone had any such problems. Also when it comes to performance which can be easily tweaked?
A: Although it's a bit out of date, it would be worth reading the MySQL Gotchas page. Many of the items listed there are still true, to the best of my knowledge.
I use PostgreSQL.
A: I use both extensively. My choice for a particular project boils down to:
*
*Licensing - Are you going to distribute your app (IANAL)
*Existing Infrastructure and Knowledge Base
*Any special sauce you have to have.
By special sauce I mean things like:
*
*Easy/cheap replication = MySQL
*Huge dataset problems with small results = PostgreSQL. Use the language extensions, and have very efficient data operations. (PL/Python, PL/TCL, PL/Perl, etc)
*Interface with R Statistical Libraries = PostgreSQL PL/R available in debian/ubuntu
A: Well, I don't think you should be using a different database brand in anything past development (build, staging, prod) as that will come back to bite you.
As I understand it, PostgreSQL is a more 'correct' database implementation, while MySQL is less correct (less compliant) but faster.
So if you are pretty much writing a CRUD application mySQL is the way to go. If you require certain features out of your database (if you're not sure then you don't) then you may want to look into postgreSQL.
A: I haven't used Django, but I have used both MySQL and PostgreSQL. If you'll be using your database only as a backend for Django, it doesn't matter much, because it will abstract away most of the differences. PostgreSQL is a little more scalable because it doesn't hit the brick wall as fast as MySQL as data-size/client-count increase.
The real difference comes in if you are doing a new system. Then I'd recommend PostgreSQL hands down, because it has a lot more features which make your DB layer much more customizable, so that you can fine-tune it to any requirements you might have.
A: Just chiming in many months later.
The geographical capabilities of the two databases are very, very different. PostgreSQL has the exceptional PostGIS extension. MySQL's geographical functionality is practically zero in comparison.
If your web service has a location component, choose PostgreSQL.
A: A note to future readers: The text below was last edited in August 2008. That's nearly 11 years ago as of this edit. Software can change rapidly from version to version, so before you go choosing a DBMS based on the advice below, do some research to see if it's still accurate.
Check for newer answers below.
Better?
MySQL is much more commonly provided by web hosts.
PostgreSQL is a much more mature product.
There's this discussion addressing your "better" question
Apparently, according to this web page, MySQL is fast when concurrent access levels are low, and when there are many more reads than writes. On the other hand, it exhibits low scalability with increasing loads and write/read ratios. PostgreSQL is relatively slow at low concurrency levels, but scales well with increasing load levels, while providing enough isolation between concurrent accesses to avoid slowdowns at high write/read ratios. It goes on to link to a number of performance comparisons, because these things are very... sensitive to conditions.
So if your decision factor is, "which is faster?" Then the answer is "it depends. If it really matters, test your application against both." And if you really, really care, you get in two DBAs (one who specializes in each database) and get them to tune the crap out of the databases, and then choose. It's astonishing how expensive good DBAs are; and they are worth every cent.
When it matters.
Which it probably doesn't, so just pick whichever database you like the sound of and go with it; better performance can be bought with more RAM and CPU, and more appropriate database design, and clever stored procedure tricks and so on - and all of that is cheaper and easier for random-website-X than agonizing over which to pick, MySQL or PostgreSQL, and specialist tuning from expensive DBAs.
Joel also said in that podcast that comment would come back to bite him because people would be saying that MySQL was a piece of crap - Joel couldn't get a count of rows back. The plural of anecdote is not data. He said:
MySQL is the only database I've ever programmed against in my career that has had data integrity problems, where you do queries and you get nonsense answers back, that are incorrect.
and he also said:
It's just an anecdote. And that's one of the things that frustrates me, actually, about blogging or just the Internet in general. [...] There's just a weird tendency to make anecdotes into truths and I actually as a blogger I'm starting to feel a little bit guilty about this
A: If you are writing an application which may get distributed quite a bit on different servers, MySQL carries a lot of weight over PostgreSQL because of the portability. PostgreSQL is difficult to find on all but the better web hosts, although there are a few that offer it. In most regards, PostgreSQL is slower than MySQL, especially when it comes to fine-tuning in the end. All in all, I'd say to give PostgreSQL a shot for a short amount of time, that way you aren't completely avoiding it, and then make a judgement.
A: Thank you. I've used Django with MySQL and it's fine. Choose your database based on the features you need. It's hard to compare MySQL and Postgres directly; it's better to compare PostgreSQL to SQL Server.
A: @WolfmanDragon
PostgreSQL has (tiny) support for objects, but it is, by nature, a relational database. From its about page:
PostgreSQL is a powerful, open source relational database system.
A: MySQL is a relational database management system while PostgreSQL is an object-relational database management system. PostgreSQL is suited well for C++ or Java developers, as it gives us more control over how queries are written. ORDBMS also gives us Objects and User Defined Types. The SQL queries themselves are much closer to the ISO standards than MySQL.
Do you need an ORDBMS or a RDBMS? That will better answer your question.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "134"
} |
Q: Is there a rake task for backing up the data in your database? Is there a rake task for backing up the data in your database?
I already have my schema backed up, but I want to make a backup of the data. It's a small MySQL database.
A: The below script is a simplified version taken from eycap, specifically from this file.
set :dbuser, "user"
set :dbhost, "host"
set :database, "db"
namespace :db do
desc "Get the database password from user"
task :get_password do
set(:dbpass) do
Capistrano::CLI.ui.ask "Enter mysql password: "
end
end
task :backup_name, :only => { :primary => true } do
now = Time.now
run "mkdir -p #{shared_path}/db_backups"
backup_time = [now.year,now.month,now.day,now.hour,now.min,now.sec].join('-')
set :backup_file, "#{shared_path}/db_backups/#{database}-snapshot-#{backup_time}.sql"
end
desc "Dump database to backup file"
task :dump, :roles => :db, :only => {:primary => true} do
backup_name
run "mysqldump --add-drop-table -u #{dbuser} -h #{dbhost} -p#{dbpass} #{database} | bzip2 -c > #{backup_file}.bz2"
end
end
Edit: Yeah, I guess I missed the point that you were looking for a rake task and not a capistrano task, but I don't have a rake one on hand, sorry.
A: Just in case people are still surfing for solutions, we currently use the ar_fixtures plugin to backup our db, well as part of the solution anyway.
It provides the rake db:fixtures:dump task. This spits out everything in YAML into test/fixtures, so it can be loaded in again using db:fixtures:load.
We use this to back up before every feature push to production. We also used this when migrating from sqlite3 to Postgres - which is subtly very useful, as incompatibilities between SQL dialects are, for the most part, hidden.
All the best, D
A: I don't have a rake task for backing up my MySQL db, but I did write a script in Ruby to do just that for my WordPress DB:
filename = 'wp-config.php'
def get_db_info(file)
username = nil
password = nil
db_name = nil
file.each { |line|
if line =~ /'DB_(USER|PASSWORD|NAME)', '([[:alnum:]]*)'/
if $1 == "USER"
username = $2
elsif $1 == "PASSWORD"
password = $2
elsif $1 == "NAME"
db_name = $2
end
end
}
if username.nil? || password.nil? || db_name.nil?
puts "[backup_db][bad] couldn't get all needed info"
exit
end
return username, password, db_name
end
begin
config_file = open("#{filename}")
rescue Errno::ENOENT
puts "[backup_db][bad] File '#{filename}' didn't exist"
exit
else
puts "[backup_db][good] File '#{filename}' existed"
end
username, password, db_name = get_db_info(config_file)
sql_dump_info = `mysqldump --user=#{username} --password=#{password} #{db_name}`
puts sql_dump_info
You should be able to take this and do some mild pruning of it to put in your username/password/dbname to get it up and working for you. I put it in my crontab to run everyday as well, and it shouldn't be too much work to convert this to run as a rake task since it's already Ruby code (might be a good learning exercise as well).
Tell us how it goes!
A: There are a few solutions already on google. I am going to guess that you are using activerecord as your orm?
If you are running rails, then you can look at the Rakefile that it uses for activerecord in \ruby\lib\ruby\gems\1.8\gems\rails-2.0.2-\lib\tasks\database.rake. That gave me a lot of information on how to extend the generic Rakefile.
You could take the capistrano tasks that thelsdj provides, and add them to your rake file. Then modify them a bit so that they use the ActiveRecord connection to the database.
A: There's a plugin out there called "mysql tasks", just google for it. It's just a rakefile -- I've found it very easy to use.
A: Make sure to add the "--routines" parameter to mysqldump if you have any stored procs in your database so it backs them up too.
A: There is my rake task to backup mysql, and rotate backups cyclically.
#encoding: utf-8
#require 'fileutils'
namespace :mls do
desc 'Create of realty_dev database backup'
task :backup => :environment do
backup_max_records = 4
datestamp = Time.now.strftime("%Y-%m-%d_%H-%M")
backup_dir = File.join(Rails.root, ENV['DIR'] || 'backups', 'db')
backup_file_name = "#{datestamp}_#{Rails.env}_dump.sql"
backup_file_path = File.join(backup_dir, "#{backup_file_name}")
FileUtils.mkdir_p(backup_dir)
#database processing
db_config = ActiveRecord::Base.configurations[Rails.env]
system "mysqldump -u#{db_config['username']} -p#{db_config['password']} -i -c -q #{db_config['database']} > #{backup_file_path}"
raise 'Unable to make DB backup!' if ($?.to_i > 0)
# sql dump file compression
system "gzip -9 #{backup_file_path}"
# backup rotation
dir = Dir.new(backup_dir)
backup_all_records = dir.entries.sort[2..-1].reverse
puts "Created backup: #{backup_file_name}.gz"
#redundant records
backup_del_records = backup_all_records[backup_max_records..-1] || []
# backup deleting too old records
for backup_del_record in backup_del_records
FileUtils.rm_rf(File.join(backup_dir, backup_del_record))
end
puts "Deleted #{backup_del_records.length} old backups, #{backup_all_records.length - backup_del_records.length} backups available"
puts "Backup passed"
end
end
=begin
run by this command: " rake db:backup RAILS_ENV="development" "
=end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Does Mono support System.Drawing and System.Drawing.Printing? I'm attempting to use Mono to load a bitmap and print it on Linux but I'm getting an exception. Does Mono support printing on Linux? The code/exception are below:
EDIT: No longer getting the exception, but I'm still curious what kind of support there is. Leaving the code for posterity or something.
private void btnPrintTest_Click(object sender, EventArgs e)
{
_printDocTest.DefaultPageSettings.Landscape = true;
_printDocTest.DefaultPageSettings.Margins = new Margins(50,50,50,50);
_printDocTest.Print();
}
void _printDocTest_PrintPage(object sender, PrintPageEventArgs e)
{
var bmp = new Bitmap("test.bmp");
// Determine center of graph
var xCenter = e.MarginBounds.X + (e.MarginBounds.Width - bmp.Width) / 2;
var yCenter = e.MarginBounds.Y + (e.MarginBounds.Height - bmp.Height) / 2;
e.Graphics.DrawImage(bmp, xCenter, yCenter);
e.HasMorePages = false;
}
A: From the Mono docs, I think yes:
Managed.Windows.Forms (aka
System.Windows.Forms): A complete and
cross platform, System.Drawing based
Winforms implementation.
It also useful if you run the Mono Migration Analyzer first.
A: According to the Mono documentation:
System.Drawing is now complete, and in addition to being the underlying rendering engine for Windows.Forms, it has also been tested for using third party controls that heavily depend on it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Timeout not being honoured in connection string I have a long running SQL statement that I want to run, and no matter what I put in the "timeout=" clause of my connection string, it always seems to end after 30 seconds.
I'm just using SqlHelper.ExecuteNonQuery() to execute it, and letting it take care of opening connections, etc.
Is there something else that could be overriding my timeout, or causing sql server to ignore it? I have run profiler over the query, and the trace doesn't look any different when I run it in management studio, versus in my code.
Management studio completes the query in roughly a minute, but even with a timeout set to 300, or 30000, my code still times out after 30 seconds.
A: What are you using to set the timeout in your connection string? From memory that's "ConnectionTimeout" and only affects the time it takes to actually connect to the server.
Each individual command has a separate "CommandTimeout" which would be what you're looking for. Not sure how SqlHelper implements that though.
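In code the two settings look like this (a sketch - the connection string values are placeholders, and SqlConnection/SqlCommand live in System.Data.SqlClient):
// "Connect Timeout" only bounds how long Open() may take.
using (SqlConnection conn = new SqlConnection(
    "Server=myServer;Database=myDb;Integrated Security=SSPI;Connect Timeout=15"))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand("EXEC dbo.MyLongRunningProc", conn);
    cmd.CommandTimeout = 300; // seconds; this is what limits query execution
    cmd.ExecuteNonQuery();
}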
A: In addition to timeout in connection string, try using the timeout property of the SQL command. Below is a C# sample, using the SqlCommand class. Its equivalent should be applicable to what you are using.
SqlCommand command = new SqlCommand(sqlQuery, _Database.Connection);
command.CommandTimeout = 0; // 0 means wait indefinitely
int rows = command.ExecuteNonQuery();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
} |
Q: Email SMTP validator I need to send hundreds of newsletters, but would like to check first if email exists on server. It's called SMTP validation, at least I think so, based on my research on Internet.
There are several libraries that can do that, and also a page with open-source code in ASP Classic (http://www.coveryourasp.com/ValidateEmail.asp#Result3), but I have a hard time reading ASP Classic, and it seems that it uses some third-party library...
Is there some code for SMTP validation in C#, and/or general explanation of how it works?
A: While it's true that many domains will return false positives because of abuse, there are still some great components out there that will perform several levels of validation beyond just the SMTP validation. For example, it's worth it to check first to see if at least the domain exists. I'm in the process of compiling my own list of resources related to this question which you can track here:
http://delicious.com/dworthley/email.validation (broken link)
For those who might want to add to this list, I'll also include what I currently have here:
*
*aspNetMX
*.NET Email Validation Wizard Class Library
*MONOProg Email Validator.Net
For a bulletproof form and a great user experience, it's helpful to validate as many aspects of the email address as possible. I can see from the aspNetMX validator that they check:
*
*the syntax
*the email against a list of bad email addresses
*the domain against a list of bad domains
*a list of mailbox domains
*whether or not the domain exists
*whether there are MX records for the domain
*and finally through SMTP whether or not a mailbox exists
It's this last step that can be circumvented by administrators by returning true to basically all account verification requests, but in most cases if the user has intentionally entered a bad address it's already been caught. And if it was user error in the domain part of the address, that will be caught too.
Of course, a best practice for using this kind of a service for a registration screen or form would be to combine this kind of validation with a verification process to ensure that the email address is valid. The great thing about using an email validator in front of a verification process is that it will make for a better overall user experience.
A: You can try the below code, it works fine for me :
public class EmailTest {
private static int hear(BufferedReader in) throws IOException {
String line = null;
int res = 0;
while ((line = in.readLine()) != null) {
String pfx = line.substring(0, 3);
try {
res = Integer.parseInt(pfx);
} catch (Exception ex) {
res = -1;
}
if (line.charAt(3) != '-')
break;
}
return res;
}
private static void say(BufferedWriter wr, String text) throws IOException {
wr.write(text + "\r\n");
wr.flush();
return;
}
@SuppressWarnings({ "rawtypes", "unchecked" })
private static ArrayList getMX(String hostName) throws NamingException {
// Perform a DNS lookup for MX records in the domain
Hashtable env = new Hashtable();
env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
DirContext ictx = new InitialDirContext(env);
Attributes attrs = ictx.getAttributes(hostName, new String[] { "MX" });
Attribute attr = attrs.get("MX");
// if we don't have an MX record, try the machine itself
if ((attr == null) || (attr.size() == 0)) {
attrs = ictx.getAttributes(hostName, new String[] { "A" });
attr = attrs.get("A");
if (attr == null)
throw new NamingException("No match for name '" + hostName + "'");
}
/*
Huzzah! we have machines to try. Return them as an array list
NOTE: We SHOULD take the preference into account to be absolutely
correct. This is left as an exercise for anyone who cares.
*/
ArrayList res = new ArrayList();
NamingEnumeration en = attr.getAll();
while (en.hasMore()) {
String mailhost;
String x = (String) en.next();
String f[] = x.split(" ");
// THE fix *************
if (f.length == 1)
mailhost = f[0];
else if (f[1].endsWith("."))
mailhost = f[1].substring(0, (f[1].length() - 1));
else
mailhost = f[1];
// THE fix *************
res.add(mailhost);
}
return res;
}
@SuppressWarnings("rawtypes")
public static boolean isAddressValid(String address) {
// Find the separator for the domain name
int pos = address.indexOf('@');
// If the address does not contain an '@', it's not valid
if (pos == -1)
return false;
// Isolate the domain/machine name and get a list of mail exchangers
String domain = address.substring(++pos);
ArrayList mxList = null;
try {
mxList = getMX(domain);
} catch (NamingException ex) {
return false;
}
/*
Just because we can send mail to the domain, doesn't mean that the
address is valid, but if we can't, it's a sure sign that it isn't
*/
if (mxList.size() == 0)
return false;
/*
Now, do the SMTP validation, try each mail exchanger until we get
a positive acceptance. It *MAY* be possible for one MX to allow
a message [store and forwarder for example] and another [like
the actual mail server] to reject it. This is why we REALLY ought
to take the preference into account.
*/
for (int mx = 0; mx < mxList.size(); mx++) {
boolean valid = false;
try {
int res;
//
Socket skt = new Socket((String) mxList.get(mx), 25);
BufferedReader rdr = new BufferedReader(new InputStreamReader(skt.getInputStream()));
BufferedWriter wtr = new BufferedWriter(new OutputStreamWriter(skt.getOutputStream()));
res = hear(rdr);
if (res != 220)
throw new Exception("Invalid header");
say(wtr, "EHLO rgagnon.com");
res = hear(rdr);
if (res != 250)
throw new Exception("Not ESMTP");
// validate the sender address
say(wtr, "MAIL FROM: <[email protected]>");
res = hear(rdr);
if (res != 250)
throw new Exception("Sender rejected");
say(wtr, "RCPT TO: <" + address + ">");
res = hear(rdr);
// be polite
say(wtr, "RSET");
hear(rdr);
say(wtr, "QUIT");
hear(rdr);
if (res != 250)
throw new Exception("Address is not valid!");
valid = true;
rdr.close();
wtr.close();
skt.close();
} catch (Exception ex) {
// Do nothing but try next host
ex.printStackTrace();
} finally {
if (valid)
return true;
}
}
return false;
}
public static void main(String args[]) {
String testData[] = { "[email protected]", "[email protected]", "[email protected]",
"[email protected]" };
System.out.println(testData.length);
for (int ctr = 0; ctr < testData.length; ctr++) {
System.out.println(testData[ctr] + " is valid? " + isAddressValid(testData[ctr]));
}
return;
}
}
Thanks & Regards
Rahul Saraswat
A: Be aware that most MTAs (Mail Transfer Agent) will have the VRFY command turned off for spam protection reasons, they'll probably even block you if you try several RCPT TO in a row (see http://www.spamresource.com/2007/01/whatever-happened-to-vrfy.html). So even if you find a library to do that verification, it won't be worth a lot. Ishmaeel is right, the only way to really find out, is sending an email and see if it bounces or not.
@Hrvoje: Yes, I'm suggesting you monitor rejected emails. BUT: not all the bounced mails should automatically end up on your "does not exist"-list, you also have to differentiate between temporary (e.g. mailbox full) and permanent errors.
A: The Real(TM) e-mail validation is trying to send something to the address, and seeing if it is rejected/bounced. So, you'll just have to send them away, and remove the addresses that fail from your mailing list.
A: Don't take this the wrong way, but sending newsletters to more than a handful of people these days is a fairly serious matter. Yes, you need to be monitoring bounces (rejected emails) which can occur synchronously during the SMTP send (typically if the SMTP server you are connected to is authoritative), or asynchronously as a system-generated email message that occurs some amount of time after the SMTP send succeeded.
Also keep the CAN-SPAM Act in mind and abide by the law when sending these emails; you've got to provide an unsub link as well as a physical street address (to both identify you and to allow users to send unsub requests via snail-mail if they so choose).
Failure to do these things could get your IP null-routed at best and sued at worst.
A: SMTP is a text based protocol carried over TCP/IP.
Your validation program needs to open a TCP/IP connection to the server's port 25 (SMTP), write a few lines and read the answers. Validation is done (but not always) on the "RCPT TO" line and on the "VRFY" line.
The SMTP RFC describes how this works (in the example below, S: marks lines sent by the client and R: lines received from the server):
Example of the SMTP Procedure
This SMTP example shows mail sent by Smith at host Alpha.ARPA,
to Jones, Green, and Brown at host Beta.ARPA. Here we assume
that host Alpha contacts host Beta directly.
S: MAIL FROM:<Smith@Alpha.ARPA>
R: 250 OK
S: RCPT TO:<Jones@Beta.ARPA>
R: 250 OK
S: RCPT TO:<Green@Beta.ARPA>
R: 550 No such user here
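Since the question asks for C#: here is a minimal sketch of that conversation using TcpClient (no MX lookup, no multi-line replies, no error handling - the host name and addresses are placeholders, so treat it as a starting point only):
using System.IO;
using System.Net.Sockets;

static bool ProbeMailbox(string mailHost, string from, string rcpt)
{
    using (TcpClient client = new TcpClient(mailHost, 25))
    using (StreamReader reader = new StreamReader(client.GetStream()))
    using (StreamWriter writer = new StreamWriter(client.GetStream()))
    {
        writer.NewLine = "\r\n";
        writer.AutoFlush = true;

        reader.ReadLine();                          // 220 banner
        writer.WriteLine("HELO example.com");
        reader.ReadLine();                          // 250
        writer.WriteLine("MAIL FROM:<" + from + ">");
        reader.ReadLine();                          // 250
        writer.WriteLine("RCPT TO:<" + rcpt + ">");
        string response = reader.ReadLine();        // 250 = accepted, 550 = no such user
        writer.WriteLine("QUIT");
        return response != null && response.StartsWith("250");
    }
}
Remember the caveat above: many servers accept every RCPT TO for spam protection reasons, so a 250 here is not proof the mailbox exists.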
A: You may need this Email Validator component for .NET
Here is the code example:
// Create a new instance of the EmailValidator class.
EmailValidator em = new EmailValidator();
em.MessageLogging += em_MessageLogging;
em.EmailValidated += em_EmailValidationCompleted;
try
{
string[] list = new string[3] { "[email protected]", "[email protected]", "[email protected]" };
em.ValidateEmails(list);
}
catch (EmailValidatorException exc2)
{
Console.WriteLine("EmailValidatorException: " + exc2.Message);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Commenting on LaTeX PDF documents with PDF reader I'm currently writing my bachelor thesis with LaTeX, using TeXnicCenter. I want to be able to send my generated PDF file to people, and they should be able to write comments.
It seems like commenting is not allowed by default, how do I change this?
I am compiling straight to PDF with pdflatex, and using Acrobat Reader 9 to read and comment on the files.
A: In order to comment using the free Adobe Reader application, the document needs to be signed with a cryptographic key only available from Adobe's commercial (non-free, for-pay) software suites. Likewise, if one is using Adobe Acrobat (not the free Reader) to view a PDF document, commenting may be activated -- or so I hear. The idea here is that it takes some piece of commercial Adobe software in the scenario -- be it producer or consumer -- to make commenting possible.
There are other free PDF producer and consumer applications that allow some form of annotation, but none of them are equivalent to the "native" form offered by Adobe's products.
A: Strange... I just finished my master thesis, using TexnicCenter and the MikTeX distribution, and comments worked just fine. What build profile do you use? Straight to PDF with pdflatex, or via the PS->PDF route? You might want to try the pdflatex method.
(EDIT): ah, we used Acrobat Pro for commenting, so that's why it did work in our case... Thanks rsg!
A: I think your problem is that acrobat reader doesn't allow commenting on documents not produced by abode approved products, which I don't think pdflatex would be.
You should look at the free PDF-XChange Viewer which allows you to comment and annotate the text. Its a portable windows app (download), so doesn't need to be installed on your (or the reviewers) machines.
A: I would definitely have a look at the LaTeX Web Companion. There is a whole section about generating PDF from LaTeX, including esoterica such as forms.
A: You can download the 30 day trial of Acrobat Professional 9, and enable the user rights required on the pdf so that they can comment using Acrobat Reader.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: C++ Memory management I've learned in college that you always have to free your unused objects, but not how you actually do it - for example, how to structure your code correctly, and so on.
Are there any general rules on how to handle pointers in C++?
I'm currently not allowed to use boost. I have to stick to pure c++ because the framework I'm using forbids any use of generics.
A: Rules:
*
*Wherever possible, use a
smart pointer. Boost has some
good ones.
*If you
can't use a smart pointer, null out
your pointer after deleting it.
*Never work anywhere that won't let you use rule 1.
If someone disallows rule 1, remember that if you grab someone else's code, change the variable names and delete the copyright notices, no-one will ever notice. Unless it's a school project, where they actually check for that kind of shenanigans with quite sophisticated tools. See also, this question.
A: I would add another rule here:
*
*Don't new/delete an object when an automatic object will do just fine.
We have found that programmers who are new to C++, or programmers coming over from languages like Java, seem to learn about new and then obsessively use it whenever they want to create any object, regardless of the context. This is especially pernicious when an object is created locally within a function purely to do something useful. Using new in this way can be detrimental to performance and can make it all too easy to introduce silly memory leaks when the corresponding delete is forgotten. Yes, smart pointers can help with the latter but it won't solve the performance issues (assuming that new/delete or an equivalent is used behind the scenes). Interestingly (well, maybe), we have found that delete often tends to be more expensive than new when using Visual C++.
Some of this confusion also comes from the fact that functions they call might take pointers, or even smart pointers, as arguments (when references would perhaps be better/clearer). This makes them think that they need to "create" a pointer (a lot of people seem to think that this is what new does) to be able to pass a pointer to a function. Clearly, this requires some rules about how APIs are written to make calling conventions as unambiguous as possible, which are reinforced with clear comments supplied with the function prototype.
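To illustrate the rule, a tiny sketch of heap vs. automatic allocation:
#include <string>

void heap_version()
{
    std::string* s = new std::string("temp"); // heap: must remember to delete
    // ... use *s ...
    delete s; // easy to miss on early returns or exceptions
}

void automatic_version()
{
    std::string s("temp"); // automatic: destroyed when it goes out of scope
    // ... use s ...
}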
A: In the general case (resource management, where resource is not necessarily memory), you need to be familiar with the RAII pattern. This is one of the most important pieces of information for C++ developers.
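The pattern in a nutshell: acquire the resource in the constructor, release it in the destructor, so cleanup happens even when an exception unwinds the stack. A minimal sketch:
#include <cstdio>

class File
{
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); } // released automatically
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
    File(const File&);            // not copyable: exactly one owner
    File& operator=(const File&);
};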
A: In general, avoid allocating from the heap unless you have to. If you have to, use reference counting for objects that are long-lived and need to be shared between diverse parts of your code.
Sometimes you need to allocate objects dynamically, but they will only be used within a certain span of time. For example, in a previous project I needed to create a complex in-memory representation of a database schema -- basically a complex cyclic graph of objects. However, the graph was only needed for the duration of a database connection, after which all the nodes could be freed in one shot. In this kind of scenario, a good pattern to use is something I call the "local GC idiom." I'm not sure if it has an "official" name, as it's something I've only seen in my own code, and in Cocoa (see NSAutoreleasePool in Apple's Cocoa reference).
In a nutshell, you create a "collector" object that keeps pointers to the temporary objects that you allocate using new. It is usually tied to some scope in your program, either a static scope (e.g. -- as a stack-allocated object that implements the RAII idiom) or a dynamic one (e.g. -- tied to the lifetime of a database connection, as in my previous project). When the "collector" object is freed, its destructor frees all of the objects that it points to.
Also, like DrPizza I think the restriction to not use templates is too harsh. However, having done a lot of development on ancient versions of Solaris, AIX, and HP-UX (just recently - yes, these platforms are still alive in the Fortune 50), I can tell you that if you really care about portability, you should use templates as little as possible. Using them for containers and smart pointers ought to be ok, though (it worked for me). Without templates the technique I described is more painful to implement. It would require that all objects managed by the "collector" derive from a common base class.
A: I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions.
*
*Only one object will ever own a pointer. By default this is the creator.
*Ownership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);).
*The owner will decide when to delete the object.
*To pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bat &zonk);).
*Non-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy it during use.
Basically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer.
This worked beautifully and was a pleasure to use. Memory issues were very rare.
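In code, the conventions boil down to signatures like these (a sketch):
class Bar;

void TakeOwnership(Bar* zonk); // pointer: the callee now owns zonk and will delete it
void JustUse(Bar& zonk);       // reference: the callee only borrows zonk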
A: G'day,
I'd suggest reading the relevant sections of "Effective C++" by Scott Meyers. Easy to read and he covers some interesting gotchas to trap the unwary.
I'm also intrigued by the lack of templates. So no STL or Boost. Wow.
BTW Getting people to agree on conventions is an excellent idea. As is getting everyone to agree on conventions for OOD. BTW The latest edition of Effective C++ doesn't have the excellent chapter about OOD conventions that the first edition had which is a pity, e.g. conventions such as public virtual inheritance always models an "isa" relationship.
Rob
A: *
*When you have to manage memory manually, make sure you call delete in the same scope/function/class/module, whichever applies first, e.g.:
*Let the caller of a function allocate the memory that is filled by it, do not return new'ed pointers.
*Always call delete in the same exe/dll as you called new in, because otherwise you may have problems with heap corruptions (different incompatible runtime libraries).
A: You could derive everything from some base class that implements smart-pointer-like functionality (using ref()/unref() methods and a counter).
All points highlighted by @Timbo are important when designing that base class.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is Multiversion Concurrency Control (MVCC) and who supports it? Recently Jeff has posted regarding his trouble with database deadlocks related to reading. Multiversion Concurrency Control (MVCC) claims to solve this problem. What is it, and what databases support it?
updated: these support it (which others?)
*
*oracle
*postgresql
A: The following have an implementation of MVCC:
SQL Server 2005 (Non-default, SET READ_COMMITTED_SNAPSHOT ON)
*
*http://msdn.microsoft.com/en-us/library/ms345124.aspx
Oracle (since version 8)
MySQL 5 (only with InnoDB tables)
PostgreSQL
Firebird
Informix
I'm pretty sure Sybase and IBM DB2 Mainframe/LUW do not have an implementation of MVCC
A: Oracle has had an excellent multi-version concurrency control system in place for a very long time (at least since Oracle 8.0).
The following should help.
*
*User A starts a transaction and is updating 1000 rows with some value At Time T1
*User B reads the same 1000 rows at time T2.
*User A updates row 543 with value Y (original value X)
*User B reaches row 543 and finds that a transaction is in operation since Time T1.
*The database returns the unmodified record from the Logs. The returned value is the value that was committed at the time less than or equal to T2.
*If the record could not be retrieved from the redo logs, it means the database is not set up appropriately. There needs to be more space allocated to the logs.
*This way read consistency is achieved: within a transaction, the returned results are always the same with respect to the start time of the transaction.
I have tried to explain in the simplest terms possible...there is a lot to multiversioning in databases.
A: Firebird does it, they call it MGA (Multi Generational Architecture).
They keep the original version intact, and add a new version that only the session using it can see; when committed, the older version is disabled and the newer version is enabled for everybody (the file piles up with data and needs regular cleanup).
Oracle overwrites the data itself, and uses rollback segments/undo tablespaces for other sessions and for rollback.
A: XtremeData dbX supports MVCC.
In addition, dbX can make use of SQL primitives implemented in FPGA hardware.
A: SAP HANA also uses MVCC.
SAP HANA is a full In-Memory Computing System, so MVCC costs for selects are very low... :)
A: PostgreSQL's Multi-Version Concurrency Control
As well as this article which features diagrams of how MVCC works when issuing INSERT, UPDATE, and DELETE statements.
A: Here is a link to the PostgreSQL doc page on MVCC. The choice quote (emphasis mine):
The main advantage to using the MVCC model of concurrency control rather than locking is that in MVCC locks acquired for querying (reading) data do not conflict with locks acquired for writing data, and so reading never blocks writing and writing never blocks reading.
This is why Jeff was so confounded by his deadlocks. A read should never be able to cause them.
A: SQL Server 2005 and up offer MVCC as an option; it isn't the default, however. MS calls it snapshot isolation, if memory serves.
A: MVCC can also be implemented manually, by adding a version number column to your tables, and always doing inserts instead of updates.
The cost of this is a much larger database, and slower selects since each one needs a subquery to find the latest record.
It's an excellent solution for systems that require 100% auditing for all changes.
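As a rough illustration of the idea (a sketch in Python with SQLite - not tied to any particular product):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, version INTEGER, balance INTEGER)")

def update_balance(conn, account_id, new_balance):
    # Instead of UPDATE, insert a new row with a bumped version number.
    (latest,) = conn.execute(
        "SELECT COALESCE(MAX(version), 0) FROM accounts WHERE id = ?",
        (account_id,)).fetchone()
    conn.execute("INSERT INTO accounts (id, version, balance) VALUES (?, ?, ?)",
                 (account_id, latest + 1, new_balance))

def current_balance(conn, account_id):
    # Readers always fetch the highest version -- this is the extra
    # lookup that makes selects slower.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE id = ? ORDER BY version DESC LIMIT 1",
        (account_id,)).fetchone()
    return row[0] if row else None

update_balance(conn, 1, 100)
update_balance(conn, 1, 250)
print(current_balance(conn, 1))  # 250; the older row is kept for auditing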
A: MySQL also uses MVCC by default if you use InnoDB tables:
http://dev.mysql.com/doc/refman/5.0/en/innodb-multi-versioning.html
A: McObject announced in 11/09 that it has added an optional MVCC transaction manager to its eXtremeDB embedded database:
http://www.mcobject.com/november9/2009
eXtremeDB, originally developed as an in-memory database system (IMDS), is now available in editions with hybrid (in-memory/on-disk) storage, High Availability, 64-bit support and more.
A: There's a good explanation of MVCC -- with diagrams -- and some performance numbers for eXtremeDB in this article, written by McObject's co-founder and CEO, in RTC Magazine:
http://www.rtcmagazine.com/articles/view/101612
Clearly MVCC is increasingly beneficial as an application scales to include many tasks executing on multiple CPU cores.
A: DB2 version 9.7 has a licensed version of Postgres Plus in it. This means that DB2 (in the right mode) also supports this feature.
A: Berkeley DB also supports MVCC.
And when the BDB storage engine is used in MySQL, MySQL also supports MVCC.
Berkeley DB is a very powerful, customizable, fully ACID-conformant DBMS. It supports several different methods for indexing and master-slave replication, and it can be used as a pure key-value store with its own dynamic API or queried with SQL if wanted. Worth taking a look at.
Another document oriented DBMS embracing MVCC would be CouchDB. MVCC here also is a big plus for the built in peer-to-peer replication.
A: From http://vschart.com/list/multiversion-concurrency-control/
Couchbase,
OrientDB,
CouchDB,
PostgreSQL,
Project Voldemort,
BigTable,
Percona Server,
HyperGraphDB,
Drizzle,
Cloudant,
IBM DB2,
InterSystems Caché,
InterBase
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
} |
Q: Detecting an undefined object property How do I check if an object property in JavaScript is undefined?
A: I believe there are a number of incorrect answers to this topic. Contrary to common belief, "undefined" is not a keyword in JavaScript and can in fact have a value assigned to it.
Correct Code
The most robust way to perform this test is:
if (typeof myVar === "undefined")
This will always return the correct result, and even handles the situation where myVar is not declared.
Degenerate code. DO NOT USE.
var undefined = false; // Shockingly, this is completely legal!
if (myVar === undefined) {
alert("You have been misled. Run away!");
}
Additionally, myVar === undefined will raise an error in the situation where myVar is undeclared.
A: There is a nice and elegant way to assign a defined property to a new variable if it is defined or assign a default value to it as a fallback if it’s undefined.
var a = obj.prop || defaultValue;
It’s suitable if you have a function, which receives an additional configuration property:
var yourFunction = function(config){
this.config = config || {};
this.yourConfigValue = config.yourConfigValue || 1;
console.log(this.yourConfigValue);
}
Now executing
yourFunction({yourConfigValue:2});
//=> 2
yourFunction();
//=> 1
yourFunction({otherProperty:5});
//=> 1
A: Here is my situation:
I am using the result of a REST call. The result should be parsed from JSON to a JavaScript object.
There is one error case I need to defend against: if the user specified the arguments to the REST call incorrectly, the call comes back basically empty.
While using this post to help me defend against this, I tried this:
if( typeof restResult.data[0] === "undefined" ) { throw "Some error"; }
For my situation, if typeof restResult.data[0] === "object", then I can safely start inspecting the rest of the members. If it is undefined, then I throw the error as above.
What I am saying is that for my situation, all the previous suggestions in this post did not work. I'm not saying I'm right and everyone is wrong. I am not a JavaScript master at all, but hopefully this will help someone.
A: The issue boils down to three cases:
*
*The object has the property and its value is not undefined.
*The object has the property and its value is undefined.
*The object does not have the property.
This tells us something I consider important:
There is a difference between an undefined member and a defined member with an undefined value.
But unhappily typeof obj.foo does not tell us which of the three cases we have. However we can combine this with "foo" in obj to distinguish the cases.
| typeof obj.x === 'undefined' | !("x" in obj)
1. { x:1 } | false | false
2. { x : (function(){})() } | true | false
3. {} | true | true
It's worth noting that these tests are the same for null entries too
| typeof obj.x === 'undefined' | !("x" in obj)
{ x:null } | false | false
I'd argue that in some cases it makes more sense (and is clearer) to check whether the property is there, than checking whether it is undefined, and the only case where this check will be different is case 2, the rare case of an actual entry in the object with an undefined value.
For example: I've just been refactoring a bunch of code that had a bunch of checks whether an object had a given property.
if( typeof blob.x != 'undefined' ) { fn(blob.x); }
Which was clearer when written without a check for undefined.
if( "x" in blob ) { fn(blob.x); }
But as has been mentioned these are not exactly the same (but are more than good enough for my needs).
A: All the answers are incomplete. This is the right way of knowing that there is a property 'defined as undefined':
var hasUndefinedProperty = function hasUndefinedProperty(obj, prop){
return ((prop in obj) && (typeof obj[prop] == 'undefined'));
};
Example:
var a = { b : 1, e : null };
a.c = a.d;
hasUndefinedProperty(a, 'b'); // false: b is defined as 1
hasUndefinedProperty(a, 'c'); // true: c is defined as undefined
hasUndefinedProperty(a, 'd'); // false: d is undefined
hasUndefinedProperty(a, 'e'); // false: e is defined as null
// And now...
delete a.c ;
hasUndefinedProperty(a, 'c'); // false: c is undefined
Too bad that this is the right answer and it's buried among wrong answers >_<
So, for anyone who passes by, I will give you undefined's for free!!
var undefined ; undefined ; // undefined
({}).a ; // undefined
[].a ; // undefined
''.a ; // undefined
(function(){}()) ; // undefined
void(0) ; // undefined
eval() ; // undefined
1..a ; // undefined
/a/.a ; // undefined
(true).a ; // undefined
A: Going through the comments: for those who want to check whether it is undefined or its value is null:
//Just in JavaScript
var s; // Undefined
if (typeof s == "undefined" || s === null){
alert('either it is undefined or value is null')
}
If you are using jQuery Library then jQuery.isEmptyObject() will suffice for both cases,
var s; // Undefined
jQuery.isEmptyObject(s); // Will return true;
s = null; // Defined as null
jQuery.isEmptyObject(s); // Will return true;
//Usage
if (jQuery.isEmptyObject(s)) {
alert('Either variable:s is undefined or its value is null');
} else {
alert('variable:s has value ' + s);
}
s = 'something'; // Defined with some value
jQuery.isEmptyObject(s); // Will return false;
A: If you are using Angular:
angular.isUndefined(obj)
angular.isUndefined(obj.prop)
Underscore.js:
_.isUndefined(obj)
_.isUndefined(obj.prop)
A: I provide three ways here for those who expect weird answers:
function isUndefined1(val) {
try {
val.a;
} catch (e) {
return /undefined/.test(e.message);
}
return false;
}
function isUndefined2(val) {
return !val && val+'' === 'undefined';
}
function isUndefined3(val) {
const defaultVal = {};
return ((input = defaultVal) => input === defaultVal)(val);
}
function test(func){
console.group(`test start :`+func.name);
console.log(func(undefined));
console.log(func(null));
console.log(func(1));
console.log(func("1"));
console.log(func(0));
console.log(func({}));
console.log(func(function () { }));
console.groupEnd();
}
test(isUndefined1);
test(isUndefined2);
test(isUndefined3);
isUndefined1:
Try to get a property of the input value, and check the error message if one is thrown. If the input value is undefined, the error message would be Uncaught TypeError: Cannot read property 'a' of undefined.
isUndefined2:
Convert the input value to a string to compare it with "undefined", and make sure the value is falsy (so that the string "undefined" itself doesn't match).
isUndefined3:
In JavaScript, a default parameter value is applied only when the supplied argument is exactly undefined.
A: There is a very easy and simple way.
You can use optional chaining:
x = {prop:{name:"sajad"}}
console.log(x.prop?.name) // Output is: "sajad"
console.log(x.prop?.lastName) // Output is: undefined
or
if(x.prop?.lastName) // The result of this 'if' statement is false and is not throwing an error
You can use optional chaining even for functions or arrays.
As of mid-2020 this is not universally implemented. Check the documentation at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining
A: I use if (this.variable) to test if it is defined. A simple if (variable), recommended in a previous answer, fails for me.
It turns out that it works only when a variable is a field of some object: obj.someField checks whether the field is defined in the dictionary. But we can use this or window as the dictionary object, since any global variable is a field of the current window, as I understand it. Therefore, here is a test:
if (this.abc)
alert("defined");
else
alert("undefined");
abc = "abc";
if (this.abc)
alert("defined");
else
alert("undefined");
It first detects that the variable abc is undefined, and after initialization it is detected as defined.
A: if ( typeof( something ) == "undefined")
This worked for me while the others didn't.
A: I'm not sure where the convention of using === with typeof came from, and I see it used in many libraries, but the typeof operator returns a string literal, and we know that up front, so why would you want to type-check it as well?
typeof x; // some string literal "string", "object", "undefined"
if (typeof x === "string") { // === is redundant because we already know typeof returns a string literal
if (typeof x == "string") { // sufficient
A: function isUnset(inp) {
return (typeof inp === 'undefined')
}
Returns false if the variable is set, and true if it is undefined.
Then use:
if (isUnset(myVar)) {
// initialize variable here
}
A: I would like to show you something I'm using in order to protect the undefined variable:
Object.defineProperty(window, 'undefined', {});
This forbids anyone to change the window.undefined value therefore destroying the code based on that variable. If using "use strict", anything trying to change its value will end in error, otherwise it would be silently ignored.
A: From lodash.js.
var undefined;
function isUndefined(value) {
return value === undefined;
}
It creates a local variable named undefined which is initialized with the default value -- the real undefined, then compares value with the variable undefined.
Update 9/9/2019
I found Lodash updated its implementation. See my issue and the code.
To be bullet-proof, simply use:
function isUndefined(value) {
return value === void 0;
}
A: You can also use a Proxy. It will work with nested calls, but it will require one extra check:
function resolveUnknownProps(obj, resolveKey) {
const handler = {
get(target, key) {
if (
target[key] !== null &&
typeof target[key] === 'object'
) {
return resolveUnknownProps(target[key], resolveKey);
} else if (!target[key]) {
return resolveUnknownProps({ [resolveKey]: true }, resolveKey);
}
return target[key];
},
};
return new Proxy(obj, handler);
}
const user = {}
console.log(resolveUnknownProps(user, 'isUndefined').personalInfo.name.something.else); // { isUndefined: true }
So you will use it like:
const { isUndefined } = resolveUnknownProps(user, 'isUndefined').personalInfo.name.something.else;
if (!isUndefined) {
// Do something
}
A: In recent JavaScript releases a new optional chaining operator was introduced, which is probably the best way to check whether a property exists; otherwise it will give you undefined.
See the example below:
const adventurer = {
name: 'Alice',
cat: {
name: 'Dinah'
}
};
const dogName = adventurer.dog?.name;
console.log(dogName);
// expected output: undefined
console.log(adventurer.someNonExistentMethod?.());
// expected output: undefined
We can replace this old syntax
if (response && response.data && response.data.someData && response.data.someData.someMoreData) {}
with this neater syntax
if( response?.data?.someData?.someMoreData) {}
This syntax is not supported in IE, Opera, Safari and Samsung Android.
For more detail you can check this URL:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining
A: I didn't see (hope I didn't miss it) anyone checking the object before the property. So, this is the shortest and most effective (though not necessarily the most clear):
if (obj && obj.prop) {
// Do something;
}
If the obj or obj.prop is undefined, null, or "falsy", the if statement will not execute the code block. This is usually the desired behavior in most code block statements (in JavaScript).
UPDATE: (7/2/2021)
The latest version of JavaScript introduces a new operator for
optional chaining: ?.
This is probably going to be the most explicit and efficient method of checking for the existence of object properties, moving forward.
Ref: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining
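For instance, a quick sketch of the earlier check rewritten with optional chaining:
// Evaluates to undefined instead of throwing when obj itself
// is null or undefined.
if (obj?.prop) {
    // Do something;
}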
A: The usual way to check if the value of a property is the special value undefined, is:
if(o.myProperty === undefined) {
alert("myProperty value is the special value `undefined`");
}
To check if an object does not actually have such a property, and will therefore return undefined by default when you try to access it:
if(!o.hasOwnProperty('myProperty')) {
alert("myProperty does not exist");
}
To check if the value associated with an identifier is the special value undefined, or if that identifier has not been declared:
if(typeof myVariable === 'undefined') {
alert('myVariable is either the special value `undefined`, or it has not been declared');
}
Note: this last method is the only way to refer to an undeclared identifier without an early error, which is different from having a value of undefined.
In versions of JavaScript prior to ECMAScript 5, the property named "undefined" on the global object was writeable, and therefore a simple check foo === undefined might behave unexpectedly if it had accidentally been redefined. In modern JavaScript, the property is read-only.
However, in modern JavaScript, "undefined" is not a keyword, and so variables inside functions can be named "undefined" and shadow the global property.
If you are worried about this (unlikely) edge case, you can use the void operator to get at the special undefined value itself:
if(myVariable === void 0) {
alert("myVariable is the special value `undefined`");
}
A: Many answers here are vehement in recommending typeof, but typeof is a bad choice. It should never be used for checking whether variables have the value undefined, because it acts as a combined check for the value undefined and for whether a variable exists. In the vast majority of cases, you know when a variable exists, and typeof will just introduce the potential for a silent failure if you make a typo in the variable name or in the string literal 'undefined'.
var snapshot = …;
if (typeof snaposhot === 'undefined') {
// ^
// misspelled¹ – this will never run, but it won’t throw an error!
}
var foo = …;
if (typeof foo === 'undefned') {
// ^
// misspelled – this will never run, but it won’t throw an error!
}
So unless you’re doing feature detection², where there’s uncertainty whether a given name will be in scope (like checking typeof module !== 'undefined' as a step in code specific to a CommonJS environment), typeof is a harmful choice when used on a variable, and the correct option is to compare the value directly:
var foo = …;
if (foo === undefined) {
⋮
}
Some common misconceptions about this include:
*
*that reading an “uninitialized” variable (var foo) or parameter (function bar(foo) { … }, called as bar()) will fail. This is simply not true – variables without explicit initialization and parameters that weren’t given values always become undefined, and are always in scope.
*that undefined can be overwritten. It’s true that undefined isn’t a keyword, but it is read-only and non-configurable. There are other built-ins you probably don’t avoid despite their non-keyword status (Object, Math, NaN…) and practical code usually isn’t written in an actively malicious environment, so this isn’t a good reason to be worried about undefined. (But if you are writing a code generator, feel free to use void 0.)
With how variables work out of the way, it’s time to address the actual question: object properties. There is no reason to ever use typeof for object properties. The earlier exception regarding feature detection doesn’t apply here – typeof only has special behaviour on variables, and expressions that reference object properties are not variables.
This:
if (typeof foo.bar === 'undefined') {
⋮
}
is always exactly equivalent to this³:
if (foo.bar === undefined) {
⋮
}
and taking into account the advice above, to avoid confusing readers as to why you’re using typeof, because it makes the most sense to use === to check for equality, because it could be refactored to checking a variable’s value later, and because it just plain looks better, you should always use === undefined³ here as well.
Something else to consider when it comes to object properties is whether you really want to check for undefined at all. A given property name can be absent on an object (producing the value undefined when read), present on the object itself with the value undefined, present on the object’s prototype with the value undefined, or present on either of those with a non-undefined value. 'key' in obj will tell you whether a key is anywhere on an object’s prototype chain, and Object.prototype.hasOwnProperty.call(obj, 'key') will tell you whether it’s directly on the object. I won’t go into detail in this answer about prototypes and using objects as string-keyed maps, though, because it’s mostly intended to counter all the bad advice in other answers irrespective of the possible interpretations of the original question. Read up on object prototypes on MDN for more!
¹ unusual choice of example variable name? this is real dead code from the NoScript extension for Firefox.
² don’t assume that not knowing what’s in scope is okay in general, though. bonus vulnerability caused by abuse of dynamic scope: Project Zero 1225
³ once again assuming an ES5+ environment and that undefined refers to the undefined property of the global object.
A: Crossposting my answer from related question How can I check for "undefined" in JavaScript?.
Specific to this question, see test cases with someObject.<whatever>.
Some scenarios illustrating the results of the various answers:
http://jsfiddle.net/drzaus/UVjM4/
(Note that the use of var for the in tests makes a difference when in a scoped wrapper)
Code for reference:
(function(undefined) {
var definedButNotInitialized;
definedAndInitialized = 3;
someObject = {
firstProp: "1"
, secondProp: false
// , undefinedProp not defined
}
// var notDefined;
var tests = [
'definedButNotInitialized in window',
'definedAndInitialized in window',
'someObject.firstProp in window',
'someObject.secondProp in window',
'someObject.undefinedProp in window',
'notDefined in window',
'"definedButNotInitialized" in window',
'"definedAndInitialized" in window',
'"someObject.firstProp" in window',
'"someObject.secondProp" in window',
'"someObject.undefinedProp" in window',
'"notDefined" in window',
'typeof definedButNotInitialized == "undefined"',
'typeof definedButNotInitialized === typeof undefined',
'definedButNotInitialized === undefined',
'! definedButNotInitialized',
'!! definedButNotInitialized',
'typeof definedAndInitialized == "undefined"',
'typeof definedAndInitialized === typeof undefined',
'definedAndInitialized === undefined',
'! definedAndInitialized',
'!! definedAndInitialized',
'typeof someObject.firstProp == "undefined"',
'typeof someObject.firstProp === typeof undefined',
'someObject.firstProp === undefined',
'! someObject.firstProp',
'!! someObject.firstProp',
'typeof someObject.secondProp == "undefined"',
'typeof someObject.secondProp === typeof undefined',
'someObject.secondProp === undefined',
'! someObject.secondProp',
'!! someObject.secondProp',
'typeof someObject.undefinedProp == "undefined"',
'typeof someObject.undefinedProp === typeof undefined',
'someObject.undefinedProp === undefined',
'! someObject.undefinedProp',
'!! someObject.undefinedProp',
'typeof notDefined == "undefined"',
'typeof notDefined === typeof undefined',
'notDefined === undefined',
'! notDefined',
'!! notDefined'
];
var output = document.getElementById('results');
var result = '';
for(var t in tests) {
if( !tests.hasOwnProperty(t) ) continue; // bleh
try {
result = eval(tests[t]);
} catch(ex) {
result = 'Exception--' + ex;
}
console.log(tests[t], result);
output.innerHTML += "\n" + tests[t] + ": " + result;
}
})();
And results:
definedButNotInitialized in window: true
definedAndInitialized in window: false
someObject.firstProp in window: false
someObject.secondProp in window: false
someObject.undefinedProp in window: true
notDefined in window: Exception--ReferenceError: notDefined is not defined
"definedButNotInitialized" in window: false
"definedAndInitialized" in window: true
"someObject.firstProp" in window: false
"someObject.secondProp" in window: false
"someObject.undefinedProp" in window: false
"notDefined" in window: false
typeof definedButNotInitialized == "undefined": true
typeof definedButNotInitialized === typeof undefined: true
definedButNotInitialized === undefined: true
! definedButNotInitialized: true
!! definedButNotInitialized: false
typeof definedAndInitialized == "undefined": false
typeof definedAndInitialized === typeof undefined: false
definedAndInitialized === undefined: false
! definedAndInitialized: false
!! definedAndInitialized: true
typeof someObject.firstProp == "undefined": false
typeof someObject.firstProp === typeof undefined: false
someObject.firstProp === undefined: false
! someObject.firstProp: false
!! someObject.firstProp: true
typeof someObject.secondProp == "undefined": false
typeof someObject.secondProp === typeof undefined: false
someObject.secondProp === undefined: false
! someObject.secondProp: true
!! someObject.secondProp: false
typeof someObject.undefinedProp == "undefined": true
typeof someObject.undefinedProp === typeof undefined: true
someObject.undefinedProp === undefined: true
! someObject.undefinedProp: true
!! someObject.undefinedProp: false
typeof notDefined == "undefined": true
typeof notDefined === typeof undefined: true
notDefined === undefined: Exception--ReferenceError: notDefined is not defined
! notDefined: Exception--ReferenceError: notDefined is not defined
!! notDefined: Exception--ReferenceError: notDefined is not defined
A: If you do
if (myvar == undefined )
{
alert('var does not exists or is not initialized');
}
it will fail when the variable myvar does not exist, because myvar is not defined, so the script is broken and the test has no effect.
Because the window object has a global scope (default object) outside a function, a declaration will be 'attached' to the window object.
For example:
var myvar = 'test';
The global variable myvar is the same as window.myvar or window['myvar']
To avoid errors to test when a global variable exists, you better use:
if(window.myvar == undefined )
{
alert('var does not exists or is not initialized');
}
Whether a variable really exists doesn't matter when its value is invalid anyway. Otherwise, it is silly to initialize variables with undefined; it is better to use the value false to initialize. When you know that all variables that you declare are initialized with false, you can simply check their type or rely on !window.myvar to check if it has a proper/valid value. So even when the variable is not defined, !window.myvar is the same whether myvar = undefined, myvar = false or myvar = 0.
When you expect a specific type, test the type of the variable. To speed up testing a condition you better do:
if( !window.myvar || typeof window.myvar != 'string' )
{
alert('var does not exists or is not type of string');
}
When the first and simple condition is true, the interpreter skips the next tests.
It is always better to use the instance/object of the variable to check whether it has a valid value. It is more stable and a better way of programming.
A: Also, the same things can be written shorter:
if (!variable){
// Do it if the variable is undefined
}
or
if (variable){
// Do it if the variable is defined
}
A: Use:
To check if property is undefined:
if (typeof something === "undefined") {
alert("undefined");
}
To check if property is not undefined:
if (typeof something !== "undefined") {
alert("not undefined");
}
A: I'm assuming you're going to also want to check for it being either undefined or null. If so, I suggest:
myVar == null
This is one of the only times a double equals is very helpful as it will evaluate to true when myVar is undefined or null, but it will evaluate to false when it is other falsey values such as 0, false, '', and NaN.
This is the actual source code for Lodash's isNil method.
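For reference, a quick demonstration of how the == null check treats undefined, null and the other falsy values:
console.log(undefined == null); // true
console.log(null == null);      // true
console.log(0 == null);         // false
console.log('' == null);        // false
console.log(false == null);     // false
console.log(NaN == null);       // false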
A: A simple way to check if a key exists is to use in:
if (key in obj) {
// Do something
} else {
// Create key
}
const obj = {
0: 'abc',
1: 'def'
}
const hasZero = 0 in obj
console.log(hasZero) // true
A: In ES6 we can convert all values to Boolean with !!.
Using this, all falsy values become false.
First solution
if (!(!!variable)) {
// Code
}
Second solution
if (!variable) {
// Code
}
A: In JavaScript there is null and there is undefined. They have different meanings.
*
*undefined means that the variable value has not been defined; it is not known what the value is.
*null means that the variable value is defined and set to null (has no value).
Marijn Haverbeke states, in his free, online book "Eloquent JavaScript" (emphasis mine):
There is also a similar value, null, whose meaning is 'this value is defined, but it does not have a value'. The difference in meaning between undefined and null is mostly academic, and usually not very interesting. In practical programs, it is often necessary to check whether something 'has a value'. In these cases, the expression something == undefined may be used, because, even though they are not exactly the same value, null == undefined will produce true.
So, I guess the best way to check if something was undefined would be:
if (something == undefined)
Object properties should work the same way.
var person = {
name: "John",
age: 28,
sex: "male"
};
alert(person.name); // "John"
alert(person.fakeVariable); // undefined
A: In the article Exploring the Abyss of Null and Undefined in JavaScript I read that frameworks like Underscore.js use this function:
function isUndefined(obj){
return obj === void 0;
}
A: Simply put, anything that is not defined in JavaScript is undefined; it doesn't matter if it's a property inside an Object/Array or just a simple variable...
JavaScript has typeof, which makes it very easy to detect an undefined variable.
Simply check if typeof whatever === 'undefined' and it will return a boolean.
That's how the famous function isUndefined() in AngularJs v.1x is written:
function isUndefined(value) {return typeof value === 'undefined';}
So as you see the function receive a value, if that value is defined, it will return false, otherwise for undefined values, return true.
So let's have a look at what the results will be when we pass in values, including object properties like below. This is the list of variables we have:
var stackoverflow = {};
stackoverflow.javascipt = 'javascript';
var today;
var self = this;
var num = 8;
var list = [1, 2, 3, 4, 5];
var y = null;
and we check them as below, you can see the results in front of them as a comment:
isUndefined(stackoverflow); //false
isUndefined(stackoverflow.javascipt); //false
isUndefined(today); //true
isUndefined(self); //false
isUndefined(num); //false
isUndefined(list); //false
isUndefined(y); //false
isUndefined(stackoverflow.java); //true
isUndefined(stackoverflow.php); //true
isUndefined(stackoverflow && stackoverflow.css); //true
As you see, we can check anything using something like this in our code. As mentioned, you can simply use typeof directly, but if you are using it over and over, create a function like the Angular sample I shared and keep reusing it, following the DRY pattern.
One more thing: to check a property on an object in a real application, when you are not sure whether the object even exists, check that the object exists first.
If you check a property on an object and the object doesn't exist, it will throw an error and stop the whole application from running.
isUndefined(x.css);
VM808:2 Uncaught ReferenceError: x is not defined(…)
So you can simply wrap it inside an if statement like below:
if(typeof x !== 'undefined') {
//do something
}
Which is also equal to isDefined in Angular 1.x...
function isDefined(value) {return typeof value !== 'undefined';}
Other JavaScript frameworks like Underscore have a similar check, but I recommend you use typeof if you are not already using any framework.
I'm also adding this section from MDN, which has useful information about typeof, undefined and void(0).
Strict equality and undefined You can use undefined and the strict equality and inequality operators to determine whether a variable has
a value. In the following code, the variable x is not defined, and the
if statement evaluates to true.
var x;
if (x === undefined) {
// these statements execute
}
else {
// these statements do not execute
}
Note: The strict equality operator rather than the standard equality
operator must be used here, because x == undefined also checks whether
x is null, while strict equality doesn't. null is not equivalent to
undefined. See comparison operators for details.
Typeof operator and undefined
Alternatively, typeof can be used:
var x;
if (typeof x === 'undefined') {
// these statements execute
}
One reason to use typeof is that it does not throw an error if the
variable has not been declared.
// x has not been declared before
if (typeof x === 'undefined') { // evaluates to true without errors
// these statements execute
}
if (x === undefined) { // throws a ReferenceError
}
However, this kind of technique should be avoided. JavaScript is a
statically scoped language, so knowing if a variable is declared can
be read by seeing whether it is declared in an enclosing context. The
only exception is the global scope, but the global scope is bound to
the global object, so checking the existence of a variable in the
global context can be done by checking the existence of a property on
the global object (using the in operator, for instance).
Void operator and undefined
The void operator is a third alternative.
var x;
if (x === void 0) {
// these statements execute
}
// y has not been declared before
if (y === void 0) {
// throws a ReferenceError (in contrast to `typeof`)
}
more > here
A: ECMAScript 2020 introduced a new feature, optional chaining, which you can use to access a property of an object only when the object is defined, like this:
const userPhone = user?.contactDetails?.phone;
It will reference to the phone property only when user and contactDetails are defined.
Ref. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining
A: What does this mean: "undefined object property"?
Actually it can mean two quite different things! First, it can mean the property that has never been defined in the object and, second, it can mean the property that has an undefined value. Let's look at this code:
var o = { a: undefined }
Is o.a undefined? Yes! Its value is undefined. Is o.b undefined? Sure! There is no property 'b' at all! OK, see now how different approaches behave in both situations:
typeof o.a == 'undefined' // true
typeof o.b == 'undefined' // true
o.a === undefined // true
o.b === undefined // true
'a' in o // true
'b' in o // false
We can clearly see that typeof obj.prop == 'undefined' and obj.prop === undefined are equivalent, and they do not distinguish those different situations. And 'prop' in obj can detect the situation when a property hasn't been defined at all and doesn't pay attention to the property value which may be undefined.
So what to do?
1) You want to know if a property is undefined by either the first or second meaning (the most typical situation).
obj.prop === undefined // IMHO, see "final fight" below
2) You want to just know if object has some property and don't care about its value.
'prop' in obj
Notes:
*
*You can't check an object and its property at the same time. For example, this x.a === undefined or this typeof x.a == 'undefined' raises ReferenceError: x is not defined if x is not defined.
*Variable undefined is a global variable (so actually it is window.undefined in browsers). It has been supported since the ECMAScript 1st Edition, and since ECMAScript 5 it is read-only. So in modern browsers it can't be redefined to true as many authors love to frighten us with, but this is still true for older browsers.
Final fight: obj.prop === undefined vs typeof obj.prop == 'undefined'
Pluses of obj.prop === undefined:
*
*It's a bit shorter and looks a bit prettier
*The JavaScript engine will give you an error if you have misspelled undefined
Minuses of obj.prop === undefined:
*
*undefined can be overridden in old browsers
Pluses of typeof obj.prop == 'undefined':
*
*It is really universal! It works in new and old browsers.
Minuses of typeof obj.prop == 'undefined':
*
*'undefned' (misspelled) here is just a string constant, so the JavaScript engine can't help you if you have misspelled it like I just did.
Update (for server-side JavaScript):
Node.js supports the global variable undefined as global.undefined (it can also be used without the 'global' prefix). I don't know about other implementations of server-side JavaScript.
A: 'if (window.x) { }' is error safe
Most likely you want if (window.x). This check is safe even if x hasn't been declared (var x;) - browser doesn't throw an error.
Example: I want to know if my browser supports History API
if (window.history) {
history.call_some_function();
}
How this works:
window is an object which holds all global variables as its members, and it is legal to try to access a non-existing member. If x hasn't been declared or hasn't been set then window.x returns undefined. undefined leads to false when if() evaluates it.
A: Reading through this, I'm amazed I didn't see this. I have found multiple algorithms that would work for this.
Never Defined
If a property's value is undefined (whether it was never defined or was explicitly set to undefined), this will return true; it will not return true if the value is defined as null. This is helpful if you want true to be returned for values set as undefined
if(obj.prop === void 0) console.log("The value has never been defined");
Defined as undefined Or never Defined
If you want it to result as true for values defined with the value of undefined, or never defined, you can simply use === undefined
if(obj.prop === undefined) console.log("The value is defined as undefined, or never defined");
Defined as a falsy value, undefined, null, or never defined.
Commonly, people have asked me for an algorithm to figure out if a value is either falsy, undefined, or null. The following works.
if(obj.prop == false || obj.prop === null || obj.prop === undefined) {
console.log("The value is falsy, null, or undefined");
}
A: The solution is incorrect. In JavaScript,
null == undefined
will return true, because the loose equality operator treats null and undefined as equal by definition (they are not converted to booleans here). The correct way would be to check
if (something === undefined)
which is the identity operator...
A: "propertyName" in obj //-> true | false
A: You can get an array of all undefined properties, with their paths, using the following code.
function getAllUndefined(object) {

    function convertPath(arr, key) {
        var path = "";
        for (var i = 1; i < arr.length; i++) {
            path += arr[i] + "->";
        }
        path += key;
        return path;
    }

    var stack = [];
    var saveUndefined = [];

    function getUndefined(obj, key) {
        if (obj === null) {
            // typeof null is "object", so rule it out first
            return false;
        }
        switch (typeof obj) {
            case "undefined":
                return true;
            case "object":
                break; // recurse below
            default:
                // string, number, boolean, function, ...
                return false;
        }
        stack.push(key);
        for (var k in obj) {
            if (obj.hasOwnProperty(k)) {
                if (getUndefined(obj[k], k)) {
                    saveUndefined.push(convertPath(stack, k));
                }
            }
        }
        stack.pop();
        return false;
    }

    getUndefined({
        "": object
    }, "");
    return saveUndefined;
}
jsFiddle link
A: Compare with void 0, for terseness.
if (foo !== void 0)
It's not as verbose as if (typeof foo !== 'undefined')
A: I'm surprised I haven't seen this suggestion yet, but it gives even more specificity than testing with typeof. Use Object.getOwnPropertyDescriptor() if you need to know whether an object property was initialized with undefined or if it was never initialized:
// to test someObject.someProperty
var descriptor = Object.getOwnPropertyDescriptor(someObject, 'someProperty');
if (typeof descriptor === 'undefined') {
// was never initialized
} else if (typeof descriptor.value === 'undefined') {
if (descriptor.get || descriptor.set) {
// is an accessor property, defined via getter and setter
} else {
// is initialized with `undefined`
}
} else {
// is initialized with some other value
}
A: Introduced in ECMAScript 6, we can now deal with undefined in a new way using Proxies. A proxy can be used to return a default value for any property which doesn't exist, so that we don't have to check each time whether it actually exists.
var handler = {
get: function(target, name) {
return name in target ? target[name] : 'N/A';
}
};
var p = new Proxy({}, handler);
p.name = 'Kevin';
console.log('Name: ' +p.name, ', Age: '+p.age, ', Gender: '+p.gender)
Will output the below text without getting any undefined.
Name: Kevin , Age: N/A , Gender: N/A
A: A version for use with dynamic variable names:
Did you know?
var boo ='lala';
function check(){
    // Called without a receiver, so in non-strict code `this` is the
    // global object (window), which holds all global variables.
    if (this['foo']) {
        console.log('foo is here');
    } else {
        console.log('have no foo');
    }
    if (this['boo']) {
        console.log('boo is here');
    } else {
        console.log('have no boo');
    }
}
check();
A: Handle undefined with a small helper that falls back to a default value:
var obj = {
    und: undefined,
    notundefined: 'hi i am not undefined'
}

function isUndefined(variable, defaultvalue = '') {
    if (variable == undefined) {
        return defaultvalue;
    }
    return variable;
}

console.log(isUndefined(obj.und, 'i am print'))
console.log(isUndefined(obj.notundefined, 'i am print'))
A: This is probably the only explicit form of determining whether an existing property name has an explicit and intended value of undefined (which is, nonetheless, a JavaScript type).
"propertyName" in containerObject && ""+containerObject["propertyName"] == "undefined";
>> true \ false
This expression will only return true if the property name of the given context exists (truly) and only if its intended value is explicitly undefined.
There will be no false positives with empty or blank strings, zeros, nulls, empty arrays and the like. It first makes sure the property name exists (otherwise it would be a false positive), then it explicitly checks whether its value is undefined, i.e. of the undefined JavaScript type in its string representation form (literally "undefined"); therefore == instead of ===, because no further conversion is possible. This expression only returns true when both conditions are met. E.g. if the property name doesn't exist, it will return false, which is the only correct result, since nonexistent properties can't have values, not even an undefined one.
Example:
containerObject = { propertyName: void "anything" }
>> Object { propertyName: undefined }
// Now the testing
"propertyName" in containerObject && ""+containerObject["propertyName"] == "undefined";
>> true
/* Which makes sure that nonexistent property will not return a false positive
* unless it is previously defined */
"foo" in containerObject && ""+containerObject["foo"] == "undefined";
>> false
A: You can use the object's built-in hasOwnProperty function like this:
var obj = {
    age: 12
}
if (obj.hasOwnProperty('age')) {
    console.log('the object has an own property called "age"');
}
The above method returns true if the object has that own property, even when the property's value is undefined.
A: There are a few little helpers in the Lodash library:
isUndefined - to check if value is undefined.
_.isUndefined(undefined) // => true
_.isUndefined(null) // => false
has - to check if object contains a property
const object = { 'a': { 'b': 2 } }
_.has(object, 'a.b') // => true
_.has(object, 'a.c') // => false
A: I found this article, 7 Tips to Handle undefined in JavaScript, which shows really interesting things about undefined. For example:
The existence of undefined is a consequence of JavaScript's permissive nature, which allows the usage of the following (each case is demonstrated in the sketch after this list):
*
*uninitialized variables
*non-existing object properties or methods
*out of bounds indexes to access array elements
*the invocation result of a function that returns nothing
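A short sketch covering all four sources of undefined in one place:
var notInitialized;              // uninitialized variable
console.log(notInitialized);     // undefined

var obj = {};
console.log(obj.missingProp);    // non-existing object property

var arr = [1, 2, 3];
console.log(arr[10]);            // out-of-bounds array index

function returnsNothing() {}
console.log(returnsNothing());   // a function that returns nothing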
A: In JavaScript, there are truthy and falsy expressions. If you want to check whether a property is undefined or not, there is a straightforward way using an if condition, as given below,
*
*Using truthy/falsy concept.
if(!ob.someProp){
console.log('someProp is falsy')
}
However, there are several more approaches to check whether the object has a property, but they seem longer to me. Here they are.
*Using === undefined check in if condition
if(ob.someProp === undefined){
console.log('someProp is undefined')
}
*Using typeof
typeof acts as a combined check for the value undefined and for whether a variable exists.
if(typeof ob.someProp === 'undefined'){
console.log('someProp is undefined')
}
*Using hasOwnProperty method
JavaScript objects have the hasOwnProperty function built into the object prototype.
if(!ob.hasOwnProperty('someProp')){
console.log('someProp is undefined')
}
Not going too deep, but the 1st way looks shortest and best to me. Here are the details on truthy/falsy values in JavaScript; undefined is one of the falsy values listed there, so the if condition behaves normally without any glitch. Apart from undefined, the values NaN, false (obviously), '' (empty string) and the number 0 are also falsy.
Warning: make sure the property value cannot legitimately be falsy, otherwise the if condition will treat it as missing. For such a case, you can use the hasOwnProperty method.
A: Review
A lot of the given answers give a wrong result because they do not distinguish between the case when an object property does not exist and the case when a property has the value undefined. Here is proof for the most popular solutions:
let obj = {
a: 666,
u: undefined // The 'u' property has value 'undefined'
// The 'x' property does not exist
}
console.log('>>> good results:');
console.log('A', "u" in obj, "x" in obj);
console.log('B', obj.hasOwnProperty("u"), obj.hasOwnProperty("x"));
console.log('\n>>> bad results:');
console.log('C', obj.u === undefined, obj.x === undefined);
console.log('D', obj.u == undefined, obj.x == undefined);
console.log('E', obj["u"] === undefined, obj["x"] === undefined);
console.log('F', obj["u"] == undefined, obj["x"] == undefined);
console.log('G', !obj.u, !obj.x);
console.log('H', typeof obj.u === 'undefined', typeof obj.x === 'undefined');
A: if (somevariable == undefined) {
alert('the variable is not defined!');
}
You can also make it into a function, as shown here:
function isset(varname){
return(typeof(window[varname]) != 'undefined');
}
A: Object.prototype.hasOwnProperty.call(o, 'propertyname');
This doesn't look up through the prototype chain, however.
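For example (a quick sketch of the difference from the in operator):
var o = Object.create({ inherited: 1 });
o.own = 2;

console.log(Object.prototype.hasOwnProperty.call(o, 'own'));       // true
console.log(Object.prototype.hasOwnProperty.call(o, 'inherited')); // false -- the prototype chain is not consulted
console.log('inherited' in o);                                     // true -- `in` does walk the chain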
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3178"
} |
Q: Generating gradients programmatically? Given 2 rgb colors and a rectangular area, I'd like to generate a basic linear gradient between the colors. I've done a quick search and the only thing I've been able to find is this blog entry, but the example code seems to be missing, or at least it was as of this posting. Anything helps, algorithms, code examples, whatever. This will be written in Java, but the display layer is already taken care of, I just need to figure out how to figure out what to display.
A: Using the basic AWT classes, you could do something like this:
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;
import javax.swing.JPanel;
public class LinearGradient extends JPanel {
public void paint(Graphics g) {
Graphics2D g2 = (Graphics2D) g;
Color color1 = Color.RED;
Color color2 = Color.BLUE;
int steps = 30;
int rectWidth = 10;
int rectHeight = 10;
for (int i = 0; i < steps; i++) {
float ratio = (float) i / (float) steps;
int red = (int) (color2.getRed() * ratio + color1.getRed() * (1 - ratio));
int green = (int) (color2.getGreen() * ratio + color1.getGreen() * (1 - ratio));
int blue = (int) (color2.getBlue() * ratio + color1.getBlue() * (1 - ratio));
Color stepColor = new Color(red, green, blue);
Rectangle2D rect2D = new Rectangle2D.Float(rectWidth * i, 0, rectWidth, rectHeight);
g2.setPaint(stepColor);
g2.fill(rect2D);
}
}
}
A: You want an interpolation between the first and the second colour. Interpolating colours is easy: calculate the same interpolation for each of the components (R, G, B). There are many ways to interpolate. The easiest is to use linear interpolation: just take percentage p of the first colour and percentage 1 - p of the second:
R = firstCol.R * p + secondCol.R * (1 - p)
There's another question related to this.
There are other methods of interpolation that sometimes work better. For example, using a bell-shaped (sigmoidal) interpolation function makes the transition smoother.
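To make that concrete, here is a small language-agnostic sketch (written in JavaScript here; the idea carries over directly to Java): linear interpolation per channel, with a smoothstep-style sigmoidal remapping of p.
// Linear interpolation of one channel: c1 at p = 0, c2 at p = 1.
function lerp(a, b, p) {
    return a * (1 - p) + b * p;
}
// Smoothstep: an S-shaped (sigmoidal) remapping of p in [0, 1]
// that eases the transition in and out at the end points.
function smoothstep(p) {
    return p * p * (3 - 2 * p);
}
// Interpolate two [r, g, b] colours, passing p through smoothstep
// for a softer-looking gradient.
function mixColor(c1, c2, p) {
    var t = smoothstep(p);
    return [
        Math.round(lerp(c1[0], c2[0], t)),
        Math.round(lerp(c1[1], c2[1], t)),
        Math.round(lerp(c1[2], c2[2], t))
    ];
}
console.log(mixColor([255, 0, 0], [0, 0, 255], 0.5)); // [128, 0, 128]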
/EDIT: Oops, you mean using a predefined function. OK, even easier. The blog post you linked now has an example code in Python.
In Java, you could use the GradientPaint.
A: Following up on the excellent answer of David Crow, here's a Kotlin example implementation
fun gradientColor(x: Double, minX: Double, maxX: Double,
from: Color = Color.RED, to: Color = Color.GREEN): Color {
val range = maxX - minX
val p = (x - minX) / range
return Color(
from.red * p + to.red * (1 - p),
from.green * p + to.green * (1 - p),
from.blue * p + to.blue * (1 - p),
1.0
)
}
A: You can use the built-in GradientPaint class.
void paint(Graphics2D g, Rectangle r, Color c1, Color c2)
{
    GradientPaint gp = new GradientPaint(0, 0, c1, r.width, r.height, c2);
    g.setPaint(gp);
    g.fill(r); // fill the rectangle that was passed in
}
A: I've been using RMagick for that. If you need to go further the simple gradient, ImageMagick and one of its wrappers (like RMagick or JMagick for Java) could be useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: ASP.NET AJAX and PageRequestManagerParserErrorException Has anyone run into this error message before when using a timer on an ASP.NET page to update a DataGrid every x seconds?
Searching google yielded this blog entry and many more but nothing that seems to apply to me yet.
The full text of the error message below:
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
A: Many issues can cause that error. It's usually a Response.Write call, but anything that modifies the response can cause it.
We probably won't be able to help you unless you post some pertinent code-behind.
A: The RoleProvider caches role information in a cookie. When the cookie resets during an async postback from AJAX, you will get this error. The solution is to either set the cookieTimeout in the roleManager section of your web.config to a very large number of minutes, or set cacheRolesInCookie=false.
This was a known bug in AJAX 1.0 Extensions. I'm not sure if it was fixed in future releases, and I should have mentioned that I was using AJAX 1.0 extensions in VS2008 targeting the 2.0 framework.
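For reference, a rough sketch of the relevant web.config section (the attribute values here are only illustrative; adjust them to your setup):
<system.web>
  <!-- Either stretch the cookie lifetime well past any async postback... -->
  <roleManager enabled="true" cookieTimeout="1440" />
  <!-- ...or skip caching roles in a cookie altogether: -->
  <!-- <roleManager enabled="true" cacheRolesInCookie="false" /> -->
</system.web>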
Happy programming!
A: Regarding the formatting of your post: If you use the quote-button instead of code-button, people do not have to scroll to see the complete error message.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Do you have any recommended file templates for ReSharper for VB.NET? The ones that stick out and make life useful.
A: Here is one for a test class with NUnit support.
Imports NUnit.Framework

Namespace $NAMESPACE$

    ''' <summary>
    ''' A TestClass
    ''' </summary>
    ''' <remarks></remarks>
    <TestFixture()> _
    Public Class $CLASSNAME$

#Region " Setup and TearDown "

        ''' <summary>
        ''' Sets up the tests
        ''' </summary>
        ''' <remarks></remarks>
        <SetUp()> _
        Public Sub Setup()
        End Sub

        ''' <summary>
        ''' Tears down the test. Is executed after the test is completed
        ''' </summary>
        ''' <remarks></remarks>
        <TearDown()> _
        Public Sub TearDown()
        End Sub

#End Region

#Region " Tests "

        ''' <summary>
        ''' A test
        ''' </summary>
        ''' <remarks></remarks>
        <Test()> _
        Public Sub $Test_Name$()
        End Sub

#End Region

    End Class
End Namespace
A: An MSTest test class. I like my test classes basic, not like the one that is created by VS.
Imports Microsoft.VisualStudio.TestTools.UnitTesting
<TestClass()> _
Public Class $ClassName$
$END$
End Class
$ClassName$ is the "Current file name without extension" macro.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Oracle write to file I am running oracle and have a query which pulls some results from the database. I would like to write the results as a text file. How would I go about doing this?
My preferred way would be by using UTL_FILE. Would someone have an example of how to do this?
A: If you're using Sql Plus, is as easy as:
SQL> spool c:\temp\out.txt
SQL> SELECT * FROM USERS;
SQL> spool off
This three sentences will output the result of the query "SELECT * FROM USERS" to the file c:\temp\out.txt.
You can format this query using the string manipulation functions of Oracle.
A: If you are using PL/SQL then you can use the UTL_FILE package; the difference from using SQL*Plus spool is that the files are written to the server file system. UTL_FILE has a number of limitations, so an alternative on the server side would be to use Java stored procedures.
A: Use UTL_FILE in combination with CREATE DIRECTORY for ease of mapping a directory path to a name (it does not create the actual directory, just a reference to it, so ensure the directory is created first).
Example:
create directory logfile as 'd:\logfile'; -- must have priv to do this

declare
    vFile utl_file.file_type;
begin
    vFile := utl_file.fopen('LOGFILE', 'syslog', 'w'); -- w is write; this returns the file handle
    utl_file.put(vFile, 'Start Logfile');              -- note the use of file handle vFile
    utl_file.fclose(vFile);                            -- note the use of file handle vFile
end;
A: If you're running the query from sqlplus you can use the spool command:
spool /tmp/test.spool
After executing the spool command within a session, all output is sent to the sqlplus console as well as the /tmp/test.spool text file.
A: This seems to be a reasonable tutorial with a few simple examples UTL_FILE example
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Where can I learn more about PyPy's translation function? I've been having a hard time trying to understand PyPy's translation. It looks like something absolutely revolutionary from simply reading the description, however I'm hard-pressed to find good documentation on actually translating a real world piece of code to something such as LLVM. Does such a thing exist? The official PyPy documentation on it just skims over the functionality, rather than providing anything I can try out myself.
A: This document seems to go into quite a bit of detail (and I think a complete description is out of scope for a stackoverflow answer):
*
*http://codespeak.net/pypy/dist/pypy/doc/translation.html
The general idea of translating from one language to another isn't particularly revolutionary, but it has only recently been gaining popularity / applicability in "real-world" applications. GWT does this with Java (generating Javascript) and there is a library for translating Haskell into various other languages as well (called YHC)
A: If you want some hands-on examples, PyPy's Getting Started document has a section titled "Trying out the translator".
A: The PyPy translator is, in general, not intended for wider public use. We use it for translating our own Python interpreter (including the JIT and GCs, both written in RPython, a restricted subset of Python). The idea is that with a good JIT and GC, you'll be able to get speedups even without knowing or using PyPy's translation toolchain (and, more importantly, without restricting yourself to RPython).
Cheers,
fijal
A: Are you looking for Python specific translation, or just the general "how do you compile some code to bytecode"? If the latter is your case, check the LLVM tutorial. I especially find chapter two, which teaches you to write a compiler for your own language, interesting.
A:
It looks like something absolutely revolutionary from simply reading the description,
As far as I know, PyPy is novel in the sense that it is the first system expressly designed for implementing languages. Other tools exist to help with much of the very front end, such as parser generators, or for the very back end, such as code generation, but not much existed for connecting the two.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Assembler IDE/Simulator for beginner I'd like to learn how to program in Assembler. I've done a bit of assembly before (during my A-Level Computing course) but that was very definitely a simplified 'pseudo-assembler'. I've borrowed my Dad's old Z80 Assembler reference manual, and that seems quite interesting so if possible I'd like to have a go with Z80 assembler.
However, I don't have a Z80 processor to hand, and would like to do it on my PC (I have windows or linux so either is good). I've found various assemblers around on the internet, but I'm not particularly interested in assembling down to a hex file, I want to just be able to assemble it to something that some kind of simulator on the PC can run. Preferably this simulator would show me the contents of all the registers, memory locations etc, and let me step through instructions. I've found a few bits of software that suggest they might do this - but they either refuse to compile, or don't seem to work properly. Has anyone got any suggestions? If there are good simulator/IDE things available for another type of assembler then I could try that instead (assuming there is a good online reference manual available).
A: You might want to check out the open source 8085 simulator "GnuSim8085". It's specifically meant to be used for educational purposes, and it was in fact written by a student while preparing for his exams. It runs on both Linux and Windows.
A: WinApe is a good emulator of an Amstrad CPC. The Amstrad CPC was a Home Computer produced in the 80's. It used a Z80 as its CPU. Using the emulator you can display a lot of the internals while programming. It includes a debugger and a disassembler for Z80 code.
A: MipSim is FREE
Main Features of MIPSim 2
*
*Built-in code editor with features like syntax highlighting and folding
*Display register and memory values in different representations (signed integer, unsigned integer, hexadecimal and ASCII)
*Set the block size (full-word, half-word, byte) of the memory cells for easier examination of the memory values
*Change values of registers and memory cells with a single click even during simulation and debugging
*Realtime user-interface updating allows you to see how values of registers and memory cells change during execution
*Built-in debugger with step-by-step instruction execution, instruction skipping and breakpoint features
*Tools for inserting ASCII, UNICODE strings and integer values to memory for testing of your code
*Tools for checking duplicate or missing labels and instruction parameters
*Save computer state (values of all registers and memory cells) so that next time you run the simulator you can continue from where you left!
*Set the simulation speed - low speeds are great to trace your code and to see how it behaves
*Encode instructions - produce machine code in either binary or hexadecimal representation
*Catch assembly time and runtime errors
*Easier debugging with descriptive error messages
*Multi-threaded design - MIPSim doesn't get stuck (hopefully ;) even if the assembly code executed is erroneous or contains infinite loops
*MIPSim API - make your own programs that can read from and write to the registers and memory of MIPSim, great flexibility for powerful testing!
A: If you are on Windows, 8085 Simulator is the best choice.
Its user interface is better than that of any other simulator, and it provides a live view of the memory map at all times (including during execution).
It does not support Windows 98 or lower, though; for those you need to check other simulators like GNUSim8085.
A: Aim higher! Try and get a simulator for a more powerful assembly language. Remember, Z80 and 808x were low-end processors with low-end and awkward instruction sets.
Something like VAX from DEC was regarded as the Rolls-Royce of instruction sets. And then there are crazy Risc instruction sets that do some really strange things. Maybe you can find definitions of those so that you can have a crack at implementing them.
A: You may be interested in this for a Z80 simulator, and I've had good experiences with WinAsm.
A: You might also consider learning x86 assembly language, which you could do using in-line assembler in Visual Studio - although it's a larger instruction set than Z80, you would have the advantage of being able to use much better tools than would be available for the Z80.
I've also just remembered that the Keil 8051 and Arm tools have a simulator in the IDE - there are size-restricted versions of these available for free download from www.keil.com
A: If you happen to already know .NET, then this may be of use:
http://www.viksoe.dk/code/asmil.htm
It's a little bit limited, and may only work with .NET 1.1, but you could atleast use a "modern" IDE for it, and there are plenty of docs around for it.
<%@ page language="Asm80386" %>
<%
Str: DB "Testing...", 0
mov eax, -2
cmp eax, 2
jle Label1
xor eax, eax
Label1:
lea esi, Str
push esi
call "Response.Write(string)"
pop esi
%>
<br>EAX: <%= eax %>
Another option, if you want to go "hard core" is get something like FreeDOS and VMWare, and use that. I'm sure a garage sale (car boot sale? yard sale?) or second hand book shop would have a copy of Peter Norton's old DOS interrupts bible. :)
Personally, I learned x86 asm by using Turbo Pascal (which I think is now free from Borland?), which had the ability to embed assembly instructions inside a function. Made it easier to setup the app, and I could focus on the stuff I wanted to do. I later used MacVAX at Auckland Uni, which was ok, but the VAX is very much dead - you may as well learn x86 :)
A: SimpSim is definitely worth a look. It's Windows only, but the feature set is pretty decent:
*
*Main memory and register display
*Built-in editor with syntax highlighting
*Run, step, and break functions
A: This may not mean much to you now, but for anyone stopping by: this is the best assembly code simulator I have come across. Truly worth it!
http://www.emu8086.com/
A:
I've found a few bits of software that suggest they might do this - but they either
refuse to compile, or don't seem to work properly. Has anyone got any suggestions?
Write one. You're best off picking a nice, simple instruction set (Z80 should be perfect). I remember doing this as a first-year undergraduate exercise - I think we wrote the simulator in C++ and simulated 6800 assembly, but really any language/instruction set will do.
The idea of "learning assembly language" these days is to get the idea of how computers work at the lowest level, only a select few (compiler writers, etc.) have any real reason to actually be writing assembly code these days. Modern processors are stuffed full of features designed to be used by compilers to help optimise code for speed/concurrent execution/power consumption/etc., and trying to write assembly by hand for a modern processor would be a nightmare.
Don't fret about getting your application production-ready unless you want to - in all likelihood the bits of software you've found so far were written by people exactly like you who wanted to figure out how assembly works and wrote their own simulator, then realised how much work would be involved in getting it "production ready" so the general public could use it.
A: Take a look at Thomas Scherrer Z80 Emulators for a listing of potential emulators you could use.
A: I write z80 asm for the ZX Spectrum (still, I know :) ) and use SJasmPlus to link to a spectrum emulator file. Lots of the better Spectrum emulators like Fuse and ZXSpin have built-in editors as well for on-the-fly debugging and patching.
A: When I was in college we used PIC microprocessors. They are made by a company called Microchip. They also have a great IDE with a chip emulator/simulator that can allow you to do things without actually having the chips.
A: Why use an emulator?
Download MASM or NASM and write good old 80386 architecture. Plenty of online samples and learning tools.
Plenty of real-world reasons to use assembler!
A: There is a simulator called Visual6502 for teaching the fundamentals of microprocessor architecture. It has an editor, an assembler, I/O operations, and an animation of how a microprocessor works. It is available at the following link.
http://www.pcsistem.net/visual/index.htm
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Find number of files with a specific extension, in all subdirectories Is there a way to find the number of files of a specific type without having to loop through all results in a Directory.GetFiles() or similar method? I am looking for something like this:
int ComponentCount = MagicFindFileCount(@"c:\windows\system32", "*.dll");
I know that I can make a recursive function to call Directory.GetFiles, but it would be much cleaner if I could do this without all the iterating.
EDIT: If it is not possible to do this without recursing and iterating yourself, what would be the best way to do it?
A: The slickest method would be to use LINQ:
var fileCount = (from file in Directory.EnumerateFiles(@"H:\iPod_Control\Music", "*.mp3", SearchOption.AllDirectories)
select file).Count();
A: You can use this overload of GetFiles:
Directory.GetFiles Method (String,
String, SearchOption)
and this member of SearchOption:
AllDirectories - Includes the current
directory and all the subdirectories
in a search operation. This option
includes reparse points like mounted
drives and symbolic links in the
search.
GetFiles returns an array of string so you can just get the Length which is the number of files found.
A: I was looking for a more optimized version. Since I haven't found it, I decided to code it and share it here:
public static int GetFileCount(string path, string searchPattern, SearchOption searchOption)
{
var fileCount = 0;
var fileIter = Directory.EnumerateFiles(path, searchPattern, searchOption);
foreach (var file in fileIter)
fileCount++;
return fileCount;
}
All the solutions using the GetFiles/GetDirectories are kind of slow since all those objects need to be created. Using the enumeration, it doesn't create any temporary objects (FileInfo/DirectoryInfo).
see Remarks http://msdn.microsoft.com/en-us/library/dd383571.aspx for more information
A: You should use the Directory.GetFiles(path, searchPattern, SearchOption) overload of Directory.GetFiles().
Path specifies the path, searchPattern specifies your wildcards (e.g., *, *.format) and SearchOption provides the option to include subdirectories.
The Length property of the return array of this search will provide the proper file count for your particular search pattern and option:
string[] files = directory.GetFiles(@"c:\windows\system32", "*.dll", SearchOption.AllDirectories);
return files.Length;
EDIT: Alternatively you can use Directory.EnumerateFiles method
return Directory.EnumerateFiles(@"c:\windows\system32", "*.dll", SearchOption.AllDirectories).Count();
A: Using recursion your MagicFindFileCount would look like this:
private int MagicFindFileCount( string strDirectory, string strFilter ) {
int nFiles = Directory.GetFiles( strDirectory, strFilter ).Length;
foreach( String dir in Directory.GetDirectories( strDirectory ) ) {
nFiles += MagicFindFileCount(dir, strFilter);
}
return nFiles;
}
Though Jon's solution might be the better one.
A: I have an app which generates counts of the directories and files in a parent directory. Some of the directories contain thousands of sub directories with thousands of files in each. To do this whilst maintaining a responsive ui I do the following ( sending the path to ADirectoryPathWasSelected method):
public class DirectoryFileCounter
{
int mDirectoriesToRead = 0;
int mFilesToRead = 0;
// Pass this method the parent directory path
public void ADirectoryPathWasSelected(string path)
{
// create a task to do this in the background for responsive ui
// state is the path
Task.Factory.StartNew((state) =>
{
try
{
// Get the first layer of sub directories
this.AddCountFilesAndFolders(state.ToString());
}
catch // Add Handlers for exceptions
{}
}, path);
}
// This method is called recursively
private void AddCountFilesAndFolders(string path)
{
try
{
// Only doing the top directory to prevent an exception from stopping the entire recursion
var directories = Directory.EnumerateDirectories(path, "*.*", SearchOption.TopDirectoryOnly);
// calling class is tracking the count of directories
this.mDirectoriesToRead += directories.Count();
// get the child directories
// this uses an extension method to the IEnumerable<V> interface,
// which will run a function on an object. In this case 'd' is the
// collection of directories
directories.ActionOnEnumerable(d => AddCountFilesAndFolders(d));
}
catch // Add Handlers for exceptions
{
}
try
{
// count the files in the directory
this.mFilesToRead += Directory.EnumerateFiles(path).Count();
}
catch// Add Handlers for exceptions
{ }
}
}
// Extension class
public static class Extensions
{
// this runs the supplied method on each object in the supplied enumerable
public static void ActionOnEnumerable<V>(this IEnumerable<V> nodes,Action<V> doit)
{
foreach (var node in nodes)
{
doit(node);
}
}
}
A: Someone has to do the iterating part.
AFAIK, there is no such method present in .NET already, so I guess that someone has to be you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: JavaServiceWrapper on 64bit linux, any problems? We've been using the 32bit linux version of the JavaServiceWrapper for quite a while now and it's working perfectly. We are now considering also using it on 64bit linux systems. There are downloads for 64bit binaries on the website, but looking into Makefile for the 64bit version I found the following comment, though:
# This makefile is in progess. It builds, but the resulting libwrapper.so does not yet work.
# If you know how to fix it then please help out.
Can anyone confirm, that this comment is just outdated and the resulting wrapper will work without flaws?
A: From http://wrapper.tanukisoftware.org/doc/english/introduction.html :
Binary distributions are provided for
the following list of platforms and
are available on the download page.
Only OS versions which are known to
work have been listed.
(snip...)
*
*linux - Linux kernels; 2.2.x 2.4.x, 2.6.x. Known to work with Debian and Red Hat, but should work with any
distribution. Currently supported on
both 32 and 64-bit x86, and 64-bit ppc
systems.
A: I've had it running in production on 64-bit red hat without any trouble for the last year or so.
A: Take a look at http://yajsw.sourceforge.net/.
It's a free and rather compatible reimplementation of the Tanuki Software Java Service Wrapper, featuring free 64-bit support.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: When to choose checked and unchecked exceptions In Java (or any other language with checked exceptions), when creating your own exception class, how do you decide whether it should be checked or unchecked?
My instinct is to say that a checked exception would be called for in cases where the caller might be able to recover in some productive way, where as an unchecked exception would be more for unrecoverable cases, but I'd be interested in other's thoughts.
A: You can call it a checked or unchecked exception; however, both types of exception can be caught by the programmer, so the best answer is: write all of your exceptions as unchecked and document them. That way the developer who uses your API can choose whether he or she wants to catch that exception and do something. Checked exceptions are a complete waste of everyone's time and it makes your code a shocking nightmare to look at. Proper unit testing will then bring up any exceptions that you may have to catch and do something with.
A: Checked Exception:
If the client can recover from an exception and would like to continue, use a checked exception.
Unchecked Exception:
If the client can't do anything after the exception, then raise an unchecked exception.
Example: Suppose you are expected to do an arithmetic operation in a method A() and, based on the output from A(), you have to do another operation. If the output from method A() is unexpectedly null at run time, then a NullPointerException is thrown, which is a runtime exception.
Refer here
A: From A Java Learner:
When an exception occurs, you have to
either catch and handle the exception,
or tell compiler that you can't handle
it by declaring that your method
throws that exception, then the code
that uses your method will have to
handle that exception (even it also
may choose to declare that it throws
the exception if it can't handle it).
Compiler will check that we have done
one of the two things (catch, or
declare). So these are called Checked
exceptions. But Errors, and Runtime
Exceptions are not checked for by
compiler (even though you can choose
to catch, or declare, it is not
required). So, these two are called
Unchecked exceptions.
Errors are used to represent those
conditions which occur outside the
application, such as crash of the
system. Runtime exceptions are
usually occur by fault in the
application logic. You can't do
anything in these situations. When
runtime exception occur, you have to
re-write your program code. So, these
are not checked by compiler. These
runtime exceptions will uncover in
development, and testing period. Then
we have to refactor our code to remove
these errors.
A:
The rule I use is: never use unchecked exceptions! (or when you don't see any way around it)
There’s a case for the opposite: never use checked exceptions. I’m reluctant to take sides in the debate (there’s definitely good arguments on both sides!) but a fair number of experts feel that checked exceptions were a wrong decision in hindsight.
For some discussion, check the WikiWikiWeb’s “Checked exceptions are of dubious value”. Another example of an early, extensive argument is Rod Waldhoff’s blog post.
A: On any large enough system, with many layers, checked exceptions are useless because, in any case, you need an architecture-level strategy for how exceptions will be handled (use a fault barrier).
With checked exceptions your error-handling strategy is micro-managed, and that is unbearable on any large system.
Most of the time you don't know if an error is "recoverable" because you don't know in what layer the caller of your API is located.
Let's say that I create a StringToInt API that converts the string representation of an integer to an int. Must I throw a checked exception if the API is called with the "foo" string? Is it recoverable? I don't know, because in his layer the caller of my StringToInt API may already have validated the input, and if this exception is thrown it's either a bug or data corruption, and it isn't recoverable for this layer.
In this case the caller of the API does not want to catch the exception. He only wants to let the exception "bubble up". If I chose a checked exception, this caller will have plenty of useless catch block only to artificially rethrow the exception.
What is recoverable depends most of the time on the caller of the API, not on the writer of the API. An API should not use checked exceptions, as only unchecked exceptions allow the caller to choose to either catch or ignore an exception.
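To make that concrete, here is a minimal sketch (a hypothetical StringToInt, not a real API) of why the unchecked choice leaves the decision with each layer:

public final class StringToInt {
    private StringToInt() {}

    /** @throws IllegalArgumentException if s is not a decimal integer */
    public static int parse(String s) {
        if (s == null || !s.matches("-?\\d+")) {
            // Unchecked: layers that already validated their input
            // are not forced to write artificial catch-and-rethrow blocks.
            throw new IllegalArgumentException("Not an integer: " + s);
        }
        return Integer.parseInt(s);
    }

    public static void main(String[] args) {
        // A layer handling raw user input may choose to catch it...
        int port;
        try {
            port = parse("80x0");
        } catch (IllegalArgumentException e) {
            port = 8080; // recover with a default
        }
        // ...while a layer whose input was validated upstream lets it bubble up.
        System.out.println(port + " " + parse("42"));
    }
}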
A: You're correct.
Unchecked exceptions are used to let the system fail fast which is a good thing. You should clearly state what is your method expecting in order to work properly. This way you can validate the input only once.
For instance:
/**
* @params operation - The operation to execute.
* @throws IllegalArgumentException if the operation is "exit"
*/
public final void execute( String operation ) {
if( "exit".equals(operation)){
throw new IllegalArgumentException("I told you not to...");
}
this.operation = operation;
.....
}
private void secretCode(){
// we perform the operation.
// at this point the opreation was validated already.
// so we don't worry that operation is "exit"
.....
}
Just to give an example. The point is, if the system fails fast, then you'll know where and why it failed. You'll get a stack trace like:
IllegalArgumentException: I told you not to use "exit"
at some.package.AClass.execute(Aclass.java:5)
at otherPackage.OtherClass.delegateTheWork(OtherClass.java:4569)
at ......
And you'll know what happened. The OtherClass in the "delegateTheWork" method ( at line 4569 ) called your class with the "exit" value, even when it shouldn't etc.
Otherwise you would have to sprinkle validations all over your code, and that's error-prone. Plus, sometimes it is hard to track what went wrong and you may expect hours of frustrating debugging.
The same thing happens with NullPointerExceptions. If you have a 700-line class with some 15 methods that uses 30 attributes, none of which can be null, then instead of validating for nullability in each of those methods you could make all those attributes read-only and validate them in the constructor or factory method.
public static MyClass createInstance( Object data1, Object data2 /* etc */ ){
    if( data1 == null ){ throw new NullPointerException( "data1 cannot be null"); }
}
// the rest of the methods don't validate data1 anymore.
public void method1(){ // don't worry, nothing is null
....
}
public void method2(){ // don't worry, nothing is null
....
}
public void method3(){ // don't worry, nothing is null
....
}
Checked exceptions are useful when the programmer ( you or your co-workers ) did everything right, validated the input, ran tests, and all the code is perfect, but the code connects to a third-party webservice that may be down ( or a file you were using was deleted by another external process etc ). The webservice may even be validated before the connection is attempted, but during the data transfer something went wrong.
In that scenario there is nothing that you or your co-workers can do to help it. But still you have to do something and not let the application just die and disappear in the eyes of the user. You use a checked exception for that and handle the exception. What can you do when that happens? Most of the time, just attempt to log the error, probably save your work ( the app's work ) and present a message to the user ( "The site blabla is down, please retry later", etc. ).
If checked exceptions are overused ( by adding "throws Exception" to all the method signatures ), then your code will become very fragile, because everyone will ignore that exception ( because it is too general ) and the quality of the code will be seriously compromised.
If you overuse unchecked exceptions, something similar will happen. The users of that code won't know if something may go wrong, and a lot of try{...}catch( Throwable t ) blocks will appear.
A: Here I want to share the opinion I have formed after many years of development experience:
*
*Checked exception. This is part of a business use case or call flow; it is part of the application logic we expect or don't expect. For example connection rejected, condition not satisfied, etc. We need to handle it and show a corresponding message to the user with instructions about what happened and what to do next (try again later, etc.).
I usually call it a post-processing exception or "user" exception.
*Unchecked exception. This is a programming exception, some mistake in the software code (a bug, a defect), and it reflects the way programmers must use an API as per its documentation. If an external lib/framework doc says it expects data in some range and non-null (because an NPE or IllegalArgumentException will be thrown otherwise), the programmer should expect that and use the API correctly as per the documentation. Otherwise the exception will be thrown.
I usually call it a pre-processing exception or "validation" exception.
By target audience. Now let's talk about the target audience, or the group of people the exceptions have been designed for (in my opinion):
*
*Checked exception. Target audience is users/clients.
*Unchecked exception. Target audience is developers. In other words, unchecked exceptions are designed for developers only.
By application development lifecycle phase.
*
*Checked exception is designed to exist during the whole production lifecycle as the normal and expected mechanism by which an application handles exceptional cases.
*Unchecked exception is designed to exist only during the application development/testing lifecycle; all of them should be fixed during that time and none should be thrown once an application is already running in production.
The reason why frameworks usually use unchecked exceptions (Spring, for example) is that a framework cannot determine the business logic of your application; it is up to the developers to catch them and design their own logic.
A: We have to distinguish these two types of exception based on whether it is programmer error or not.
*
*If an error is a programmer error, it must be an Unchecked Exception. For example: SQLException/IOException/NullPointerException. These exceptions are programming errors. They should be handled by the programmer. While in the JDBC API SQLException is a Checked Exception, in Spring's JdbcTemplate it is an Unchecked Exception. The programmer doesn't worry about SQLException when using Spring.
*If an error is not a programmer error and the cause is external, it must be a Checked Exception. For example: if the file is deleted or the file permissions are changed by someone else, it should be recovered from.
FileNotFoundException is a good example for understanding the subtle differences. FileNotFoundException is thrown in case a file is not found. There are two reasons for this exception. If the file path is defined by the developer or taken from the end user via a GUI, it should be an Unchecked Exception. If the file is deleted by someone else, it should be a Checked Exception.
A Checked Exception can be handled in two ways: using try-catch, or propagating the exception. In case of propagation, all methods in the call stack will be tightly coupled because of the exception handling. That's why we have to use Checked Exceptions carefully.
In case you develop a layered enterprise system, you mostly have to choose unchecked exceptions to throw, but don't forget to use checked exceptions for the cases where you cannot do anything.
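As a short sketch of the two handling options mentioned above (catch locally versus propagate up the call stack):

import java.io.FileNotFoundException;
import java.io.FileReader;

class CheckedHandling {
    // Option 1: try-catch; recover locally, e.g. ask the user for another name.
    static FileReader openOrNull(String name) {
        try {
            return new FileReader(name);
        } catch (FileNotFoundException e) {
            System.err.println("File not found, please enter another name.");
            return null;
        }
    }

    // Option 2: propagate; every caller up the stack must now declare or
    // handle it, which is the coupling cost described above.
    static FileReader open(String name) throws FileNotFoundException {
        return new FileReader(name);
    }
}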
A: Checked Exceptions are great, so long as you understand when they should be used. The Java core API fails to follow these rules for SQLException (and sometimes for IOException) which is why they are so terrible.
Checked Exceptions should be used for predictable, but unpreventable errors that are reasonable to recover from.
Unchecked Exceptions should be used for everything else.
I'll break this down for you, because most people misunderstand what this means.
*
*Predictable but unpreventable: The caller did everything within their power to validate the input parameters, but some condition outside their control has caused the operation to fail. For example, you try reading a file but someone deletes it between the time you check if it exists and the time the read operation begins. By declaring a checked exception, you are telling the caller to anticipate this failure.
*Reasonable to recover from: There is no point telling callers to anticipate exceptions that they cannot recover from. If a user attempts to read from an non-existing file, the caller can prompt them for a new filename. On the other hand, if the method fails due to a programming bug (invalid method arguments or buggy method implementation) there is nothing the application can do to fix the problem in mid-execution. The best it can do is log the problem and wait for the developer to fix it at a later time.
Unless the exception you are throwing meets all of the above conditions it should use an Unchecked Exception.
Reevaluate at every level: Sometimes the method catching the checked exception isn't the right place to handle the error. In that case, consider what is reasonable for your own callers. If the exception is predictable, unpreventable and reasonable for them to recover from then you should throw a checked exception yourself. If not, you should wrap the exception in an unchecked exception. If you follow this rule you will find yourself converting checked exceptions to unchecked exceptions and vice versa depending on what layer you are in.
For both checked and unchecked exceptions, use the right abstraction level. For example, a code repository with two different implementations (database and filesystem) should avoid exposing implementation-specific details by throwing SQLException or IOException. Instead, it should wrap the exception in an abstraction that spans all implementations (e.g. RepositoryException).
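A minimal sketch of that last point, reusing the RepositoryException name from the example above (the JDBC failure is simulated):

import java.sql.SQLException;

// Checked at the abstraction level: repository failures are predictable,
// unpreventable, and reasonable for callers to recover from.
class RepositoryException extends Exception {
    RepositoryException(String message, Throwable cause) {
        super(message, cause);
    }
}

class DatabaseRepository {
    public String load(long id) throws RepositoryException {
        try {
            return queryDatabase(id);
        } catch (SQLException e) {
            // Wrap instead of leaking the implementation detail to callers.
            throw new RepositoryException("Could not load record " + id, e);
        }
    }

    private String queryDatabase(long id) throws SQLException {
        throw new SQLException("simulated failure"); // real JDBC would go here
    }
}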
A: Here is my 'final rule of thumb'.
I use:
*
*unchecked exception within the code of my method for a failure due to the caller (that involves an explicit and complete documentation)
*checked exception for a failure due to the callee that I need to make explicit to anyone wanting to use my code
Compared to the previous answer, this is a clear rationale (upon which one can agree or disagree) for the use of one or the other (or both) kinds of exceptions.
For both of those exceptions, I will create my own unchecked and checked Exceptions for my application (a good practice, as mentioned here), except for very common unchecked exceptions (like NullPointerException).
So for instance, the goal of this particular function below is to make (or get if already exist) an object,
meaning:
*
*the container of the object to make/get MUST exist (responsibility of the CALLER
=> unchecked exception, AND clear javadoc comment for this called function)
*the other parameters can not be null
(choice of the coder to put that on the CALLER: the coder will not check for null parameter but the coder DOES DOCUMENT IT)
*the result CAN NOT BE NULL
(responsibility and choice of the code of the callee, choice which will be of great interest for the caller
=> checked exception because every callers MUST take a decision if the object can not be created/found, and that decision must be enforced at the compilation time: they can not use this function without having to deal with this possibility, meaning with this checked exception).
Example:
/**
* Build a folder. <br />
* Folder located under a Parent Folder (either RootFolder or an existing Folder)
* @param aFolderName name of folder
* @param aPVob project vob containing folder (MUST NOT BE NULL)
* @param aParent parent folder containing folder
* (MUST NOT BE NULL, MUST BE IN THE SAME PVOB than aPvob)
* @param aComment comment for folder (MUST NOT BE NULL)
* @return a new folder or an existing one
* @throws CCException if any problems occurs during folder creation
* @throws AssertionFailedException if aParent is not in the same PVob
* @throws NullPointerException if aPVob or aParent or aComment is null
*/
static public Folder makeOrGetFolder(final String aFolderName, final Folder aParent,
final IPVob aPVob, final Comment aComment) throws CCException {
Folder aFolderRes = null;
if (aPVob.equals(aParent.getPVob()) == false) {
// UNCHECKED EXCEPTION because the caller failed to live up
// to the documented entry criteria for this function
Assert.isLegal(false, "parent Folder must be in the same PVob than " + aPVob); }
final String ctcmd = "mkfolder " + aComment.getCommentOption() +
" -in " + getPNameFromRepoObject(aParent) + " " + aPVob.getFullName(aFolderName);
final Status st = getCleartool().executeCmd(ctcmd);
if (st.status || StringUtils.strictContains(st.message,"already exists.")) {
aFolderRes = Folder.getFolder(aFolderName, aPVob);
}
else {
// CHECKED EXCEPTION because the callee failed to respect his contract
throw new CCException.Error("Unable to make/get folder '" + aFolderName + "'");
}
return aFolderRes;
}
A: It's not just a matter of the ability to recover from the exception. What matters most, in my opinion, is whether the caller is interested in catching the exception or not.
If you write a library to be used elsewhere, or a lower-level layer in your application, ask yourself if the caller is interested in catching (knowing about) your exception. If he is not, then use an unchecked exception, so you don't burden him unnecessarily.
This is the philosophy used by many frameworks. Spring and Hibernate, in particular, come to mind - they convert known checked exceptions to unchecked exceptions precisely because checked exceptions are overused in Java. One example that I can think of is the JSONException from json.org, which is a checked exception and is mostly annoying - it should be unchecked, but the developer simply hasn't thought it through.
By the way, most of the time the caller's interest in the exception is directly correlated to the ability to recover from the exception, but that is not always the case.
A: I agree with the preference for unchecked exceptions as a rule, especially when designing an API. The caller can always choose to catch a documented, unchecked exception. You're just not needlessly forcing the caller to.
I find checked exceptions useful at the lower-level, as implementation detail. It often seems like a better flow of control mechanism than having to manage a specified error "return code". It can sometimes help see the impact of an idea for a low level code change too... declare a checked exception downstream and see who would need to adjust. This last point doesn't apply if there are a lot of generic: catch(Exception e) or throws Exception which is usually not too well-thought out anyway.
A: Here is a very simple solution to your Checked/Unchecked dilemma.
Rule 1: Think of a Unchecked Exception as a testable condition before code executes.
for example…
x.doSomething(); // the code throws a NullPointerException
where x is null...
…the code should possibly have the following…
if (x==null)
{
//do something below to make sure when x.doSomething() is executed, it won’t throw a NullPointerException.
x = new X();
}
x.doSomething();
Rule 2: Think of a Checked Exception as an un-testable condition that may occur while the code executes.
Socket s = new Socket(“google.com”, 80);
InputStream in = s.getInputStream();
OutputStream out = s.getOutputStream();
…in the example above, the URL (google.com) may be unavailable due to the DNS server being down. Even if, at the instant the connection is made, the DNS server was working and resolved the ‘google.com’ name to an IP address, at any time afterward the network could go down. You simply cannot test the network all the time before reading and writing to streams.
There are times when the code simply must execute before we can know if there is a problem. By forcing developers to write their code in such a way that they handle these situations via Checked Exceptions, I have to tip my hat to the creator of Java for inventing this concept.
In general, almost all the APIs in Java follow the 2 rules above. If you try to write to a file, the disk could fill up before completing the write. It is possible that other processes had caused the disk to become full. There is simply no way to test for this situation. For those who interact with hardware where at any time, using the hardware can fail, Checked Exceptions seem to be an elegant solution to this problem.
There is a gray area to this. In the event that many tests are needed (a mind-blowing if statement with lots of && and ||), the exception thrown will be a CheckedException simply because it’s too much of a pain to get right — you simply can’t say this problem is a programming error. If there are far fewer than 10 tests (e.g. ‘if (x == null)’), then the programmer error should be an UncheckedException.
Things get interesting when dealing with language interpreters. According to the rules above, should a Syntax Error be considered a Checked or Unchecked Exception? I would argue that if the syntax of the language can be tested before it gets executed, it should be an UncheckedException. If the language can not be tested — similar to how assembly code runs on a personal computer, then the Syntax Error should be a Checked Exception.
The 2 rules above will probably remove 90% of your concern over which to choose from. To summarize the rules, follow this pattern…
1) if the code to be execute can be tested before it’s executed for it to run correctly and if an Exception occurs — a.k.a. a programmer error, the Exception should be an UncheckedException (a subclass of RuntimeException).
2) if the code to be executed can not be tested before it’s executed for it to run correctly, the Exception should be a Checked Exception (a subclass of Exception).
A: Checked exceptions are useful for recoverable cases where you want to provide information to the caller (i.e. insufficient permissions, file not found, etc).
Unchecked exceptions are used rarely, if at all, for informing the user or programmer of serious errors or unexpected conditions during run-time. Don't throw them if you're writing code or libraries that will be used by others, as they may not be expecting your software to throw unchecked exceptions since the compiler doesn't force them to be caught or declared.
A: Whenever an exception is expected but less likely, and we can proceed even after catching it, and we cannot do anything to avoid that exception, then we can use a checked exception.
Whenever we want to do something meaningful when a particular exception happens, and when that exception is expected but not certain, then we can use a checked exception.
Whenever an exception navigates through different layers, we don't need to catch it in every layer; in that case, we can use a runtime exception or wrap the exception as an unchecked exception.
A runtime exception is used when the exception is most likely to happen, there is no way of going further, and nothing can be recovered. So in this case we can take precautions with respect to that exception. Ex: NullPointerException, ArrayIndexOutOfBoundsException. These are most likely to happen. In this scenario, we can take precautions while coding to avoid such exceptions. Otherwise we would have to write try-catch blocks everywhere.
More general exceptions can be made Unchecked, less general are checked.
A: I think we can think about exceptions in terms of several questions:
why does the exception happen? What can we do when it happens?
By mistake, a bug. For example, a method of a null object is called.
String name = null;
... // some logics
System.out.print(name.length()); // name is still null here
This kind of exception should be fixed during testing. Otherwise it breaks production, and you get a high-severity bug which needs to be fixed immediately. This kind of exception does not need to be checked.
By input from an external source: you cannot control or trust the output of the external service.
String name = ExternalService.getName(); // return null
System.out.print(name.length()); // name is null here
Here, you may need to check whether the name is null if you want to continue when it is null; otherwise, you can leave it alone and it will stop here and give the caller the runtime exception.
This kind of exception does not need to be checked.
By a runtime exception from an external source: you cannot control or trust the external service.
Here, you may need to catch all exceptions from ExternalService if you want to continue when one happens; otherwise, you can leave it alone and it will stop here and give the caller the runtime exception.
By a checked exception from an external source: you cannot control or trust the external service.
Here, you may need to catch all exceptions from ExternalService if you want to continue when one happens; otherwise, you can leave it alone and it will stop here and give the caller the runtime exception.
In this case, do we need to know what kind of exception happened in ExternalService? It depends:
*
*if you can handle some kinds of exceptions, you need to catch them and process. For others, bubble them.
*if you need to log the specific exception or report it to the user, you can catch them. For others, bubble them.
A: I think that when declaring an Application Exception it should be an Unchecked Exception, i.e., a subclass of RuntimeException.
The reason is that it will not clutter application code with try-catch blocks and throws declarations on methods. If your application uses a Java API which throws checked exceptions, those need to be handled anyway. For other cases, the application can throw unchecked exceptions. If the application caller still needs to handle an unchecked exception, it can do so.
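A minimal sketch of such an application exception, assuming nothing beyond the standard library:

public class ApplicationException extends RuntimeException {
    public ApplicationException(String message) {
        super(message);
    }

    public ApplicationException(String message, Throwable cause) {
        super(message, cause);
    }
}

Because it extends RuntimeException, callers are free to catch it at a convenient boundary (for example a top-level fault barrier) without every intermediate method having to declare it.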
A: The rule I use is: never use unchecked exceptions! (or when you don't see any way around it)
From the point of view of the developer using your library or the end-user using your library/application it really sucks to be confronted with an application that crashes due to an uncaught exception. And counting on a catch-all is no good either.
This way the end user can still be presented with an error message, instead of the application completely disappearing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "260"
} |
Q: What issues should be considered when overriding equals and hashCode in Java? What issues / pitfalls must be considered when overriding equals and hashCode?
A: A clarification about the obj.getClass() != getClass().
This statement is the result of equals() being inheritance unfriendly. The JLS (Java language specification) specifies that if A.equals(B) == true then B.equals(A) must also return true. If you omit that statement inheriting classes that override equals() (and change its behavior) will break this specification.
Consider the following example of what happens when the statement is omitted:
class A {
int field1;
A(int field1) {
this.field1 = field1;
}
public boolean equals(Object other) {
return (other != null && other instanceof A && ((A) other).field1 == field1);
}
}
class B extends A {
int field2;
B(int field1, int field2) {
super(field1);
this.field2 = field2;
}
public boolean equals(Object other) {
return (other != null && other instanceof B && ((B)other).field2 == field2 && super.equals(other));
}
}
Doing new A(1).equals(new A(1)) and new B(1,1).equals(new B(1,1)) both give out true, as they should.
This looks all very good, but look what happens if we try to use both classes:
A a = new A(1);
B b = new B(1,1);
a.equals(b) == true;
b.equals(a) == false;
Obviously, this is wrong.
If you want to ensure the symmetry condition (a equals b if b equals a) and respect the Liskov substitution principle, call super.equals(other) not only in the case of a B instance, but also check afterwards for an A instance:
if (other instanceof B)
    return (((B) other).field2 == field2 && super.equals(other));
if (other instanceof A)
    return super.equals(other);
return false;
Which will output:
a.equals(b) == true;
b.equals(a) == true;
Where, if a is not a reference to a B, then it might be a reference to class A (because you extend it), and in this case you call super.equals() too.
A: Logically we have:
a.getClass().equals(b.getClass()) && a.equals(b) ⇒ a.hashCode() == b.hashCode()
But not vice-versa!
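A concrete counter-example for the "not vice-versa" part, using a well-known collision in String.hashCode():

System.out.println("Aa".hashCode());   // 2112
System.out.println("BB".hashCode());   // 2112
System.out.println("Aa".equals("BB")); // false: same hash, not equal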
A: One gotcha I have found is where two objects contain references to each other (one example being a parent/child relationship with a convenience method on the parent to get all children).
These sorts of things are fairly common when doing Hibernate mappings for example.
If you include both ends of the relationship in your hashCode or equals tests it's possible to get into a recursive loop which ends in a StackOverflowError.
The simplest solution is to not include the getChildren collection in the methods.
A: For an inheritance-friendly implementation, check out Tal Cohen's solution, How Do I Correctly Implement the equals() Method?
Summary:
In his book Effective Java Programming Language Guide (Addison-Wesley, 2001), Joshua Bloch claims that "There is simply no way to extend an instantiable class and add an aspect while preserving the equals contract." Tal disagrees.
His solution is to implement equals() by calling another nonsymmetric blindlyEquals() both ways. blindlyEquals() is overridden by subclasses, equals() is inherited, and never overridden.
Example:
class Point {
private int x;
private int y;
protected boolean blindlyEquals(Object o) {
if (!(o instanceof Point))
return false;
Point p = (Point)o;
return (p.x == this.x && p.y == this.y);
}
public boolean equals(Object o) {
    if (!(o instanceof Point))
        return false;
    return (this.blindlyEquals(o) && ((Point) o).blindlyEquals(this));
}
}
class ColorPoint extends Point {
private Color color;
protected boolean blindlyEquals(Object o) {
if (!(o instanceof ColorPoint))
return false;
ColorPoint cp = (ColorPoint)o;
return (super.blindlyEquals(cp) &&
cp.color == this.color);
}
}
Note that equals() must work across inheritance hierarchies if the Liskov Substitution Principle is to be satisfied.
A: Still amazed that no one recommended the Guava library for this.
//Sample taken from a current working project of mine just to illustrate the idea
@Override
public int hashCode(){
return Objects.hashCode(this.getDate(), this.datePattern);
}
@Override
public boolean equals(Object obj){
if (!(obj instanceof DateAndPattern)) {
return false;
}
return Objects.equal(((DateAndPattern)obj).getDate(), this.getDate())
        && Objects.equal(((DateAndPattern)obj).getDatePattern(), this.getDatePattern());
}
A: There are some issues worth noticing if you're dealing with classes that are persisted using an Object-Relational Mapper (ORM) like Hibernate, if you didn't think this was unreasonably complicated already!
Lazy loaded objects are subclasses
If your objects are persisted using an ORM, in many cases you will be dealing with dynamic proxies to avoid loading objects too early from the data store. These proxies are implemented as subclasses of your own class. This means that this.getClass() == o.getClass() will return false. For example:
Person saved = new Person("John Doe");
Long key = dao.save(saved);
dao.flush();
Person retrieved = dao.retrieve(key);
saved.getClass().equals(retrieved.getClass()); // Will return false if Person is loaded lazily
If you're dealing with an ORM, using o instanceof Person is the only thing that will behave correctly.
Lazy loaded objects have null-fields
ORMs usually use the getters to force loading of lazy loaded objects. This means that person.name will be null if person is lazy loaded, even if person.getName() forces loading and returns "John Doe". In my experience, this crops up more often in hashCode() and equals().
If you're dealing with an ORM, make sure to always use getters, and never field references in hashCode() and equals().
Saving an object will change its state
Persistent objects often use an id field to hold the key of the object. This field will be automatically updated when an object is first saved. Don't use an id field in hashCode(). But you can use it in equals().
A pattern I often use is
if (this.getId() == null) {
return this == other;
}
else {
return this.getId().equals(other.getId());
}
But: you cannot include getId() in hashCode(). If you do, when an object is persisted, its hashCode changes. If the object is in a HashSet, you'll "never" find it again.
In my Person example, I probably would use getName() for hashCode and getId() plus getName() (just for paranoia) for equals(). It's okay if there is some risk of "collisions" for hashCode(), but never okay for equals().
hashCode() should use the non-changing subset of properties from equals()
A: There are two methods in the superclass java.lang.Object. We need to override them for a custom object.
public boolean equals(Object obj)
public int hashCode()
Equal objects must produce the same hash code as long as they are equal, however unequal objects need not produce distinct hash codes.
public class Test
{
private int num;
private String data;
public boolean equals(Object obj)
{
if(this == obj)
return true;
if((obj == null) || (obj.getClass() != this.getClass()))
return false;
// object must be Test at this point
Test test = (Test)obj;
return num == test.num &&
(data == test.data || (data != null && data.equals(test.data)));
}
public int hashCode()
{
int hash = 7;
hash = 31 * hash + num;
hash = 31 * hash + (null == data ? 0 : data.hashCode());
return hash;
}
// other methods
}
If you want to learn more, please check this link: http://www.javaranch.com/journal/2002/10/equalhash.html
This is another example,
http://java67.blogspot.com/2013/04/example-of-overriding-equals-hashcode-compareTo-java-method.html
Have Fun! @.@
A: There are a couple of ways to do your check for class equality before checking member equality, and I think both are useful in the right circumstances.
*
*Use the instanceof operator.
*Use this.getClass().equals(that.getClass()).
I use #1 in a final equals implementation, or when implementing an interface that prescribes an algorithm for equals (like the java.util collection interfaces—the right way to check is with (obj instanceof Set) or whatever interface you're implementing). It's generally a bad choice when equals can be overridden because that breaks the symmetry property.
Option #2 allows the class to be safely extended without overriding equals or breaking symmetry.
If your class is also Comparable, the equals and compareTo methods should be consistent too. Here's a template for the equals method in a Comparable class:
final class MyClass implements Comparable<MyClass>
{
…
@Override
public boolean equals(Object obj)
{
/* If compareTo and equals aren't final, we should check with getClass instead. */
if (!(obj instanceof MyClass))
return false;
return compareTo((MyClass) obj) == 0;
}
}
A: For equals, look into Secrets of Equals by Angelika Langer. I love it very much. She also has a great FAQ about Generics in Java. View her other articles here (scroll down to "Core Java"), where she also goes on with Part 2 and "mixed type comparison". Have fun reading them!
A: The theory (for the language lawyers and the mathematically inclined):
equals() (javadoc) must define an equivalence relation (it must be reflexive, symmetric, and transitive). In addition, it must be consistent (if the objects are not modified, then it must keep returning the same value). Furthermore, o.equals(null) must always return false.
hashCode() (javadoc) must also be consistent (if the object is not modified in terms of equals(), it must keep returning the same value).
The relation between the two methods is:
Whenever a.equals(b), then a.hashCode() must be same as b.hashCode().
In practice:
If you override one, then you should override the other.
Use the same set of fields that you use to compute equals() to compute hashCode().
Use the excellent helper classes EqualsBuilder and HashCodeBuilder from the Apache Commons Lang library. An example:
public class Person {
private String name;
private int age;
// ...
@Override
public int hashCode() {
return new HashCodeBuilder(17, 31). // two randomly chosen prime numbers
// if deriving: appendSuper(super.hashCode()).
append(name).
append(age).
toHashCode();
}
@Override
public boolean equals(Object obj) {
if (!(obj instanceof Person))
return false;
if (obj == this)
return true;
Person rhs = (Person) obj;
return new EqualsBuilder().
// if deriving: appendSuper(super.equals(obj)).
append(name, rhs.name).
append(age, rhs.age).
isEquals();
}
}
Also remember:
When using a hash-based Collection or Map such as HashSet, LinkedHashSet, HashMap, Hashtable, or WeakHashMap, make sure that the hashCode() of the key objects that you put into the collection never changes while the object is in the collection. The bulletproof way to ensure this is to make your keys immutable, which has also other benefits.
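A short sketch of the failure mode this guards against (hypothetical class, just for illustration):

import java.util.HashSet;
import java.util.Set;

class MutableKey {
    String name;
    MutableKey(String name) { this.name = name; }

    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).name.equals(name);
    }

    @Override public int hashCode() { return name.hashCode(); }

    public static void main(String[] args) {
        Set<MutableKey> set = new HashSet<MutableKey>();
        MutableKey key = new MutableKey("a");
        set.add(key);
        key.name = "b"; // hashCode changes while the object is in the set
        System.out.println(set.contains(key)); // false: looked up in the wrong bucket
    }
}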
A: The equals() method is used to determine the equality of two objects, just as the int value 10 is always equal to 10. But this equals() method is about the equality of two objects. When we say object, it will have properties. To decide about equality, those properties are considered. It is not necessary that all properties be taken into account to determine equality; with respect to the class definition and context, it can be decided. Then the equals() method can be overridden.
We should always override the hashCode() method whenever we override the equals() method. If not, what will happen? If we use hashtables in our application, they will not behave as expected. As the hashCode is used in determining the equality of the values stored, it will not return the right corresponding value for a key.
The default implementation of the hashCode() method in the Object class uses the internal address of the object, converts it into an integer, and returns it.
public class Tiger {
private String color;
private String stripePattern;
private int height;
@Override
public boolean equals(Object object) {
boolean result = false;
if (object == null || object.getClass() != getClass()) {
result = false;
} else {
Tiger tiger = (Tiger) object;
if (this.color.equals(tiger.getColor())
&& this.stripePattern.equals(tiger.getStripePattern())) {
result = true;
}
}
return result;
}
// just omitted null checks
@Override
public int hashCode() {
int hash = 3;
hash = 7 * hash + this.color.hashCode();
hash = 7 * hash + this.stripePattern.hashCode();
return hash;
}
public static void main(String args[]) {
Tiger bengalTiger1 = new Tiger("Yellow", "Dense", 3);
Tiger bengalTiger2 = new Tiger("Yellow", "Dense", 2);
Tiger siberianTiger = new Tiger("White", "Sparse", 4);
System.out.println("bengalTiger1 and bengalTiger2: "
+ bengalTiger1.equals(bengalTiger2));
System.out.println("bengalTiger1 and siberianTiger: "
+ bengalTiger1.equals(siberianTiger));
System.out.println("bengalTiger1 hashCode: " + bengalTiger1.hashCode());
System.out.println("bengalTiger2 hashCode: " + bengalTiger2.hashCode());
System.out.println("siberianTiger hashCode: "
+ siberianTiger.hashCode());
}
public String getColor() {
return color;
}
public String getStripePattern() {
return stripePattern;
}
public Tiger(String color, String stripePattern, int height) {
this.color = color;
this.stripePattern = stripePattern;
this.height = height;
}
}
Example Code Output:
bengalTiger1 and bengalTiger2: true
bengalTiger1 and siberianTiger: false
bengalTiger1 hashCode: 1398212510
bengalTiger2 hashCode: 1398212510
siberianTiger hashCode: -1227465966
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "617"
} |
Q: How can I make "jconsole" work with Websphere 6.1? I've deployed some Managed Beans on WebSphere 6.1 and I've managed to invoke them through a standalone client, but when I try to use the application "jconsole" distributed with the standard JDK can can't make it works.
Has anyone achieved to connect the jconsole with WAS 6.1?
IBM WebSphere 6.1 is supposed to support the JSR 160 Java Management Extensions (JMX) Remote API. Furthermore, it uses the MX4J implementation (http://mx4j.sourceforge.net). But I can't make it work with either "jconsole" or "MC4J".
I have the Classpath and the JAVA_HOME correctly set, so the issue is not there.
A: WebSphere's support for JMX is crap. Particularly, if you need to connect to any secured JMX beans. Here's an interesting tidbit, their own implementation of jConsole will not connect to their own JVM. I have had a PMR open with IBM for over a year to fix this issue, and have gotten nothing but the runaround. They clearly don't want to fix this issue.
The only way I have been able to invoke remote secured JMX beans hosted on WebSphere has been to implement a client using the "WebSphere application client". This is basically a stripped down app server used for stuff like this.
Open a PMR with IBM. Perhaps if more people report this issue, they will actually fix it.
Update: You can run your application as a WebSphere Application Client in RAD. Open the run menu, then choose "Run...". In the dialog that opens, towards the bottom on the left hand side, you will see "WebSphere v6.1 Application Client". I'm not sure how to start an Application Client outside of RAD.
A: IT WORKS !
http://issues.apache.org/jira/browse/GERONIMO-4534;jsessionid=FB20DD5973F01DD2D470FB9A1B45D209?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel
1) Change the config.xml and start the server.
-see here how to change config.xml: http://publib.boulder.ibm.com/wasce/V2.1.0/en/working-with-jconsole.html
2) start the jconsole with : jconsole -J-Djavax.net.ssl.keyStore=%GERONIMO_HOME%\var\security\keystores\geronimo-default -J-Djavax.net.ssl.keyStorePassword=secret -J-Djavax.net.ssl.trustStore=%GERONIMO_HOME%\var\security\keystores\geronimo-default -J-Djavax.net.ssl.trustStorePassword=secret -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%JAVA_HOME%\lib\tools.jar;%GERONIMO_HOME%\repository\org\apache\geronimo\framework\geronimo-kernel\2.1.4\geronimo-kernel-2.1.4.jar
[or your version of geronimo-kernel jar]
3) in the jconsole interface->advanced, input:
JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/JMXSecureConnector
user name: system
password: manager
4) click the connect button.
A: If you want the WebSphere MBeans this one works for me:
The key is to configure the classpath and the security properly.
in one line:
jconsole -J-Dwas.install.root=C:/was61 -J-Djava.ext.dirs=C:/was61/plugins;C:/was61/plugins/com.ibm.ws.security.crypto_6.1.0;C:/was61/lib;C:/was61/java/jre/lib/ext -J-Dcom.ibm.SSL.ConfigURL="file:../../properties/ssl.client.props" -J-Dcom.ibm.CORBA.ConfigURL="file:../../properties/sas.client.props" service:jmx:iiop://host:port/jndi/JMXConnector
where port = bootstrap port ex: (2809)
Be careful when setting the sas and the ssl props.
Robert
A: I have successfully connected to ActiveMQ and ServiceMix using the JConsole. Does WAS 6.1 use Java Management Extension (JMX) technology? JMX is required for JConsole.
If your path is set correctly it should work fine. On Windows you go to System Properties -> Advanced Tab -> Environment Variables. Have your JAVA_HOME System variable set to the path of your JDK or JRE and your Path variable with %JAVA_HOME%/bin added somewhere in there. Then all you need to do is go to Start->Run->JConsole. Select the correct Process Name and you're done.
Where are you having problems at? I hope this helps.
Edit:
Here is the Java Doc's on JConsole.
A: Hmm... I know that WebSphere is kind of hard to configure. That's part of the reason we used ServiceMix for our ESB. Maybe it's not enabled by default in WebSphere and you would have to turn it on in the config somewhere.
A: Websphere 6.1 does not support JConsole for some reason, even though it fully implements the JMX specs. This seems to be a weak area at the moment. Your best bet is to look at the Admin client to implement your own console.
A: You all seem to be incorrect. I am running Websphere 6.1.041, using JDK 1.5, and I just started up JConsole and used the "simple connect" tab to connect to localhost with port=0 and without a username and password, and it works fine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: 'Reliable' SMS Unicode & GSM Encoding in PHP (Updated a little)
I'm not very experienced with internationalization using PHP, it must be said, and a deal of searching didn't really provide the answers I was looking for.
I'm in need of working out a reliable way to convert only 'relevant' text to Unicode to send in an SMS message, using PHP (just temporarily, whilst service is rewritten using C#) - obviously, messages sent at the moment are sent as plain text.
I could conceivably convert everything to the Unicode charset (as opposed to using the standard GSM charset), but that would mean that all messages would be limited to 70 characters (instead of 160).
So, I guess my real question is: what is the most reliable way to detect the requirement for a message to be Unicode-encoded, so I only have to do it when it's absolutely necessary (e.g. for non-Latin-language characters)?
Added Info:
Okay, so I've spent the morning working on this, and I'm still no further on than when I started (certainly due to my complete lack of competency when it comes to charset conversion). So here's the revised scenario:
I have text SMS messages coming from an external source, this external source provides the responses to me in plain text + Unicode slash-escaped characters. E.g. the 'displayed' text:
Let's test öäü éàè אין תמיכה בעברית
Returns:
Let's test \u00f6\u00e4\u00fc \u00e9\u00e0\u00e8 \u05d0\u05d9\u05df \u05ea\u05de\u05d9\u05db\u05d4 \u05d1\u05e2\u05d1\u05e8\u05d9\u05ea
Now, I can send on to my SMS provider in plaintext, GSM 03.38 or Unicode. Obviously, sending the above as plaintext results in a lot of missing characters (they're replaced by spaces by my provider) - I need to adapt the encoding to whatever content there is. What I want to do with this is the following:
*
*If all text is within the GSM 03.38 codepage, send it as-is. (All but the Hebrew characters above fit into this category, but need to be converted.)
*Otherwise, convert it to Unicode, and send it over multiple messages (as the Unicode limit is 70 chars not 160 for an SMS).
As I said above, I'm stumped on doing this in PHP (C# wasn't much of an issue due to some simple conversion functions built-in), but it's quite probable I'm just missing the obvious, here. I couldn't find any pre-made conversion classes for 7-bit encoding in PHP, either - and my attempts to convert the string myself and send it on seemed futile.
Any help would be greatly appreciated.
A: To deal with it conceptually before getting into mechanisms, and apologies if any of this is obvious, a string can be defined as a sequence of Unicode characters, Unicode being a database that gives an id number known as a code point to every character you might need to work with. GSM-338 contains a subset of the Unicode characters, so what you're doing is extracting a set of codepoints from your string, and checking to see if that set is contained in GSM-338.
// second column of http://unicode.org/Public/MAPPINGS/ETSI/GSM0338.TXT
$gsm338_codepoints = array(0x0040, 0x0000, ..., 0x00fc, 0x00e0); // full list elided
$can_use_gsm338 = true;
foreach (codepoints($mystring) as $codepoint) {
    if (!in_array($codepoint, $gsm338_codepoints)) {
        $can_use_gsm338 = false;
        break;
    }
}
That leaves the definition of the function codepoints($string), which isn't built in to PHP. PHP understands a string to be a sequence of bytes rather than a sequence of Unicode characters. The best way of bridging the gap is to get your strings into UTF8 as quickly as you can and keep them in UTF8 as long as you can - you'll have to use other encodings when dealing with external systems, but isolate the conversion to the interface to that system and deal only with utf8 internally.
The functions you need to convert between php strings in utf8 and sequences of codepoints can be found at http://hsivonen.iki.fi/php-utf8/ , so that's your codepoints() function.
If you're taking data from an external source that gives you Unicode slash-escaped characters ("Let's test \u00f6\u00e4\u00fc..."), that string escape format should be converted to utf8. I don't know offhand of a function to do this, if one can't be found, it's a matter of string/regex processing + the use of the hsivonen.iki.fi functions, for example when you hit \u00f6, replace it with the utf8 representation of the codepoint 0xf6.
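For what it's worth, a minimal sketch of both pieces (my own, not from the linked article): json_decode() understands \uXXXX escapes, and on PHP 7.2+ mb_ord() maps a UTF-8 character to its code point, which gives a simple codepoints() implementation.
function unicode_escapes_to_utf8($text)
{
    // escape bare double quotes so the wrapped string stays valid JSON;
    // the existing \uXXXX sequences must pass through untouched
    return json_decode('"' . str_replace('"', '\\"', $text) . '"');
}
function codepoints($utf8)
{
    preg_match_all('/./us', $utf8, $m);  // split into UTF-8 characters
    return array_map('mb_ord', $m[0]);   // one code point per character
}
$utf8 = unicode_escapes_to_utf8('Let\'s test \u00f6\u00e4\u00fc');
var_dump(codepoints($utf8));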
A: Although this is an old thread I recently had to solve a very similar problem and wanted to post my answer. The PHP code is somewhat simple. It starts with a painstakingly large array of GSM valid character codes, then simply checks if the current character is in that array using the ord($string) function, which returns the ASCII value of the first character of the string passed. Here is the code I use to validate if a string is GSM-worthy.
$valid_gsm_keycodes = Array(
0x0040, 0x0394, 0x0020, 0x0030, 0x00a1, 0x0050, 0x00bf, 0x0070,
0x00a3, 0x005f, 0x0021, 0x0031, 0x0041, 0x0051, 0x0061, 0x0071,
0x0024, 0x03a6, 0x0022, 0x0032, 0x0042, 0x0052, 0x0062, 0x0072,
0x00a5, 0x0393, 0x0023, 0x0033, 0x0043, 0x0053, 0x0063, 0x0073,
0x00e8, 0x039b, 0x00a4, 0x0034, 0x0044, 0x0054, 0x0064, 0x0074,
0x00e9, 0x03a9, 0x0025, 0x0035, 0x0045, 0x0055, 0x0065, 0x0075,
0x00f9, 0x03a0, 0x0026, 0x0036, 0x0046, 0x0056, 0x0066, 0x0076,
0x00ec, 0x03a8, 0x0027, 0x0037, 0x0047, 0x0057, 0x0067, 0x0077,
0x00f2, 0x03a3, 0x0028, 0x0038, 0x0048, 0x0058, 0x0068, 0x0078,
0x00c7, 0x0398, 0x0029, 0x0039, 0x0049, 0x0059, 0x0069, 0x0079,
0x000a, 0x039e, 0x002a, 0x003a, 0x004a, 0x005a, 0x006a, 0x007a,
0x00d8, 0x001b, 0x002b, 0x003b, 0x004b, 0x00c4, 0x006b, 0x00e4,
0x00f8, 0x00c6, 0x002c, 0x003c, 0x004c, 0x00d6, 0x006c, 0x00f6,
0x000d, 0x00e6, 0x002d, 0x003d, 0x004d, 0x00d1, 0x006d, 0x00f1,
0x00c5, 0x00df, 0x002e, 0x003e, 0x004e, 0x00dc, 0x006e, 0x00fc,
0x00e5, 0x00c9, 0x002f, 0x003f, 0x004f, 0x00a7, 0x006f, 0x00e0 );
for ($i = 0; $i < strlen($string); $i++) {
    // compare the character's code (ord) against the table of integer keycodes;
    // note this works byte-by-byte, so it assumes a single-byte input string
    if (!in_array(ord($string[$i]), $valid_gsm_keycodes)) return false;
}
return true;
A: function is_gsm0338( $utf8_string ) {
$gsm0338 = array(
'@','Δ',' ','0','¡','P','¿','p',
'£','_','!','1','A','Q','a','q',
'$','Φ','"','2','B','R','b','r',
'¥','Γ','#','3','C','S','c','s',
'è','Λ','¤','4','D','T','d','t',
'é','Ω','%','5','E','U','e','u',
'ù','Π','&','6','F','V','f','v',
'ì','Ψ','\'','7','G','W','g','w',
'ò','Σ','(','8','H','X','h','x',
'Ç','Θ',')','9','I','Y','i','y',
"\n",'Ξ','*',':','J','Z','j','z',
'Ø',"\x1B",'+',';','K','Ä','k','ä',
'ø','Æ',',','<','L','Ö','l','ö',
"\r",'æ','-','=','M','Ñ','m','ñ',
'Å','ß','.','>','N','Ü','n','ü',
'å','É','/','?','O','§','o','à'
);
$len = mb_strlen( $utf8_string, 'UTF-8');
    for ($i = 0; $i < $len; $i++) {
        if (!in_array(mb_substr($utf8_string, $i, 1, 'UTF-8'), $gsm0338)) {
            return false;
        }
    }
    return true;
}
A: I know this isnt php code, but I think it might help anyway. This is how I do it in an app I wrote to detect if its possible to send as GSM 03.38 (you could do something similar for plain text). It has two translation tables, one for normal GSM and one for the extended. And then a function that loops through all characters checking if it can be converted.
#define UCS2_TO_GSM_LOOKUP_TABLE_SIZE 0x100
#define NON_GSM 0x80
#define UCS2_GCL_RANGE 24
#define UCS2_GREEK_CAPITAL_LETTER_ALPHA 0x0391
#define EXTEND 0x001B
// note that the ` character is mapped to ' so that all characters that can be typed on
// a standard north american keyboard can be converted to the GSM default character set
static unsigned char Ucs2ToGsm[UCS2_TO_GSM_LOOKUP_TABLE_SIZE] =
{ /*+0x0 +0x1 +0x2 +0x3 +0x4 +0x5 +0x6 +0x7*/
/*0x00*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x08*/ NON_GSM, NON_GSM, 0x0a, NON_GSM, NON_GSM, 0x0d, NON_GSM, NON_GSM,
/*0x10*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x18*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x20*/ 0x20, 0x21, 0x22, 0x23, 0x02, 0x25, 0x26, 0x27,
/*0x28*/ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
/*0x30*/ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
/*0x38*/ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
/*0x40*/ 0x00, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
/*0x48*/ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
/*0x50*/ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
/*0x58*/ 0x58, 0x59, 0x5a, EXTEND, EXTEND, EXTEND, EXTEND, 0x11,
/*0x60*/ 0x27, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
/*0x68*/ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f,
/*0x70*/ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
/*0x78*/ 0x78, 0x79, 0x7a, EXTEND, EXTEND, EXTEND, EXTEND, NON_GSM,
/*0x80*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x88*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x90*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0x98*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0xa0*/ NON_GSM, 0x40, NON_GSM, 0x01, 0x24, 0x03, NON_GSM, 0x5f,
/*0xa8*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0xb0*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM,
/*0xb8*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, 0x60,
/*0xc0*/ NON_GSM, NON_GSM, NON_GSM, NON_GSM, 0x5b, 0x0e, 0x1c, 0x09,
/*0xc8*/ NON_GSM, 0x1f, NON_GSM, NON_GSM, NON_GSM, NON_GSM, NON_GSM, 0x60,
/*0xd0*/ NON_GSM, 0x5d, NON_GSM, NON_GSM, NON_GSM, NON_GSM, 0x5c, NON_GSM,
/*0xd8*/ 0x0b, NON_GSM, NON_GSM, NON_GSM, 0x5e, NON_GSM, NON_GSM, 0x1e,
/*0xe0*/ 0x7f, NON_GSM, NON_GSM, NON_GSM, 0x7b, 0x0f, 0x1d, NON_GSM,
/*0xe8*/ 0x04, 0x05, NON_GSM, NON_GSM, 0x07, NON_GSM, NON_GSM, NON_GSM,
/*0xf0*/ NON_GSM, 0x7d, 0x08, NON_GSM, NON_GSM, NON_GSM, 0x7c, NON_GSM,
/*0xf8*/ 0x0c, 0x06, NON_GSM, NON_GSM, 0x7e, NON_GSM, NON_GSM, NON_GSM
};
static unsigned char Ucs2GclToGsm[UCS2_GCL_RANGE + 1] =
{
/*0x0391*/ 0x41, // Alpha A
/*0x0392*/ 0x42, // Beta B
/*0x0393*/ 0x13, // Gamma
/*0x0394*/ 0x10, // Delta
/*0x0395*/ 0x45, // Epsilon E
/*0x0396*/ 0x5A, // Zeta Z
/*0x0397*/ 0x48, // Eta H
/*0x0398*/ 0x19, // Theta
/*0x0399*/ 0x49, // Iota I
/*0x039a*/ 0x4B, // Kappa K
/*0x039b*/ 0x14, // Lambda
/*0x039c*/ 0x4D, // Mu M
/*0x039d*/ 0x4E, // Nu N
/*0x039e*/ 0x1A, // Xi
/*0x039f*/ 0x4F, // Omicron O
/*0x03a0*/ 0X16, // Pi
/*0x03a1*/ 0x50, // Rho P
/*0x03a2*/ NON_GSM,
/*0x03a3*/ 0x18, // Sigma
/*0x03a4*/ 0x54, // Tau T
/*0x03a5*/ 0x59, // Upsilon Y
/*0x03a6*/ 0x12, // Phi
/*0x03a7*/ 0x58, // Chi X
/*0x03a8*/ 0x17, // Psi
/*0x03a9*/ 0x15 // Omega
};
bool Gsm0338Encoding::IsNotGSM( wchar_t szUnicodeChar )
{
bool result = true;
if( szUnicodeChar < UCS2_TO_GSM_LOOKUP_TABLE_SIZE )
{
result = ( Ucs2ToGsm[szUnicodeChar] == NON_GSM );
}
else if( (szUnicodeChar >= UCS2_GREEK_CAPITAL_LETTER_ALPHA) &&
(szUnicodeChar <= (UCS2_GREEK_CAPITAL_LETTER_ALPHA + UCS2_GCL_RANGE)) )
{
result = ( Ucs2GclToGsm[szUnicodeChar - UCS2_GREEK_CAPITAL_LETTER_ALPHA] == NON_GSM );
}
else if( szUnicodeChar == 0x20AC ) // €
{
result = false;
}
return result;
}
bool Gsm0338Encoding::IsGSM( const std::wstring& str )
{
bool result = true;
if( std::find_if( str.begin(), str.end(), IsNotGSM ) != str.end() )
{
result = false;
}
return result;
}
A: PHP6 will have better unicode support but there are a few functions you can use.
My first thought was mb_convert_encoding but as you said this will shorten messages to 70 chars - so perhaps you can use this in conjunction with mb_detect_encoding?
See: Multibyte Functions
A: preg_match('/^[\x0A\x0C\x0D\x20-\x5F\x61-\x7E\xA0\xA1\xA3-\xA5\xA7'.
'\xBF\xC4-\xC6\xC9\xD1\xD6\xD8\xDC\xDF\xE0\xE4-\xE9\xEC\xF1'.
'\xF2\xF6\xF8\xF9\xFC'.
json_decode('"\u0393\u0394\u0398\u039B\u039E\u03A0\u03A3\u03A6\u03A8\u03A9\u20AC"').
']*$/u', $text)
or
preg_match('/^[\x0A\x0C\x0D\x20-\x5F\x61-\x7E\xA0\xA1\xA3-\xA5\xA7\xBF\xC4-\xC6\xC9\xD1\xD6\xD8\xDC\xDF\xE0\xE4-\xE9\xEC\xF1\xF2\xF6\xF8\xF9\xFCΓΔΘΛΞΠΣΦΨΩ€]*$/u', $text)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Is this code an abuse of STL's find_if? Let's say I have a list of server names stored in a vector, and I would like to contact them one at a time until one has successfully responded. I was thinking about using STL's find_if algorithm in the following way:
find_if(serverNames.begin(), serverNames.end(), ContactServer());
Where ContactServer is a predicate function object.
On one hand, there's a problem since the predicate will not always return the same result for the same server name (because of server downtime, network problems, etc...). However, the same result will be returned regardless of which copy of the predicate is used (i.e. the predicate has no real state), so the original problem with state-keeping predicates is not relevant in this case.
What do you say?
A: I think I would go for it.
The only thing I would worry about is the readability (and therefore maintainability) of it. To me, it reads something like "Find the first server I can contact", which makes perfect sense.
You might want to rename ContactServer to indicate that it is a predicate; CanContactServer? (But then people would complain about hidden side effects. Hmm...)
A: This is exactly what the STL algorithms are for. This is not an abuse at all. Furthermore, it is very readable. Redirect to null anyone who tells you otherwise.
A: In my opinion, this use of std::find_if is a bit misleading. When I read this piece of code, I don't expect any side effects to occur, I just expect a server name to be found. The fact that the result of find_if is discarded would also make me wonder if the code is really correct. Perhaps another name for the predicate would make the intent clearer, but I think the problem is more fundamental.
For most people, find_if is a querying algorithm, not a modifying algorithm. Even though you are not actually modifying the values iterated upon, you are modifying the global state of your application (in this case, you are even possibly modifying the state of distant servers).
In such a case, I would probably stick with a manual loop, especially now that C++11 introduced range-based for loops:
ContactServer contact;
for (std::string const & name : serverNames)
{
    if (contact(name)) break;
}
Another solution would be to encapsulate this in a function with a name conveying more clearly the intent, such as apply_until or something like that:
template <typename InputIterator, typename Function>
void apply_until(InputIterator first, InputIterator last, Function f)
{
std::find_if(first, last, f);
// or
// while (first != last)
// {
// if (f(*first)) break;
//
// ++first;
// }
}
But perhaps I'm being over-puristic :)!
A: Isn't that what find_if is for?
Note, though, that it will find all the servers if you iterate over the iterator - but you aren't going to do that (according to the OP).
A:
However, the same result will be returned regardless of which copy of the predicate is used (i.e. the predicate has no real state), so the original problem with state-keeping predicates is not relevant in this case.
So where's the problem? Function objects don't necessarily have to be stateful. It's actually best practice to use function objects instead of function pointers in such situations because compilers are better at inlining them. In your case, the instantiation and call of the function object may have no overhead at all since find_if is a function template and the compiler will generate its own version for your functor.
On the other hand, using a function pointer would incur an indirection.
A: In the upcoming version of the C++ standard, I could not find any explicit restriction with respect to the fact that a predicate should always return the same value for the same input. I looked into section 25 (paragraphs 7 to 10).
Methods returning values that may change from one call to another, as in your case, should be volatile (from 7.1.6.1/11: "volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation").
Predicates "shall not apply any non-constant function through the dereferenced iterators" (paragraphs 7 and 8). I take this to mean that they are not required to use non-volatile methods, and that your use-case is thus ok with respect to the standard.
If the wording was "predicates should apply const functions..." or something like that, then I would have concluded that 'const volatile' functions were not ok. But this is not the case.
A: std::for_each might be a better candidate for this.
1) After being copied in, the same function object is used on each element, and after all elements are processed a copy of the (potentially updated) function object is returned to the caller.
2) It would improve the readability of the call in my opinion as well.
The function object and the for_each call would look something like:
struct AttemptServerContact {
bool server_contacted;
std::string active_server; // name of server contacted
AttemptServerContact() : server_contacted(false) {}
void operator()(Server& s) {
if (!server_contacted) {
//attempt to contact s
//if successful, set server_contacted and active_server
}
}
};
AttemptServerContact func;
func = std::for_each(serverNames.begin(), serverNames.end(), func);
//func.server_contacted and func.active_server contain server information.
A: find_if appears to be the right choice here. The predicate isn't stateful in this situation.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can I add (simple) tracing in C#? I want to introduce some tracing to a C# application I am writing. Sadly, I can never really remember how it works and would like a tutorial with reference qualities to check up on every now and then. It should include:
*
*App.config / Web.config stuff to add for registering TraceListeners
*how to set it up in the calling application
Do you know the über tutorial that we should link to?
Glenn Slaven pointed me in the right direction. Add this to your App.config/Web.config inside <configuration/>:
<system.diagnostics>
<trace autoflush="true">
<listeners>
<add type="System.Diagnostics.TextWriterTraceListener" name="TextWriter"
initializeData="trace.log" />
</listeners>
</trace>
</system.diagnostics>
This will add a TextWriterTraceListener that will catch everything you send to with Trace.WriteLine, etc.
@DanEsparza pointed out that you should use Trace.TraceInformation, Trace.TraceWarning and Trace.TraceError instead of Trace.WriteLine, as they allow you to format messages the same way as string.Format.
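For example (the variable names here are just placeholders):
Trace.TraceInformation("Loaded {0} records in {1} ms", recordCount, elapsedMs);
Trace.TraceWarning("Cache miss for key '{0}'", cacheKey);
Trace.TraceError("Failed to connect to {0}: {1}", serverName, ex.Message);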
Tip: If you don't add any listeners, then you can still see the trace output with the Sysinternals program DebugView (Dbgview.exe).
A: I followed around five different answers as well as all the blog posts in the previous answers and still had problems. I was trying to add a listener to some existing code that was tracing using the TraceSource.TraceEvent(TraceEventType, Int32, String) method where the TraceSource object was initialised with a string making it a 'named source'.
For me the issue was not creating a valid combination of source and switch elements to target this source. Here is an example that will log to a file called tracelog.txt. For the following code:
TraceSource source = new TraceSource("sourceName");
source.TraceEvent(TraceEventType.Verbose, 1, "Trace message");
I successfully managed to log with the following diagnostics configuration:
<system.diagnostics>
<sources>
<source name="sourceName" switchName="switchName">
<listeners>
<add
name="textWriterTraceListener"
type="System.Diagnostics.TextWriterTraceListener"
initializeData="tracelog.txt" />
</listeners>
</source>
</sources>
<switches>
<add name="switchName" value="Verbose" />
</switches>
</system.diagnostics>
A: DotNetCoders has a starter article on it: http://www.dotnetcoders.com/web/Articles/ShowArticle.aspx?article=50. They talk about how to set up the switches in the configuration file and how to write the code, but it is pretty old (2002).
There's another article on CodeProject: A Treatise on Using Debug and Trace classes, including Exception Handling, but it's the same age.
CodeGuru has another article on custom TraceListeners: Implementing a Custom TraceListener
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
} |
Q: Unix shell file copy flattening folder structure On the UNIX bash shell (specifically Mac OS X Leopard) what would be the simplest way to copy every file having a specific extension from a folder hierarchy (including subdirectories) to the same destination folder (without subfolders)?
Obviously there is the problem of having duplicates in the source hierarchy. I wouldn't mind if they are overwritten.
Example: I need to copy every .txt file in the following hierarchy
/foo/a.txt
/foo/x.jpg
/foo/bar/a.txt
/foo/bar/c.jpg
/foo/bar/b.txt
To a folder named 'dest' and get:
/dest/a.txt
/dest/b.txt
A: In bash:
find /foo -iname '*.txt' -exec cp \{\} /dest/ \;
find will find all the files under the path /foo matching the wildcard *.txt, case insensitively (That's what -iname means). For each file, find will execute cp {} /dest/, with the found file in place of {}.
A: If you really want to run just one command, why not cons one up and run it? Like so:
$ find /foo -name '*.txt' | xargs echo | sed -e 's/^/cp /' -e 's|$| /dest|' | bash -sx
But that won't matter too much performance-wise unless you do this a lot or have a ton of files. Be careful of name collisions, however. I noticed in testing that GNU cp at least warns of collisions:
cp: will not overwrite just-created `/dest/tubguide.tex' with `./texmf/tex/plain/tugboat/tubguide.tex'
I think the cleanest is:
$ find /foo -name '*.txt' | xargs -i cp {} /dest
Less syntax to remember than the -exec option.
A: The only problem with Magnus' solution is that it forks off a new "cp" process for every file, which is not terribly efficient especially if there is a large number of files.
On Linux (or other systems with GNU coreutils) you can do:
find . -name "*.xml" -print0 | xargs -0 cp -t a
(The -0 allows it to work when your filenames have weird characters -- like spaces -- in them.)
Unfortunately I think Macs come with BSD-style tools. Anyone know a "standard" equivalent to the "-t" switch?
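One candidate (from the BSD xargs man page; not verified on every system) is the -J replacement-string flag:
find /foo -name '*.txt' -print0 | xargs -0 -J % cp % /dest/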
A: The answers above don't allow for name collisions as the asker didn't mind files being over-written.
I do mind files being over-written so came up with a different approach. Replacing each / in the path with - keep the hierarchy in the names, and puts all the files in one flat folder.
We use find to get the list of all files, then awk to create a mv command with the original filename and the modified filename then pass those to bash to be executed.
find ./from -type f | awk '{ str=$0; sub(/\.\//, "", str); gsub(/\//, "-", str); print "mv " $0 " ./to/" str }' | bash
where ./from and ./to are directories to mv from and to.
A: As far as the man page for cp on a FreeBSD box goes, there's no need for a -t switch. cp will assume the last argument on the command line to be the target directory if more than two names are passed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: How to enable the TRACE macro in Release mode? The TRACE macro can be used to output diagnostic messages to the debugger when the code is compiled in Debug mode. I need the same messages while in Release mode. Is there a way to achieve this?
(Please do not waste your time discussing why I should not be using TRACE in Release mode :-)
A: Actually, the TRACE macro is a lot more flexible than OutputDebugString. It takes a printf() style format string and parameter list whereas OutputDebugString just takes a single string. In order to implement the full TRACE functionality in release mode you need to do something like this:
void trace(const char* format, ...)
{
    char buffer[1000];
    va_list argptr;
    va_start(argptr, format);
    wvsprintf(buffer, format, argptr);
    va_end(argptr);
    OutputDebugString(buffer);
}
A: A few years back I needed similar functionality so I cobbled together the following code. Just save it into a file, e.g. rtrace.h, include it at the end of your stdafx.h, and add _RTRACE to the release mode Preprocessor defines.
Maybe someone will find a use for it :-)
John
#pragma once
//------------------------------------------------------------------------------------------------
//
// Author: John Cullen
// Date: 2006/04/12
// Based On: MSDN examples for variable argument lists and ATL implementation of TRACE.
//
// Description: Allows the use of TRACE statements in RELEASE builds, by overriding the
// TRACE macro definition and redefining in terms of the RTRACE class and overloaded
// operator (). Trace output is generated by calling OutputDebugString() directly.
//
//
// Usage: Add to the end of stdafx.h and add _RTRACE to the preprocessor defines (typically
// for RELEASE builds, although the flag will be ignored for DEBUG builds.
//
//------------------------------------------------------------------------------------------------
#ifdef _DEBUG
// NL defined as a shortcut for writing FTRACE(_T("\n")); for example, instead write FTRACE(NL);
#define NL _T("\n")
#define LTRACE TRACE(_T("%s(%d): "), __FILE__, __LINE__); TRACE
#define FTRACE TRACE(_T("%s(%d): %s: "), __FILE__, __LINE__, __FUNCTION__); TRACE
#else // _DEBUG
#ifdef _RTRACE
#undef TRACE
#define TRACE RTRACE()
#define LTRACE RTRACE(__FILE__, __LINE__)
#define FTRACE RTRACE(__FILE__, __LINE__, __FUNCTION__)
#define NL _T("\n")
class RTRACE
{
public:
// default constructor, no params
RTRACE(void) : m_pszFileName( NULL ), m_nLineNo( 0 ), m_pszFuncName( NULL ) {};
// overloaded constructor, filename and lineno
RTRACE(PCTSTR const pszFileName, int nLineNo) :
m_pszFileName(pszFileName), m_nLineNo(nLineNo), m_pszFuncName(NULL) {};
// overloaded constructor, filename, lineno, and function name
RTRACE(PCTSTR const pszFileName, int nLineNo, PCTSTR const pszFuncName) :
m_pszFileName(pszFileName), m_nLineNo(nLineNo), m_pszFuncName(pszFuncName) {};
virtual ~RTRACE(void) {};
// no arguments passed, e.g. RTRACE()()
void operator()() const
{
// no arguments passed, just dump the file, line and function if requested
OutputFileAndLine();
OutputFunction();
}
// format string and parameters passed, e.g. RTRACE()(_T("%s\n"), someStringVar)
void operator()(const PTCHAR pszFmt, ...) const
{
// dump the file, line and function if requested, followed by the TRACE arguments
OutputFileAndLine();
OutputFunction();
// perform the standard TRACE output processing
va_list ptr; va_start( ptr, pszFmt );
INT len = _vsctprintf( pszFmt, ptr ) + 1;
TCHAR* buffer = (PTCHAR) malloc( len * sizeof(TCHAR) );
_vstprintf( buffer, pszFmt, ptr );
OutputDebugString(buffer);
free( buffer );
}
private:
// output the current file and line
inline void OutputFileAndLine() const
{
if (m_pszFileName && _tcslen(m_pszFileName) > 0)
{
INT len = _sctprintf( _T("%s(%d): "), m_pszFileName, m_nLineNo ) + 1;
PTCHAR buffer = (PTCHAR) malloc( len * sizeof(TCHAR) );
_stprintf( buffer, _T("%s(%d): "), m_pszFileName, m_nLineNo );
OutputDebugString( buffer );
free( buffer );
}
}
// output the current function name
inline void OutputFunction() const
{
if (m_pszFuncName && _tcslen(m_pszFuncName) > 0)
{
INT len = _sctprintf( _T("%s: "), m_pszFuncName ) + 1;
PTCHAR buffer = (PTCHAR) malloc( len * sizeof(TCHAR) );
_stprintf( buffer, _T("%s: "), m_pszFuncName );
OutputDebugString( buffer );
free( buffer );
}
}
private:
PCTSTR const m_pszFuncName;
PCTSTR const m_pszFileName;
const int m_nLineNo;
};
#endif // _RTRACE
#endif // _DEBUG
A: TRACE is just a macro for OutputDebugString. So you can easily just make your own TRACE macro (or call it something else) that will call OutputDebugString.
A: It's most simply code that I had see
#undef ATLTRACE
#undef ATLTRACE2
#define ATLTRACE2 CAtlTrace(__FILE__, __LINE__, __FUNCTION__)
#define ATLTRACE ATLTRACE2
see
http://alax.info/blog/1351
A: In MFC, TRACE is defined as ATLTRACE. And in release mode that is defined as:
#define ATLTRACE __noop
So, using the out-the-box TRACE from MFC, you won't actually be able to read any TRACE text, because it won't even be written out. You could write your own TRACE function instead, then re-define the TRACE macro. You could do something like this:
void MyTrace(const CString& text)
{
::OutputDebugString(text); // Outputs to console, same as regular TRACE
// TODO: Do whatever output you need here. Write to event log / write to text file / write to pipe etc.
}
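To keep the printf-style formatting that TRACE callers expect, one possible wrapper (a sketch, not part of MFC) is:
#ifndef _DEBUG
inline void MyTraceFmt(LPCTSTR pszFormat, ...)
{
    CString text;
    va_list args;
    va_start(args, pszFormat);
    text.FormatV(pszFormat, args); // printf-style formatting, like TRACE
    va_end(args);
    MyTrace(text);
}
#undef TRACE
#define TRACE MyTraceFmt
#endif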
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Integrating Qt into legacy MFC applications We currently maintain a suit of MFC applications that are fairly well designed, however the user interface is beginning to look tired and a lot of the code is in need quite a bit of refactoring to tidy up some duplication and/or performance problems. We make use of quite a few custom controls that handle all their own drawing (all written using MFC).
Recently I've been doing more research into Qt and the benefits it provides (cross-platform and supports what you might call a more "professional" looking framework for UI development).
My question is - what would be the best approach to perhaps moving to the Qt framework? Does Qt play nice with MFC? Would it be better to start porting some of our custom controls to Qt and gradually integrate more and more into our existing MFC apps? (is this possible?).
Any advice or previous experience is appreciated.
A: In my company, we are currently using Qt and are very happy with it.
I personally never had to move an MFC app to the Qt framework, but here is something which might be of some interest for you:
Qt/MFC Migration Framework
It's part of Qt-Solutions, so this means you'll have to buy a Qt license along with a Qt-Solutions license. (edit: not any more)
I hope this helps!
A: (This doesn't really answer your specific questions but...)
I haven't personally used Qt, but it's not free for commercial Windows development.
Have you looked at wxWindows which is free? Nice article here. Just as an aside, if you wanted a single code base for all platforms, then you may have to migrate away from MFC - I am pretty sure (someone will correct if wrong) that MFC only targets Windows.
One other option would be to look at the Feature Pack update to MFC in SP1 of VS2008 - it includes access to new controls, including the Office style ribbon controls.
A: It's a tricky problem, and I suspect that the answer depends on how much time you have. You will get a much better result if you port your custom controls to Qt - if you use the QStyle classes to do the actual drawing then you'll end up with theme-able code right out of the box.
In general, my advice would be to bite the bullet and go the whole way at once. Sure, it might take longer, but the alternative is to spend an age trying to debug code that doesn't quite play ball, and end up writing more code to deal with minor incompatibilities between the two systems (been there, done that).
So, to summarise, my advice is to start a branch and rip out all your old MFC code and replace it with Qt. You'll get platform independence (almost) for free, and while it will take a while, you'll end up with a much nicer product at the end of it.
One final word of warning: make sure you take the time to understand the "Qt way of doing things" - in some cases it can be quite different to the MFC approach - the last thing you want to do is to end up with MFC-style Qt code.
A: I have lead a team doing this kind of thing before (not MFC to QT but the principles should work).
First we documented the dialogs and what their inputs, controls and outputs were. Also, we create several test cases especially for any clever logic inside the GUI.
Sometimes we had to refactor some business logic to provide a clean interface the GUIs but this is the way it should have been done in the first place tbh.
Now we had a list of GUIs, inputs, outputs, tests and an interface that the encapsulated GUI had to match.
We began, project by project, to create equivilant GUIs to the old ones. Once we did that we could slot the GUI in where the old one was, rebuild and test it. At first we tripped a lot but we soon worked out the common errors and fixed them. We navigated (I think) 612 dialogs although there was a team of about a dozen of us working on it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is there a library for rendering basic flow diagrams in Javascript/CSS? On a web page I want to dynamically render very basic flow diagrams, i.e. a few boxes joined by lines. Ideally the user could then click on one of these boxes (DIVs?) and be taken to a different page. Resorting to Flash seems like an overkill. Is anyone aware of any client-side (i.e. server agnostic) Javascript or CSS library/technique that may help achieve this?
A: Does the rendering have to be client side?
If yes, you could try Processing:
http://ejohn.org/blog/processingjs/
If you can do it server side, then Graphviz is a good choice.
http://www.graphviz.org/
A: The best and simplest I found is js-graph.it.
It also has this useful feature: deciding the orientation of the flow. For example, in my case I have a document workflow, so I need it to flow towards the right side.
An even simpler alternative is wz_jsGraphics. In my case I draw the arrows like this:
/**Draw an arrow made of 3 lines.
* Requires wz_jsGraphics (http://www.walterzorn.de/en/jsgraphics/jsgraphics_e.htm).
* @canvas a jsGraphics object used as canvas
* @blockFrom id of the object from which the arrow starts
* @blockTo id of the object where the arrow ends with a arrowhead
*/
function drawArrow(canvas, blockFrom, blockTo){
//blocks
var f = $("#" + blockFrom);
var t = $("#" + blockTo);
//lines positions and measures
var p1 = { left: f.position().left + f.outerWidth(), top: f.position().top + f.outerHeight()/2 };
var p4 = { left: t.position().left, top: t.position().top + t.outerHeight()/2 };
var mediumX = Math.abs(p4.left - p1.left)/2;
var p2 = { left: p1.left + mediumX, top: p1.top };
var p3 = { left: p1.left + mediumX, top: p4.top };
//line A
canvas.drawLine(p1.left, p1.top, p2.left, p2.top);
//line B
canvas.drawLine(p2.left, p2.top, p3.left, p3.top);
//line C
canvas.drawLine(p3.left, p3.top, p4.left, p4.top);
//arrowhead
canvas.drawLine(p4.left - 7, p4.top - 4, p4.left, p4.top);
canvas.drawLine(p4.left - 7, p4.top + 4, p4.left, p4.top);
}
var jg = new jsGraphics('myCanvasDiv');
drawArrow(jg, 'myFirstBlock', 'mySecondBlock');
jg.paint();
A: This kind of flowchart can be accomplished using CSS; resorting to JavaScript graphing libraries (canvas) might be overkill. You may wish to check out how some genealogy sites do this to get a family tree.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Cannot delete, a file with that name may already exist This is starting to vex me. I recently decided to clear out my FTP, and stumbled across an old Wordpress install I forgot I had (oh yes, very security conscious me). Anyway, for some reason deleting the directory failed so I investigated to see what was causing the blockage and I've narrowed it down to a file in wp-content.
Now when I try to delete this file I can get two errors. I've tried in Windowx Explorer (FTP) and the Web Control Panel's File Manager. Here's some error shots:
As you can see my File manager thinks the file is a Symbolic Link. While it scares me that
my web server is host to an obviously religoious artifact I'm also heavily confused by the situation.
*
*I've tried renaming the file.
*I've refreshed the FTP view.
*I've tried moving the file to another dir (which worked, no success on deletion though).
*I've tried editing the file and then deletion.
And I'm at a loss. Is there a special way to delete SymLinks? I've never heard of them, until now.
edit
Oho Windows you really are a magician of sorts. I decided to take a look at my FTP via command prompt and guess what? The file doesn't exist. Whether ftp ignores symlinks I don't know but I'm about to give up :P
A: First of all, try emailing your webhost either for SSH-access or to remove the symlink for you.
If you get SSH-access, use:
unlink index.php
Or if neither works: Create a PHP file there (for instance remove.php) that contains:
<?php unlink("./index.php") ?>
Then open that file in your browser, afterwards remove the remove.php file.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What kind of database refactoring tools are there? I am looking for something to integrate to my CI workflow.
I've heard of dbdeploy but I'm looking for something else. The reason I don't like dbdeploy is I don't want to install java on my server.
I would prefer of course that the solution doesn't involve stringing some shell scripts together.
A: Redgate will probably do everything you need. Expensive though.
EDIT - Specifically: http://www.red-gate.com/products/sql-development/readyroll/
A: It's not a tool, but Ambler and Sadalage's book, Refactoring Databases: Evolutionary Database Design is quite good.
A: You mentioned that you like dbDeploy and the fact that you do not want to install java on your server. Are you aware of the .NET port of this tool?
I used this recently with a team and we were very happy with it. In our case we were targeting SQL 2000, but it could easily be configured to run against other DB platforms, including MySQL. Of course it will require you to have the .NET Framework installed on the server... if that's an acceptable prerequisite vs. the java runtime.
A: Possible it's not your case, but if you decide to use Java take a look at liquibase
A: For those people who are interested in Liquibase but don't like XML migrations:
Take a look at groovy-liquibase, a plugin that supports Groovy migrations.
Liquibase is great in structure, but falls short with XML migrations. This plugin solves that problem.
A: Here is a feature comparison between
*
*Flyway
*Liquibase
*c5-db-migration
*dbdeploy
*mybatis
*MIGRATEdb
*migrate4j
*dbmaintain
*AutoPatch
A: Yep, Redgate is magic. And not that expensive for what it provides.
A: Try Agile DBRIRE for a Continuous Integration workflow. It's easy to set up and can generate a test DB from the dev DB. It also generates incremental DB updates for Staging and Production. The tool can compare the DEV and Staging/Production DBs and generate metadata and data update SQL scripts. The tool is free.
A: Visual Studio Team system (database edition) does some refactoring.
I read the Refactoring databases book. I think it's helpful.
But in software dev, you build tests so that you are safe refactoring. They don't touch on tests in the Refactoring Databases book, which was my big disappointment with it.
A: I think those tools are very good, but for my purpose I have written a custom own. The main reason for this was because of I'm working on a SQL Server Compact 3.5 database, so none of the listed tools worked.
Of course it isn't as powerful as the tools from Redgate but you get the most important features very quick.
It's able to rename all kinds of database objects and migrating columns to other tables and create a diff script for 2 databases.
A: An important part of Refactoring Databases is the migrations part.
A .NET migrations solution that does not require EF or Java is Rob Reynolds' Roundhouse.
Might be worth checking out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Microsoft .Net framework 3.5 SP1 Setup Fails On my Vista machine I cannot install the .Net framework 3.5 SP1. Setup ends few moments after ending the download of the required files, stating in the log that:
[08/26/08,09:46:11] Microsoft .NET Framework 2.0SP1 (CBS): [2] Error: Installation failed for component Microsoft .NET Framework 2.0SP1 (CBS). MSI returned error code 1
[08/26/08,09:46:13] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0SP1 (CBS) is not installed.
First thing I did was trying to install 2.0 SP1, but this time setup states that the "product is not supported on Vista system". Uhm.
The real big problem is that this setup fails also when it is called by the Visual Studio 2008 SP1.
Now, I searched the net for this, but I'm not finding a real solution... Any idea / hint? Did anybody have problems during SP1 install?
Thanks
A: Here is an article describing what might be your problem.
A: I also experienced it on my XP.
I searched for it, and the result was that some kind of beta .NET remained on my PC.
There is a tool to remove all .NET frameworks from the system. I ran it, and after that I successfully installed 3.5 SP1.
A: I had a similar error, though it listed error code 1060 rather than 1. I read elsewhere that having IIS turned off may be the issue, but this didn't seem to help. Then I read that Windows Update needed to be turned on. Mine was turned on, but when I tried to run it, it failed. I ran: regsvr32 wuaueng.dll which seemed to fix Windows Update. After Windows Update was working again I was able to install .NET 3.5 SP1. Note that I didn't actually run Windows Update, I just made sure that it could run.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Dynamic top down list of controls in WindowsForms and C#? In our project, SharpWired, we're trying to create a download component similar to the download windows in Firefox or Safari. That is, one single top down list of downloads which are custom controls containing progress bars, buttons and what not.
The requirements are that there should be one single list, with one element on each row. Each element must be a custom control. The whole list should be dynamically re-sizable, so that when you make it longer / shorter the list adds a scroll bar when needed and when you make it thinner / wider the custom controls should resize to the width of the list.
We've tried using a FlowLayoutPanel but haven't gotten resizing to work the way we want to. Preferably we should only have to set anchoring of the custom controls to Left & Right. We've also thought about using a TableLayoutPanel but found adding rows dynamically to be a too big overhead so far.
This must be quite a common use case, and it seems a bit weird to me that the FlowLayoutPanel has no intuitive way of doing this. Has anyone done something similar or have tips or tricks to get us under way?
Cheers!
/Adam
A: If you don't want to use databinding (via the DataRepeater control, as mentioned above), you could use a regular Panel control and set its AutoScroll property to true (to enable scrollbars).
Then, you could manually add your custom controls, and set the Dock property of each one to Top.
A: .NET 3.5 SP1 introduced a DataRepeater Windows Forms control which sounds like it'd do what you want. Bind it to the list of "downloads" (or whatever your list represents) and customize each item panel to include the controls you need.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to "unversion" a file in either svn and/or git It happens to me all the time. I accidentally version a file that I do not want versioned (i.e. developer/machine-specific config files).
If I commit this file, I will mess up the paths on all the other developer machines - they will be unhappy.
If I do delete the file from versioning, it will be deleted from the other developers machines - they will be unhappy.
If I choose to never commit the file, I always have a "dirty" checkout - I am unhappy.
Is a clean way to "unversion" a file from revision-control, that will result in no-one being unhappy?
edit: trying to clarify a bit: I have already commited the file to the repository and I want to only remove it from versioning - I specifically do not want it to be physically deleted from everyone doing a checkout. I initially wanted it to be ignored.
Answer: If I could accept a second answer, it would be this. It answers my question with respect to git - the accepted answer is about svn.
A: If you accidentally 'add' a file in svn & you haven't committed it, you can revert that file & it will remove the add.
A: In Git, in order to delete it from the tree, but NOT from the working directory, which I think is what you want, you can use the --cached flag, that is:
git rm --cached <filename>
A: Without having tried it...
In git, if your changes haven't been propagated to another repository, you should be able to git rm the affected file(s), git rebase --interactive to reorder the deletion commit to be just after the commit in which you accidentally added the offending files, and then squash those two commits together.
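A sketch of those steps (the file name and commit reference are placeholders):
git rm offending-file
git commit -m "remove accidentally versioned file"
git rebase -i <commit-before-the-accidental-add>
# in the editor: move the removal commit directly after the bad commit,
# then mark it "squash" (or "fixup") to fold the two together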
Of course, this won't help if someone else has pulled your changes.
A: It sounds like you have already added and committed the file to subversion (I assume that you are using Subversion). If that is the case, then there are only two ways to remove that file:
*
*Mark the file as deleted and commit.
*Perform an svnadmin dump, filter out the revision where you accidentally committed the file and perform an svnadmin load.
Trust me, you don't really want to do number 2. It will invalidate all working copies of the repository. The best is to do number 1, mark the file as ignored and apologise.
A: SVN version 1.5 supports removing/deleting a file from a repository with out losing the local file
taken from http://subversion.tigris.org/svn_1.5_releasenotes.html
New --keep-local option retains path after delete..
Delete (remove) now takes a --keep-local option to retain its targets locally, so paths will not be removed even if unmodified.
A: Look up svn:ignore and .gitignore - these features allow you to have extra files in your checkout that are ignored by your RCS (when doing a "status" operation or whatever).
For machine-specific config files, a good option is to check in a file named with an extra ".sample" extension, ie. config.xml.sample. Individual developers would make a copy of this file in config.xml and tweak it for their system. With svn:ignore or .gitignore you can ensure that the unversioned config.xml file doesn't show up as dirty all the time.
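For example, with config.xml as the machine-specific file:
# Subversion: ignore it in the containing directory
svn propset svn:ignore 'config.xml' .
# Git: one line in .gitignore at the repository root
echo 'config.xml' >> .gitignore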
In response to your edit: If you remove the file from the repository now, then your developers will get a conflict next time they do an update (assuming they have all changed the file for their system). They won't lose their local changes, they will be recoverable from somewhere. If they happen not to have made any local changes, then their config file will vanish but they can just re-get the previous one out of source control and use that.
A: To remove a file entirely from a git repository (Say you commited a file with a password in it, or accidently commited temporary files)
git filter-branch --index-filter 'git update-index --remove filename' HEAD
Then I think you have to commit, and push -f if it's in remote branches (remember it might annoy people if you start changing the repository's history.. and if they have pulled from you before, they could still have the file)
A: To remove a file already in source control:
git rm <filename>
and then
git commit -m ...
You should add every file you want to ignore to the .gitignore file. I additionally always check the .gitignore file to my repository, so if someone checks out the code on his machine, and the file gets generated again, he won't 'see' it as 'dirty'.
Of course if you already committed the file and someone else got your changes on another machine, you would have to alter every local repository to modify the history. At least that's a possible solution with git. I don't think svn would let you do that.
If the file is already on the master repository (git) or in the server (svn), I don't think there is a better solution than just deleting the file in another commit.
A: For SVN you can revert files you haven't committed yet. In TortoiseSVN you just right click the file in the commit window and choose Revert...
On command line use svn revert [file]
Don't know about GIT since I've never used it.
A: Two simple steps in SVN:
1. Add this directory in parent directory's svn:ignore property:
svn propedit svn:ignore .
2. Remove directory:
svn rm mydir
3. Commit
Please note that when other developers do a svn update, that directory will not get deleted. SVN just unversions it instead.
A: As far as I know there is no easy way to remove an added file from versioning control in svn once it is committed.
You will have to save the file somewhere else and delete it from version control. Then copy the backup back again.
It's a version control system after all... ;)
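A sketch of that save-and-restore dance (the file name is just an example):
cp config.xml /tmp/config.xml.bak    # stash a copy outside the working copy
svn rm config.xml
svn commit -m "remove machine-specific config"
cp /tmp/config.xml.bak config.xml    # restore the local copy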
A: You can exclude files from subversion with the global-ignore setting
http://svnbook.red-bean.com/en/1.1/ch07.html#svn-ch-7-sect-1.3.2
check out the documentation for details
A:
I there f I choose to never commit the file, I always have a "dirty" checkout - I am unhappy.
With regard to this particular point, you might want to .gitignore
the file as other have suggested, or to use a scheme like the one
described in this
answer
of mine.
A: You can unversion all files in the current directory with this command (it prints the svn commands; pipe the output to sh to run them). The sed bit reverses the order so svn can process it:
find . | sed '1!G;h;$!d'| awk '{print "svn rm --keep-local " $1}'
As already stated in other answers, a single file is unversioned with this:
svn rm --keep-local yourFileNameXXX
A: In Windows, if what you are looking for is copying the folder to another location and removing it from git, so that you don't see the icons anymore, you just delete the .git folder. The .git folder is hidden, so you have to go to Organize / Folder and Search Options, Display hidden Files option, under your main folder. It should unversion it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "77"
} |
Q: C++ : Opening a file in non exclusive mode I have to develop an application which parses a log file and sends specific data to a server. It has to run on both Linux and Windows.
The problem appears when I want to test the log rolling system (which appends .1 to the name of the log file and creates a new one with the same name). On Windows (haven't tested yet on Linux) I can't rename a file that I have opened with std::ifstream() (exclusive access?) even if I open it in "input mode" (ios::in).
Is there a cross-platform way to open a file in a non-exclusive way?
A:
Is there a way to open file in a non-exclusive way,
Yes, using Win32, passing the various FILE_SHARE_Xxxx flags to CreateFile.
is it cross platform?
No, it requires platform-specific code.
Due to annoying backwards compatibility concerns (DOS applications, being single-tasking, assume that nothing can delete a file out from under them, i.e. that they can fclose() and then fopen() without anything going amiss; Win16 preserved this assumption to make porting DOS applications easier, Win32 preserved this assumption to make porting Win16 applications easier, and it's awful), Windows defaults to opening files exclusively.
The underlying OS infrastructure supports deleting/renaming open files (although I believe it does have the restriction that memory-mapped files cannot be deleted, which I think isn't a restriction found on *nix), but the default opening semantics do not.
C++ has no notion of any of this; the C++ operating environment is much the same as the DOS operating environment--no other applications running concurrently, so no need to control file sharing.
A: It's not the reading operation that's requiring the exclusive mode, it's the rename, because this is essentially the same as moving the file to a new location.
I'm not sure but I don't think this can be done. Try copying the file instead, and later delete/replace the old file when it is no longer read.
A: Win32 filesystem semantics require that a file you rename not be open (in any mode) at the time you do the rename. You will need to close the file, rename it, and then create the new log file.
Unix filesystem semantics allow you to rename a file that's open because the filename is just a pointer to the inode.
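On Windows that boils down to something like this (a sketch with illustrative names; std::rename needs <cstdio>):
std::ofstream log("app.log", std::ios::app);
// ... time to roll over ...
log.close();                          // release the handle first
std::rename("app.log", "app.log.1");  // now the rename can succeed
log.open("app.log", std::ios::out);   // start a fresh log file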
A: If you are only reading from the file I know it can be done with windows api CreateFile. Just specify FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE as the input to dwShareMode.
Unfortunately this is not cross-platform. But there might be something similar for Linux.
See msdn for more info on CreateFile.
EDIT: Just a quick note about Greg Hewgill's comment. I've just tested with the FILE_SHARE* stuff (to be 100% sure), and it is possible to both delete and rename files in Windows if you open read-only and specify the FILE_SHARE* parameters.
A: I'd make sure you don't keep files open. This leads to weird stuff if your app crashes for example.
What I'd do:
*
*Abstract (reading / writing / rolling over to a new file) into one class, and arrange closing of the file when you want to roll over to a new one in that class. (this is the neatest way, and since you already have the roll-over code you're already halfway there.)
*If you must have multiple read/write access points, need all features of fstreams and don't want to write that complete a wrapper then the only cross platform solution I can think of is to always close the file when you don't need it, and have the roll-over code try to acquire exclusive access to the file a few times when it needs to roll-over before giving up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Total row count in GridView control using LinqDataSource and paging I'm having a problem obtaining the total row count for items displayed in a Gridview using Paging and with a LinqDataSource as the source of data.
I've tried several approaches:
protected void GridDataSource_Selected(object sender, LinqDataSourceStatusEventArgs e)
{
totalLabel.Text = e.TotalRowCount.ToString();
}
returns -1 every time.
protected void LinqDataSource1_Selected(object sender, LinqDataSourceStatusEventArgs e)
{
System.Collections.Generic.List<country> lst = e.Result as System.Collections.Generic.List<country>;
int count = lst.Count;
}
only gives me the count for the current page, and not the total.
Any other suggestions?
A: The LinqDataSourceEventArgs returned in those events return -1 on these occasions:
-1 if the LinqDataSourceStatusEventArgs object was created during a data modification operation; -1 if you enabled customized paging by setting AutoPage to true and by setting RetrieveTotalRowCount to false.
Check here for more information - the table towards the bottom, shows different properties to set to get the rowcount back, but it looks like you either have to set AutoPage and AllowPage properties to either both true or both false.
Judging by the table in the link above and the example you provide you have Autopage set to false, but AllowPaging set to true, therefore it is returning the amount of rows in the page.
HTH
A: The TotalRowCount property is only valid for certain values of AutoPage and AllowPaging. They should both be true (in your case) or both be false.
chech out the following page for an explanation of the TotalRowCount property.
http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.linqdatasourcestatuseventargs.totalrowcount.aspx
A: Well, I've already set AutoPage and AllowPaging to true.
I've confirmed that RetrieveTotalRowCount is set to true by checking its value in debug mode (couldn't find where to change its value).
And it still returns -1.
The only thing missing is:
-1 if the LinqDataSourceStatusEventArgs object was created during a data modification operation;
and I'm not quite sure what this means.
I am using a modified version of the LinqDataSource to enable some custom filtering, so that might be the problem. On the other hand, while messing around in debug mode I did manage to check the value of the arguments.TotalRowCount and it was correct. But the value that comes out in the Selected event is always -1.
A: I was stuck with the same problem. I solved my problem with the following line of code
protected void LinqDataSourcePoints_Selected(object sender, LinqDataSourceStatusEventArgs e)
{
    totalRecords = (e.Result as List<country>).Count;  // the generic type parameter was lost in formatting; use your entity type
}
Explanation:
1-Parse the e.Result as your data source
2-Get the count.
Work perfectly for me.
A: Try this, I have tested it and it returns all the rows.
protected void LinqDataSource1_Selecting(object sender, LinqDataSourceStatusEventArgs e)
{
System.Collections.Generic.List<country> lst = e.Result as System.Collections.Generic.List<country>;
int count = lst.Count;
}
make sure your event is "Selecting"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: What are some instances in which expression trees are useful? I completely understand the concept of expression trees, but I am having a hard time trying to find situations in which they are useful. Is there a specific instance in which expression trees can be applied? Or is it only useful as a transport mechanism for code? I feel like I am missing something here. Thanks!
A: Some unit test mocking frameworks make use of expression trees in order to set up strongly typed expectations/verifications. Ie:
myMock.Verify(m => m.SomeMethod(someObject)); // tells moq to verify that the method
// SomeMethod was called with
// someObject as the argument
Here, the expression is never actually executed, but the expression itself holds the interesting information. The alternative without expression trees would be
myMock.Verify("SomeMethod", someObject) // we've lost the strong typing
A:
Or is it only useful as a transport mechanism for code?
It's useful as an execution mechanism for code. Using the interpreter pattern, expression trees can directly be interpreted. This is useful because it's very easy and fast to implement. Such interpreters are ubiquitous and used even in cases that don't seem to “interpret” anything, e.g. for printing nested structures.
A: Expression trees are useful when you need to access function logic in order to alter or reapply it in some way.
Linq to SQL is a good example:
//a linq to sql statement
var recs = (
from rec in LinqDataContext.Table
where rec.IntField > 5
select rec );
If we didn't have expression trees this statement would have to return all the records, and then apply the C# where logic to each.
With expression trees that where rec.IntField > 5 can be parsed into SQL:
--SQL statment executed
select *
from [table]
where [table].[IntField] > 5
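To make the distinction concrete, here is a small self-contained C# sketch (my own illustration, not from the original answer): the only difference between getting a compiled delegate and getting an inspectable tree is the declared type of the lambda.
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Compiled straight to IL: it can only be invoked.
        Func<int, bool> compiled = rec => rec > 5;

        // Kept as a data structure: it can be walked, translated (e.g. to SQL), or compiled.
        Expression<Func<int, bool>> tree = rec => rec > 5;

        var body = (BinaryExpression)tree.Body;
        Console.WriteLine(body.NodeType);      // GreaterThan
        Console.WriteLine(tree.Compile()(7));  // True
    }
}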
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Flex tools for Mac I'm starting to develop with Flex on my Mac, but I can't find good tools to ease the development (apart from Flex Builder).
What is your favourite choice for Flex development on Mac?
A: TextMate + the Flex and ActionScript 3 bundles is a great combo. Throw in ProjectPlus and you have an almost full featured development environment. What's missing is visual design tools (which I'm sceptical of anyway), debugger (the command line version isn't very easy to work with) and a profiler.
I've long used TextMate and the additions mentioned above for all my Flex development, but lately the lack of debugger and profiler has made me use FlexBuilder too, just to get those tools.
A: Unfortunately, you're pretty much limited to Flex Builder or some text editor combined with the Flex SDK. I've been hoping that someone would port FlashDevelop, my favorite AS/Flex IDE over to the Mac (at least via Mono), but no dice as of yet.
If you can wait X number of years, perhaps my CocoAS IDE will be complete ;-)
A: TextMate is great, but if you're looking for something free, you can hack as3 onto XCode (I've used it, and it is fine, but some of the highlighting is off, and auto-completion is weak).
As for a debugging environment, I would recommend XTrace (http://mabblog.com/xtrace.html). The library that comes with it is AS2, but you can easily port it to AS3 (as I did).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: So what am I missing with this here WPF? Background: I have a little video playing app with a UI inspired by the venerable Sasami2k, just updated to use VMR9 (i.e. Direct3D9 with DirectShow) and be less unstable. Currently, it's a C++ app using raw Win32, through necessity: none of the various toolkits are worth a damn. WPF, in particular, was not possible, due to its airspace restrictions.
OK, so, now that D3DImage exists it might be viable to mix and match D3D/VMR9/DirectShow and WPF. Given past frustrations with Win32's inextensibility, this seems like a good thing.
But y'know, I'm falling at the first hurdle here.
With Win32 I have created (very easily) a borderless window that's resizable, resizes proportionately, snaps to the screen edges, and takes up the whole screen (including taskbar area) when maximized. It's a video app, so these are all pretty desirable properties.
OK, so, how to do the same with WPF?
In Win32, I use:
WM_GETMINMAXINFO to control the maximize behaviour
WM_NCHITTEST to control the resize borders
WM_MOVING to control the snap-to-screen-edges
WM_SIZING to control the resize aspect ratio
However, looking at WPF it seems that the various events arrive too late, unless I'm misunderstanding the documentation?
For example, I don't know when I'm mid-move, as LocationChanged says it fires only once the window has moved (which is too late).
Similarly, it appears that StateChanged only fires once the window has been restored/maximized (when I need the information prior to the maximize, to tell the system the correct maximize size).
And I seem to be completely overlooking where the system tells me about resizes. Likewise the hit testing.
So, uh, am I missing something here, or do I have no choice but to drop back to hooking the wndproc of this thing anyway? Can I do what I want without hooking the WndProc?
If I have to use the WndProc I might as well stick with my existing codebase; I want to have simpler, cleaner UI code, and moving away from the WndProc is fundamental to this.
If I do have to hook the WndProc, I have to wonder--why? Win32 has got the sizing/sized, moving/moved, poschanging/poschanged window messages, and they're all useful. Why wouldn't WPF replicate the same set of events? It seems like an unnecessary gap in functionality.
Plus, it means that WPF is tied to a specific USER32-dependent implementation. This means that MS can't (in Windows 7 or 8, say) invert the display layer to make WPF "native" and emulate HWNDs and WndProcs for legacy apps--even though this is precisely what MS should be doing.
A: OK, to answer my own question, I was missing Adorners (never came back in any of the searches I did, so it doesn't seem that they're as widely known as they perhaps should be).
They seem rather more complex than the WndProc overrides, unfortunately, but I think it should be possible to manhandle them into doing what I want.
A: In code you can set the WindowStyle property to "None" and WindowState to "Maximized".
I'm not sure what the XAML would look like.
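For what it's worth, a sketch of the equivalent XAML (the class name is illustrative, and this is untested here):
<Window x:Class="MyApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        WindowStyle="None" WindowState="Maximized" ResizeMode="CanResize">
</Window>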
A:
And I seem to be completely overlooking where the system tells me about resizes. Likewise the hit testing.
For the resizing you're indeed missing the SizeChanged event.
AFAIK there is sadly no OnSizeChanging, OnLocationChanging and OnStateChanging event on a Window in .NET
I saw that one, but as far as I can tell it only fires after the size has changed, whereas I need the event to fire during the resize. Unless I'm misreading the docs and it actually fires continuously?
It does not fire continuously, but you can probably use the ResizeBegin and ResizeEnd events to do that.
Aren't they WinForms events?
Hmm, you're right.
A: Can you perhaps override the ArrangeOverride and/or MeasureOverride to make up for those missing resize events? Measure is the first pass, and occurs when a layout needs to adjust for a new size, so it's kind of like a size changing event.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Finding the crash dump files for a C# app An app I'm writing always crashes on a clients computer, but I don't get an exception description, or a stack trace.
The only thing I get is a crash report that windows wants to send to Microsoft.
I would like to get that dump file and investigate it myself, but I cannot find it.
When I "View the contents of the error report" I can see the different memory dumps, but I cannot copy it or save it.
A: You can use the Windows debugging tools to view the crash dump. To get the most use out of it, you'll need an exact copy of the symbols for that application (i.e. same version).
Have a look at Tess's blog for tutorials on how to use the Windows debugging tools. I refer to her blog constantly whenever I'm in need of analysing crash dumps.
A: Tess' blog was a great resource. Eventually I managed to figure out how to do remote debugging which means I didn't have to look at the crash dump.
For the general community, here are some links I found useful:
*
*Remote debugging, how to set up and run it.
*Crash dumps, how to save and debug them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: How do I gracefully shut down a Mongrel web server My Ruby on Rails app is set up with the usual pack of Mongrels behind an Apache configuration. We've noticed that our Mongrel web server memory usage can grow quite large on certain operations and we'd really like to be able to dynamically do a graceful restart of selected Mongrel processes at any time.
However, for reasons I won't go into here it can sometimes be very important that we don't interrupt a Mongrel while it is servicing a request, so I assume a simple process kill isn't the answer.
Ideally, I want to send the Mongrel a signal that says "finish whatever you're doing and then quit before accepting any more connections".
Is there a standard technique or best practice for this?
A: Look at using monit. You can dynamically restart mongrel based on memory or CPU usage. Here's a line from a config file that I wrote for a client of mine.
check process mongrel-8000 with pidfile /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid
start program = "/usr/local/bin/mongrel_rails cluster::start --only 8000"
stop program = "/usr/local/bin/mongrel_rails cluster::stop --only 8000"
if totalmem is greater than 150.0 MB for 5 cycles then restart # eating up memory?
if cpu is greater than 50% for 8 cycles then alert # send an email to admin
if cpu is greater than 80% for 5 cycles then restart # hung process?
if loadavg(5min) greater than 10 for 3 cycles then restart # bad, bad, bad
if 3 restarts within 5 cycles then timeout # something is wrong, call the sys-admin
if failed host 192.168.106.53 port 8000 protocol http request /monit_stub
with timeout 10 seconds
then restart
group mongrel
You'd then repeat this configuration for all of your mongrel cluster instances. The monit_stub line is just an empty file that monit tries to download. If it can't, it tries to restart the instance as well.
Note: the resource monitoring seems not to work on OS X with the Darwin kernel.
A: I've done a little more investigation into the Mongrel source and it turns out that Mongrel installs a signal handler to catch a standard process kill (TERM) and do a graceful shutdown, so I don't need a special procedure after all.
You can see this working from the log output you get when killing a Mongrel while it's processing a request. For example:
** TERM signal received.
Thu Aug 28 00:52:35 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:41 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'
Waiting for 2 requests to finish, could take 60 seconds.
Thu Aug 28 00:52:43 +0000 2008 (13051) Rendering layoutfalsecontent_typetext/htmlactionindex within layouts/application
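So, assuming the mongrel_cluster layout from the monit example above, gracefully stopping one instance is just a plain TERM:
# Mongrel traps TERM, drains its request queue, then exits
kill -TERM `cat /var/www/apps/fooapp/current/tmp/pids/mongrel.8000.pid`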
A: Better question is how to keep your app from consuming so much memory that it requires you to reboot mongrels from time to time.
www.modrails.com reduced our memory footprint significantly
A: Boggy:
If you have one process running, it will gracefully shut down (service all the requests in its queue which should only be 1 if you are using proper load balancing). The problem is you can't start the new server until the old one dies, so your users will queue up in the load balancer. What I've found successful is a 'cascade' or rolling restart of the mongrels. Instead of stopping them all and starting them all (therefore queuing requests until the one mongrel is done, stopped, restarted and accepting connections), you can stop then start each mongrel sequentially, blocking the call to restart the next mongrel until the previous one is back up (use a real HTTP check to a /status controller). As your mongrels roll, only one at a time is down and you are serving across two code bases - if you can't do this you should throw up a maintenance page for a minute. You should be able to automate this with capistrano or whatever your deploy tool is.
So I have 3 tasks:
cap:deploy - which does the traditional restart all at the same time method with a hook that puts up a maintenance page and then takes it down after an HTTP check.
cap:deploy:rolling - which does this cascade across the machine (I pull from a iClassify to know how many mongrels are on the given machine) without a maintenance page.
cap deploy:migrations - which does maintenance page + migrations since its usually a bad idea to run migrations 'live'.
A: Try using:
mongrel_cluster_ctl stop
You can also use:
mongrel_cluster_ctl restart
A: got a question
what happens when /usr/local/bin/mongrel_rails cluster::start --only 8000 is triggered ?
are all of the requests served by this particular process, to their end ? or are they aborted ?
I'm curious if this whole start/restart thing can be done without affecting the end users...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Getting parts of a URL (Regex) Given the URL (single line):
http://test.example.com/dir/subdir/file.html
How can I extract the following parts using regular expressions:
*
*The Subdomain (test)
*The Domain (example.com)
*The path without the file (/dir/subdir/)
*The file (file.html)
*The path with the file (/dir/subdir/file.html)
*The URL without the path (http://test.example.com)
*(add any other that you think would be useful)
The regex should work correctly even if I enter the following URL:
http://example.example.com/example/example/example.html
A: I realize I'm late to the party, but there is a simple way to let the browser parse a url for you without a regex:
var a = document.createElement('a');
a.href = 'http://www.example.com:123/foo/bar.html?fox=trot#foo';
['href','protocol','host','hostname','port','pathname','search','hash'].forEach(function(k) {
console.log(k+':', a[k]);
});
/*//Output:
href: http://www.example.com:123/foo/bar.html?fox=trot#foo
protocol: http:
host: www.example.com:123
hostname: www.example.com
port: 123
pathname: /foo/bar.html
search: ?fox=trot
hash: #foo
*/
A: Propose a much more readable solution (in Python, but applies to any regex):
import re

def url_path_to_dict(path):
pattern = (r'^'
r'((?P<schema>.+?)://)?'
r'((?P<user>.+?)(:(?P<password>.*?))?@)?'
r'(?P<host>.*?)'
r'(:(?P<port>\d+?))?'
r'(?P<path>/.*?)?'
r'(?P<query>[?].*?)?'
r'$'
)
regex = re.compile(pattern)
m = regex.match(path)
d = m.groupdict() if m is not None else None
return d
def main():
print url_path_to_dict('http://example.example.com/example/example/example.html')
Prints:
{
'host': 'example.example.com',
'user': None,
'path': '/example/example/example.html',
'query': None,
'password': None,
'port': None,
'schema': 'http'
}
A: subdomain and domain are difficult because the subdomain can have several parts, as can the top level domain, http://sub1.sub2.domain.co.uk/
the path without the file : http://[^/]+/((?:[^/]+/)*(?:[^/]+$)?)
the file : http://[^/]+/(?:[^/]+/)*((?:[^/.]+\.)+[^/.]+)$
the path with the file : http://[^/]+/(.*)
the URL without the path : (http://[^/]+/)
(Markdown isn't very friendly to regexes)
A: Try the following:
^((ht|f)tp(s?)\:\/\/|~/|/)?([\w]+:\w+@)?([a-zA-Z]{1}([\w\-]+\.)+([\w]{2,5}))(:[\d]{1,5})?((/?\w+/)+|/?)(\w+\.[\w]{3,4})?((\?\w+=\w+)?(&\w+=\w+)*)?
It supports HTTP / FTP, subdomains, folders, files etc.
I found it from a quick google search:
Link
A: This improved version should work as reliably as a parser.
// Applies to URI, not just URL or URN:
// http://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Relationship_to_URL_and_URN
//
// http://labs.apache.org/webarch/uri/rfc/rfc3986.html#regexp
//
// (?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?
//
// http://en.wikipedia.org/wiki/URI_scheme#Generic_syntax
//
// $@ matches the entire uri
// $1 matches scheme (ftp, http, mailto, mshelp, ymsgr, etc)
// $2 matches authority (host, user:pwd@host, etc)
// $3 matches path
// $4 matches query (http GET REST api, etc)
// $5 matches fragment (html anchor, etc)
//
// Match specific schemes, non-optional authority, disallow white-space so can delimit in text, and allow 'www.' w/o scheme
// Note the schemes must match ^[^\s|:/?#]+(?:\|[^\s|:/?#]+)*$
//
// (?:()(www\.[^\s/?#]+\.[^\s/?#]+)|(schemes)://([^\s/?#]*))([^\s?#]*)(?:\?([^\s#]*))?(#(\S*))?
//
// Validate the authority with an orthogonal RegExp, so the RegExp above won’t fail to match any valid urls.
function uriRegExp( flags, schemes/* = null*/, noSubMatches/* = false*/ )
{
if( !schemes )
schemes = '[^\\s:\/?#]+'
else if( !RegExp( /^[^\s|:\/?#]+(?:\|[^\s|:\/?#]+)*$/ ).test( schemes ) )
throw TypeError( 'expected URI schemes' )
return noSubMatches ? new RegExp( '(?:www\\.[^\\s/?#]+\\.[^\\s/?#]+|' + schemes + '://[^\\s/?#]*)[^\\s?#]*(?:\\?[^\\s#]*)?(?:#\\S*)?', flags ) :
new RegExp( '(?:()(www\\.[^\\s/?#]+\\.[^\\s/?#]+)|(' + schemes + ')://([^\\s/?#]*))([^\\s?#]*)(?:\\?([^\\s#]*))?(?:#(\\S*))?', flags )
}
// http://en.wikipedia.org/wiki/URI_scheme#Official_IANA-registered_schemes
function uriSchemesRegExp()
{
return 'about|callto|ftp|gtalk|http|https|irc|ircs|javascript|mailto|mshelp|sftp|ssh|steam|tel|view-source|ymsgr'
}
A: const URI_RE = /^(([^:\/\s]+):\/?\/?([^\/\s@]*@)?([^\/@:]*)?:?(\d+)?)?(\/[^?]*)?(\?([^#]*))?(#[\s\S]*)?$/;
/**
* GROUP 1 ([scheme][authority][host][port])
* GROUP 2 (scheme)
* GROUP 3 (authority)
* GROUP 4 (host)
* GROUP 5 (port)
* GROUP 6 (path)
* GROUP 7 (?query)
* GROUP 8 (query)
* GROUP 9 (fragment)
*/
URI_RE.exec("https://john:[email protected]:123/forum/questions/?tag=networking&order=newest#top");
URI_RE.exec("/forum/questions/?tag=networking&order=newest#top");
URI_RE.exec("ldap://[2001:db8::7]/c=GB?objectClass?one");
URI_RE.exec("mailto:[email protected]");
Above you can find javascript implementation with modified regex
A: /^((?P<scheme>https?|ftp):\/)?\/?((?P<username>.*?)(:(?P<password>.*?)|)@)?(?P<hostname>[^:\/\s]+)(?P<port>:([^\/]*))?(?P<path>(\/\w+)*\/)(?P<filename>[-\w.]+[^#?\s]*)?(?P<query>\?([^#]*))?(?P<fragment>#(.*))?$/
From my answer on a similar question. Works better than some of the others mentioned because they had some bugs (such as not supporting username/password, not supporting single-character filenames, fragment identifiers being broken).
A: I found the highest voted answer (hometoast's answer) doesn't work perfectly for me. Two problems:
*
*It can not handle port number.
*The hash part is broken.
The following is a modified version:
^((http[s]?|ftp):\/)?\/?([^:\/\s]+)(:([^\/]*))?((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(\?([^#]*))?(#(.*))?$
Position of parts are as follows:
int SCHEMA = 2, DOMAIN = 3, PORT = 5, PATH = 6, FILE = 8, QUERYSTRING = 9, HASH = 12
Edit posted by anon user:
function getFileName(path) {
return path.match(/^((http[s]?|ftp):\/)?\/?([^:\/\s]+)(:([^\/]*))?((\/[\w\/-]+)*\/)([\w\-\.]+[^#?\s]+)(\?([^#]*))?(#(.*))?$/i)[8];
}
A: You can get all the http/https, host, port, path as well as query by using Uri object in .NET.
just the difficult task is to break the host into sub domain, domain name and TLD.
There is no standard to do so and can't be simply use string parsing or RegEx to produce the correct result. At first, I am using RegEx function but not all URL can be parse the subdomain correctly. The practice way is to use a list of TLDs. After a TLD for a URL is defined the left part is domain and the remaining is sub domain.
However the list need to maintain it since new TLDs is possible. The current moment I know is publicsuffix.org maintain the latest list and you can use domainname-parser tools from google code to parse the public suffix list and get the sub domain, domain and TLD easily by using DomainName object: domainName.SubDomain, domainName.Domain and domainName.TLD.
This answer is also helpful:
Get the subdomain from a URL
CaLLMeLaNN
A: Here is one that is complete, and doesnt rely on any protocol.
function getServerURL(url) {
var m = url.match("(^(?:(?:.*?)?//)?[^/?#;]*)");
console.log(m[1]) // Remove this
return m[1];
}
getServerURL("http://dev.test.se")
getServerURL("http://dev.test.se/")
getServerURL("//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js")
getServerURL("//")
getServerURL("www.dev.test.se/sdas/dsads")
getServerURL("www.dev.test.se/")
getServerURL("www.dev.test.se?abc=32")
getServerURL("www.dev.test.se#abc")
getServerURL("//dev.test.se?sads")
getServerURL("http://www.dev.test.se#321")
getServerURL("http://localhost:8080/sads")
getServerURL("https://localhost:8080?sdsa")
Prints
http://dev.test.se
http://dev.test.se
//ajax.googleapis.com
//
www.dev.test.se
www.dev.test.se
www.dev.test.se
www.dev.test.se
//dev.test.se
http://www.dev.test.se
http://localhost:8080
https://localhost:8080
A: None of the above worked for me. Here's what I ended up using:
/^(?:((?:https?|s?ftp):)\/\/)([^:\/\s]+)(?::(\d*))?(?:\/([^\s?#]+)?([?][^?#]*)?(#.*)?)?/
A: I like the regex that was published in "Javascript: The Good Parts".
Its not too short and not too complex.
This page on github also has the JavaScript code that uses it.
But it an be adapted for any language.
https://gist.github.com/voodooGQ/4057330
A: A single regex to parse and break up a full URL including query parameters and anchors, e.g.
^((http[s]?|ftp):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$
RexEx positions:
url: RegExp['$&'],
protocol:RegExp.$2,
host:RegExp.$3,
path:RegExp.$4,
file:RegExp.$6,
query:RegExp.$7,
hash:RegExp.$8
you could then further parse the host ('.' delimited) quite easily.
What I would do is use something like this:
/*
^(.*:)//([A-Za-z0-9\-\.]+)(:[0-9]+)?(.*)$
*/
proto $1
host $2
port $3
the-rest $4
the further parse 'the rest' to be as specific as possible. Doing it in one regex is, well, a bit crazy.
A: I needed a regular Expression to match all urls and made this one:
/(?:([^\:]*)\:\/\/)?(?:([^\:\@]*)(?:\:([^\@]*))?\@)?(?:([^\/\:]*)\.(?=[^\.\/\:]*\.[^\.\/\:]*))?([^\.\/\:]*)(?:\.([^\/\.\:]*))?(?:\:([0-9]*))?(\/[^\?#]*(?=.*?\/)\/)?([^\?#]*)?(?:\?([^#]*))?(?:#(.*))?/
It matches all urls, any protocol, even urls like
ftp://user:[email protected]:8080/dir1/dir2/file.php?param1=value1#hashtag
The result (in JavaScript) looks like this:
["ftp", "user", "pass", "www.cs", "server", "com", "8080", "/dir1/dir2/", "file.php", "param1=value1", "hashtag"]
An url like
mailto://[email protected]
looks like this:
["mailto", "admin", undefined, "www.cs", "server", "com", undefined, undefined, undefined, undefined, undefined]
A: I was trying to solve this in javascript, which should be handled by:
var url = new URL('http://a:[email protected]:890/path/wah@t/foo.js?foo=bar&bingobang=&[email protected]#foobar/bing/bo@ng?bang');
since (in Chrome, at least) it parses to:
{
"hash": "#foobar/bing/bo@ng?bang",
"search": "?foo=bar&bingobang=&[email protected]",
"pathname": "/path/wah@t/foo.js",
"port": "890",
"hostname": "example.com",
"host": "example.com:890",
"password": "b",
"username": "a",
"protocol": "http:",
"origin": "http://example.com:890",
"href": "http://a:[email protected]:890/path/wah@t/foo.js?foo=bar&bingobang=&[email protected]#foobar/bing/bo@ng?bang"
}
However, this isn't cross browser (https://developer.mozilla.org/en-US/docs/Web/API/URL), so I cobbled this together to pull the same parts out as above:
^(?:(?:(([^:\/#\?]+:)?(?:(?:\/\/)(?:(?:(?:([^:@\/#\?]+)(?:\:([^:@\/#\?]*))?)@)?(([^:\/#\?\]\[]+|\[[^\/\]@#?]+\])(?:\:([0-9]+))?))?)?)?((?:\/?(?:[^\/\?#]+\/+)*)(?:[^\?#]*)))?(\?[^#]+)?)(#.*)?
Credit for this regex goes to https://gist.github.com/rpflorence who posted this jsperf http://jsperf.com/url-parsing (originally found here: https://gist.github.com/jlong/2428561#comment-310066) who came up with the regex this was originally based on.
The parts are in this order:
var keys = [
"href", // http://user:[email protected]:81/directory/file.ext?query=1#anchor
"origin", // http://user:[email protected]:81
"protocol", // http:
"username", // user
"password", // pass
"host", // host.com:81
"hostname", // host.com
"port", // 81
"pathname", // /directory/file.ext
"search", // ?query=1
"hash" // #anchor
];
There is also a small library which wraps it and provides query params:
https://github.com/sadams/lite-url (also available on bower)
If you have an improvement, please create a pull request with more tests and I will accept and merge with thanks.
A: I'm a few years late to the party, but I'm surprised no one has mentioned the Uniform Resource Identifier specification has a section on parsing URIs with a regular expression. The regular expression, written by Berners-Lee, et al., is:
^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
12 3 4 5 6 7 8 9
The numbers in the second line above are only to assist readability;
they indicate the reference points for each subexpression (i.e., each
paired parenthesis). We refer to the value matched for subexpression
as $. For example, matching the above expression to
http://www.ics.uci.edu/pub/ietf/uri/#Related
results in the following subexpression matches:
$1 = http:
$2 = http
$3 = //www.ics.uci.edu
$4 = www.ics.uci.edu
$5 = /pub/ietf/uri/
$6 = <undefined>
$7 = <undefined>
$8 = #Related
$9 = Related
For what it's worth, I found that I had to escape the forward slashes in JavaScript:
^(([^:\/?#]+):)?(\/\/([^\/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
A: Java offers a URL class that will do this. Query URL Objects.
On a side note, PHP offers parse_url().
A: I would recommend not using regex. An API call like WinHttpCrackUrl() is less error prone.
http://msdn.microsoft.com/en-us/library/aa384092%28VS.85%29.aspx
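A hedged C++ sketch of that call (setting a component length to -1 asks the API to hand back pointers into the original string; error handling omitted):
#include <windows.h>
#include <winhttp.h>
#pragma comment(lib, "winhttp.lib")

URL_COMPONENTS parts = { sizeof(parts) };
parts.dwSchemeLength    = (DWORD)-1;  // request pointers into the input URL
parts.dwHostNameLength  = (DWORD)-1;
parts.dwUrlPathLength   = (DWORD)-1;
parts.dwExtraInfoLength = (DWORD)-1;

LPCWSTR url = L"http://test.example.com/dir/subdir/file.html";
if (WinHttpCrackUrl(url, 0, 0, &parts))
{
    // parts.lpszHostName/dwHostNameLength now delimit "test.example.com",
    // parts.lpszUrlPath/dwUrlPathLength delimit "/dir/subdir/file.html", etc.
}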
A: I tried a few of these, but they didn't cover my needs, especially the highest voted one, which didn't catch a URL without a path (http://example.com/).
Also, the lack of group names made it unusable in Ansible (or perhaps my Jinja2 skills are lacking).
So this is my version, slightly modified, with the source being the highest voted version here:
^((?P<protocol>http[s]?|ftp):\/)?\/?(?P<host>[^:\/\s]+)(?P<path>((\/\w+)*\/)([\w\-\.]+[^#?\s]+))*(.*)?(#[\w\-]+)?$
A: I built this one. It is very permissive; it is not meant to validate the URL, just to divide it.
^((http[s]?):\/\/)?([a-zA-Z0-9-.]*)?([\/]?[^?#\n]*)?([?]?[^?#\n]*)?([#]?[^?#\n]*)$
*
*match 1 : full protocole with :// (http or https)
*match 2 : protocole without ://
*match 3 : host
*match 4 : slug
*match 5 : param
*match 6 : anchor
work
http://
https://
www.demo.com
/slug
?foo=bar
#anchor
https://demo.com
https://demo.com/
https://demo.com/slug
https://demo.com/slug/foo
https://demo.com/?foo=bar
https://demo.com/?foo=bar#anchor
https://demo.com/?foo=bar&bar=foo#anchor
https://www.greate-demo.com/
crash
#anchor#
?toto?
A: I needed some REGEX to parse the components of a URL in Java.
This is what I'm using:
"^(?:(http[s]?|ftp):/)?/?" + // METHOD
"([^:^/^?^#\\s]+)" + // HOSTNAME
"(?::(\\d+))?" + // PORT
"([^?^#.*]+)?" + // PATH
"(\\?[^#.]*)?" + // QUERY
"(#[\\w\\-]+)?$" // ID
Java Code Snippet:
final Pattern pattern = Pattern.compile(
"^(?:(http[s]?|ftp):/)?/?" + // METHOD
"([^:^/^?^#\\s]+)" + // HOSTNAME
"(?::(\\d+))?" + // PORT
"([^?^#.*]+)?" + // PATH
"(\\?[^#.]*)?" + // QUERY
"(#[\\w\\-]+)?$" // ID
);
final Matcher matcher = pattern.matcher(url);
System.out.println(" URL: " + url);
if (matcher.matches())
{
System.out.println(" Method: " + matcher.group(1));
System.out.println("Hostname: " + matcher.group(2));
System.out.println(" Port: " + matcher.group(3));
System.out.println(" Path: " + matcher.group(4));
System.out.println(" Query: " + matcher.group(5));
System.out.println(" ID: " + matcher.group(6));
return matcher.group(2);
}
System.out.println();
System.out.println();
A: Using http://www.fileformat.info/tool/regex.htm hometoast's regex works great.
But here is the deal, I want to use different regex patterns in different situations in my program.
For example, I have this URL, and I have an enumeration that lists all supported URLs in my program. Each object in the enumeration has a method getRegexPattern that returns the regex pattern which will then be used to compare with a URL. If the particular regex pattern returns true, then I know that this URL is supported by my program. So, each enumeration has its own regex depending on where it should look inside the URL.
Hometoast's suggestion is great, but in my case, I think it wouldn't help (unless I copy paste the same regex in all enumerations).
That is why I wanted the answer to give the regex for each situation separately. Although +1 for hometoast. ;)
A: I know you're claiming language-agnostic on this, but can you tell us what you're using just so we know what regex capabilities you have?
If you have the capabilities for non-capturing matches, you can modify hometoast's expression so that subexpressions that you aren't interested in capturing are set up like this:
(?:SOMESTUFF)
You'd still have to copy and paste (and slightly modify) the Regex into multiple places, but this makes sense--you're not just checking to see if the subexpression exists, but rather if it exists as part of a URL. Using the non-capturing modifier for subexpressions can give you what you need and nothing more, which, if I'm reading you correctly, is what you want.
Just as a small, small note, hometoast's expression doesn't need to put brackets around the 's' for 'https', since he only has one character in there. Quantifiers quantify the one character (or character class or subexpression) directly preceding them. So:
https?
would match 'http' or 'https' just fine.
A: regexp to get the URL path without the file.
url = 'http://domain/dir1/dir2/somefile'
url.scan(/^(http:\/\/[^\/]+)((?:\/[^\/]+)+(?=\/))?\/?(?:[^\/]+)?$/i).to_s
It can be useful for adding a relative path to this url.
A: The regex to do full parsing is quite horrendous. I've included named backreferences for legibility, and broken each part into separate lines, but it still looks like this:
^(?:(?P<protocol>\w+(?=:\/\/))(?::\/\/))?
(?:(?P<host>(?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^\/?#:]+)(?::(?P<port>[0-9]+))?)\/)?
(?:(?P<path>(?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^?#])+)\/)?
(?P<file>(?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^?#])+)
(?:\?(?P<querystring>(?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^#])+))?
(?:#(?P<fragment>.*))?$
The thing that requires it to be so verbose is that except for the protocol or the port, any of the parts can contain HTML entities, which makes delineation of the fragment quite tricky. So in the last few cases - the host, path, file, querystring, and fragment, we allow either any html entity or any character that isn't a ? or #. The regex for an html entity looks like this:
$htmlentity = "&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);"
When that is extracted (I used a mustache syntax to represent it), it becomes a bit more legible:
^(?:(?P<protocol>(?:ht|f)tps?|\w+(?=:\/\/))(?::\/\/))?
(?:(?P<host>(?:{{htmlentity}}|[^\/?#:])+(?::(?P<port>[0-9]+))?)\/)?
(?:(?P<path>(?:{{htmlentity}}|[^?#])+)\/)?
(?P<file>(?:{{htmlentity}}|[^?#])+)
(?:\?(?P<querystring>(?:{{htmlentity}};|[^#])+))?
(?:#(?P<fragment>.*))?$
In JavaScript, of course, you can't use named backreferences, so the regex becomes
^(?:(\w+(?=:\/\/))(?::\/\/))?(?:((?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^\/?#:]+)(?::([0-9]+))?)\/)?(?:((?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^?#])+)\/)?((?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^?#])+)(?:\?((?:(?:&(?:amp|apos|gt|lt|nbsp|quot|bull|hellip|[lr][ds]quo|[mn]dash|permil|\#[1-9][0-9]{1,3}|[A-Za-z][0-9A-Za-z]+);)|[^#])+))?(?:#(.*))?$
and in each match, the protocol is \1, the host is \2, the port is \3, the path \4, the file \5, the querystring \6, and the fragment \7.
A: //USING REGEX
/**
* Parse URL to get information
*
* @param url the URL string to parse
* @return parsed the URL parsed or null
*/
var UrlParser = function (url) {
"use strict";
var regx = /^(((([^:\/#\?]+:)?(?:(\/\/)((?:(([^:@\/#\?]+)(?:\:([^:@\/#\?]+))?)@)?(([^:\/#\?\]\[]+|\[[^\/\]@#?]+\])(?:\:([0-9]+))?))?)?)?((\/?(?:[^\/\?#]+\/+)*)([^\?#]*)))?(\?[^#]+)?)(#.*)?/,
matches = regx.exec(url),
parser = null;
if (null !== matches) {
parser = {
href : matches[0],
withoutHash : matches[1],
url : matches[2],
origin : matches[3],
protocol : matches[4],
protocolseparator : matches[5],
credhost : matches[6],
cred : matches[7],
user : matches[8],
pass : matches[9],
host : matches[10],
hostname : matches[11],
port : matches[12],
pathname : matches[13],
segment1 : matches[14],
segment2 : matches[15],
search : matches[16],
hash : matches[17]
};
}
return parser;
};
var parsedURL=UrlParser(url);
console.log(parsedURL);
A: I tried this regex for parsing url partitions:
^((http[s]?|ftp):\/)?\/?([^:\/\s]+)(:([^\/]*))?((\/?(?:[^\/\?#]+\/+)*)([^\?#]*))(\?([^#]*))?(#(.*))?$
URL: https://www.google.com/my/path/sample/asd-dsa/this?key1=value1&key2=value2
Matches:
Group 1. 0-7 https:/
Group 2. 0-5 https
Group 3. 8-22 www.google.com
Group 6. 22-50 /my/path/sample/asd-dsa/this
Group 7. 22-46 /my/path/sample/asd-dsa/
Group 8. 46-50 this
Group 9. 50-74 ?key1=value1&key2=value2
Group 10. 51-74 key1=value1&key2=value2
A: The best answer suggested here didn't work for me because my URLs also contain a port.
However modifying it to the following regex worked for me:
^((http[s]?|ftp):\/)?\/?([^:\/\s]+)(:\d+)?((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$
A: For browser/Node.js environments there is a built-in URL class, which seems to share the same signature in both; check the respective documentation for your case.
https://nodejs.org/api/url.html#urlhost
https://developer.mozilla.org/en-US/docs/Web/API/URL
This is how it may be used though.
let url = new URL('https://test.example.com/cats?name=foofy')
url.protocol; // https:
url.hostname; // test.example.com
url.pathname; // /cats
url.search; // ?name=foofy
let params = url.searchParams
let name = params.get('name');// always string I think so parse accordingly
for more on parameters also see https://developer.mozilla.org/en-US/docs/Web/API/URL/searchParams
A: String s = "https://www.thomas-bayer.com/axis2/services/BLZService?wsdl";
String regex = "(^http.?://)(.*?)([/\\?]{1,})(.*)";
System.out.println("1: " + s.replaceAll(regex, "$1"));
System.out.println("2: " + s.replaceAll(regex, "$2"));
System.out.println("3: " + s.replaceAll(regex, "$3"));
System.out.println("4: " + s.replaceAll(regex, "$4"));
Will provide the following output:
1: https://
2: www.thomas-bayer.com
3: /
4: axis2/services/BLZService?wsdl
If you change the URL to
String s = "https://www.thomas-bayer.com?wsdl=qwerwer&ttt=888";
the output will be the following :
1: https://
2: www.thomas-bayer.com
3: ?
4: wsdl=qwerwer&ttt=888
enjoy..
Yosi Lev
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "154"
} |
Q: How can I discover the "path" of an embedded resource? I am storing a PNG as an embedded resource in an assembly. From within the same assembly I have some code like this:
Bitmap image = new Bitmap(typeof(MyClass), "Resources.file.png");
The file, named "file.png" is stored in the "Resources" folder (within Visual Studio), and is marked as an embedded resource.
The code fails with an exception saying:
Resource MyNamespace.Resources.file.png cannot be found in class MyNamespace.MyClass
I have identical code (in a different assembly, loading a different resource) which works. So I know the technique is sound. My problem is I end up spending a lot of time trying to figure out what the correct path is. If I could simply query (eg. in the debugger) the assembly to find the correct path, that would save me a load of headaches.
A: I use the following method to grab embedded resources:
protected static Stream GetResourceStream(string resourcePath)
{
Assembly assembly = Assembly.GetExecutingAssembly();
List<string> resourceNames = new List<string>(assembly.GetManifestResourceNames());
resourcePath = resourcePath.Replace(@"/", ".");
resourcePath = resourceNames.FirstOrDefault(r => r.Contains(resourcePath));
if (resourcePath == null)
throw new FileNotFoundException("Resource not found");
return assembly.GetManifestResourceStream(resourcePath);
}
I then call this with the path in the project:
GetResourceStream(@"DirectoryPathInLibrary/Filename");
A: I find myself forgetting how to do this every time as well so I just wrap the two one-liners that I need in a little class:
public class Utility
{
/// <summary>
/// Takes the full name of a resource and loads it in to a stream.
/// </summary>
/// <param name="resourceName">Assuming an embedded resource is a file
/// called info.png and is located in a folder called Resources, it
/// will be compiled in to the assembly with this fully qualified
/// name: Full.Assembly.Name.Resources.info.png. That is the string
/// that you should pass to this method.</param>
/// <returns></returns>
public static Stream GetEmbeddedResourceStream(string resourceName)
{
return Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
}
/// <summary>
/// Get the list of all emdedded resources in the assembly.
/// </summary>
/// <returns>An array of fully qualified resource names</returns>
public static string[] GetEmbeddedResourceNames()
{
return Assembly.GetExecutingAssembly().GetManifestResourceNames();
}
}
A: The name of the resource is the name space plus the "pseudo" name space of the path to the file. The "pseudo" name space is made by the sub folder structure using \ (backslashes) instead of . (dots).
public static Stream GetResourceFileStream(String nameSpace, String filePath)
{
String pseudoName = filePath.Replace('\\', '.');
Assembly assembly = Assembly.GetExecutingAssembly();
return assembly.GetManifestResourceStream(nameSpace + "." + pseudoName);
}
The following call:
GetResourceFileStream("my.namespace", "resources\\xml\\my.xml")
will return the stream of my.xml located in the folder-structure resources\xml in the name space: my.namespace.
A: This will get you a string array of all the resources:
System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceNames();
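For example, dumping those names to the debug output makes the exact resource path obvious (a quick sketch):
foreach (string name in System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceNames())
    System.Diagnostics.Debug.WriteLine(name);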
A: I'm guessing that your class is in a different namespace. The canonical way to solve this would be to use the resources class and a strongly typed resource:
ProjectNamespace.Properties.Resources.file
Use the IDE's resource manager to add resources.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118"
} |
Q: Notify Developer of a "DO NOT USE" Method OK, I know what you're thinking, "why write a method you do not want people to use?" Right?
Well, in short, I have a class that needs to be serialized to XML. In order for the XmlSerializer to do its magic, the class must have a default, empty constructor:
public class MyClass
{
public MyClass()
{
// required for xml serialization
}
}
So, I need to have it, but I don't want people to use it, so is there any attribute that can be use to mark the method as "DO NOT USE"?
I was thinking of using the Obsolete attribute (since this can stop the build), but that just seems kinda "wrong", is there any other way of doing this, or do I need to go ahead and bite the bullet? :)
Update
OK, I have accepted Keith's answer, since I guess in my heart of hearts, I totally agree. This is why I asked the question in the first place, I don't like the notion of having the Obsolete attribute.
However...
There is still a problem, while we are being notified in intellisense, ideally, we would like to break the build, so is there any way to do this? Perhaps create a custom attribute?
More focused question has been created here.
A: throw new ISaidDoNotUseException();
A: I would actually be inclined to disagree with everyone that is advocating the use of the ObsoleteAttribute as the MSDN documentation says that:
Marking an element as obsolete informs the users that the element will be removed in future versions of the product.
Since the generic constructors for XML serialization should not be removed from the application I wouldn't apply it just in case a maintenance developer down the road is not familiar with how XML serialization works.
I have actually been using Keith's method of just noting that the constructor is used for serialization in XML documentation so that it shows up in Intellisense.
A: You could build your own Attribute derived class, say NonCallableAttribute to qualify methods, and then add to your build/CI code analysis task the check to monitor if any code is using those methods.
In my opinion, you really cannot force developers to not use the method, but you could detect when someone broke the rule as soon as possible and fix it.
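A minimal sketch of such an attribute might look like this (the names are illustrative, and the build/CI check that consumes it is left to your tooling):
[AttributeUsage(AttributeTargets.Constructor | AttributeTargets.Method)]
public sealed class NonCallableAttribute : Attribute
{
    public NonCallableAttribute(string reason)
    {
        Reason = reason;
    }

    public string Reason { get; private set; }
}

public class MyClass
{
    [NonCallable("Required only for XML serialization")]
    public MyClass()
    {
    }
}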
A: Prior to VS2013 you could use:
[System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Never)]
so that it doesn't show up in IntelliSense. If the consumer still wants to use it they can, but it won't be as discoverable.
Keith's point about over-engineering still stands though.
Since VS2013 this feature has been removed. As noted in https://github.com/dotnet/roslyn/issues/37478 this was "by design" and apparently will not be brought back.
A: I read the heading and immediately thought "obsolete attribute". How about
/// <summary>
/// do not use
/// </summary>
/// <param name="item">don't pass it anything -- you shouldn't use it.</param>
/// <returns>nothing - you shouldn't use it</returns>
public bool Include(T item) {
....
A: Nowadays you can use code analyzers for such needs - thanks to the Roslyn compiler in modern .NET.
Either you can write your own code analyzer. Here are some tips to start with:
*
*https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/tutorials/how-to-write-csharp-analyzer-code-fix
*https://andrewlock.net/creating-a-roslyn-analyzer-in-visual-studio-2017/
*https://www.meziantou.net/writing-a-roslyn-analyzer.htm
Or use an already existing one - this is the way I have chosen for my needs:
*
*BannedApiAnalyzers - this one is exactly for the "DO NOT USE" warning
Here is another nice "homepage" for Roslyn Analyzers:
Cybermaxs/awesome-analyzers: A curated list of .NET Compiler Platform ("Roslyn") diagnostic analyzers and code fixes. Everyone can contribute here!
https://github.com/Cybermaxs/awesome-analyzers
A: If a class is [Serializable] (i.e. it can be copied around the place as needed) the param-less constructor is needed to deserialise.
I'm guessing that you want to force your code's access to pass defaults for your properties to a parameterised constructor.
Basically you're saying that it's OK for the XmlSerializer to make a copy and then set properties, but you don't want your own code to.
To some extent I think this is over-designing.
Just add XML comments that detail what properties need initialising (and what to).
Don't use [Obsolete], because it isn't. Reserve that for genuinely deprecated methods.
A: Separate your serializable object from your domain object.
A: ObsoleteAttribute will probably work in your situation - you can even cause the build to break if that method is used.
Since obsolete warnings occur at compile time, and since the reflection needed for serialization occurs at runtime, marking that method obsolete won't break serialization, but will warn developers that the method is not there to be used.
A: What you're looking for is the ObsoleteAttribute class:
using System;
public sealed class App {
static void Main() {
// The line below causes the compiler to issue a warning:
// 'App.SomeDeprecatedMethod()' is obsolete: 'Do not call this method.'
SomeDeprecatedMethod();
}
// The method below is marked with the ObsoleteAttribute.
// Any code that attempts to call this method will get a warning.
[Obsolete("Do not call this method.")]
private static void SomeDeprecatedMethod() { }
}
A: Yep there is.
I wrote this blogpost about it Working with the designer.
And here is the code:
public class MyClass
{
[Obsolete("reason", true)]
public MyClass()
{
// required for xml serialization
}
}
A: I'm using the ObsoleteAttribute.
But you can also add some comments, of course.
And finally, remove it completely if you can (when you don't have to maintain compatibility with something old). That's the best way.
A: Wow, that problem is bugging me too.
You also need default constructors for NHibernate, but I want to force people to NOT use C# 3.0 object initializers so that classes go through constructor code.
A: There is already quite a lot of good answers.
But I think the best option is to use serializer that can use parametrized constructor.
It is not serializer you need but for example Entity Framework Core knows how to use parametrized constructors.
Entity Framework Core documentation:
If EF Core finds a parameterized constructor with parameter names and
types that match those of mapped properties, then it will instead call
the parameterized constructor with values for those properties and
will not set each property explicitly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Using VLOOKUP in an array formula on Google Spreadsheets Effectively I want to give numeric scores to alphabetic grades and sum them. In Excel, putting the LOOKUP function into an array formula works:
{=SUM(LOOKUP(grades, scoringarray))}
With the VLOOKUP function this does not work (only gets the score for the first grade). Google Spreadsheets does not appear to have the LOOKUP function and VLOOKUP fails in the same way using:
=SUM(ARRAYFORMULA(VLOOKUP(grades, scoresarray, 2, 0)))
or
=ARRAYFORMULA(SUM(VLOOKUP(grades, scoresarray, 2, 0)))
Is it possible to do this (but I have the syntax wrong)? Can you suggest a method that allows having the calculation in one simple cell like this rather than hiding the lookups somewhere else and summing them afterwards?
A: I'm afraid I think the answer is no. From the help text on
http://docs.google.com/support/spreadsheets/bin/answer.py?answer=71291&query=arrayformula&topic=&type=
The real power of ARRAYFORMULA comes when you take the result from one of those computations and wrap it inside a formula that does take array or range arguments: SUM, MAX, MIN, CONCATENATE,
As vlookup takes a single cell to lookup (in the first argument) I don't think you can get it to work, without using a separate range of lookups.
A:
Google Spreadsheets does not appear to have the LOOKUP function
Presumably it didn't then, but it does now:
grades Sheet1!A2:A4
scoringarray Sheet1!A2:B4
A: I still can't see the formulae in your example (just values), but that is exactly what I'm trying to do in terms of the result; obviously I can already do it "by the side" and sum separately - the key for me is doing it in one cell.
I have looked at it again this morning - using the MATCH function for the lookup works in an array formula. But then the INDEX function does not. I have also tried using it with OFFSET and INDIRECT without success. Finally, the CHOOSE function does not seem to accept a cell range as its list to choose from - the range degrades to a single value (the first cell in the range). It should also be noted that the CHOOSE function only accepts 30 values to choose from (according to the documentation). All very annoying. However, I do now have a working solution in one cell: using the CHOOSE function and explicitly listing the result cells one by one in the arguments like this:
=ARRAYFORMULA(SUM(CHOOSE(MATCH(D1:D8,Lookups!$A$1:$A$3,0),
Lookups!$B$1,Lookups!$B$2,Lookups!$B$3)))
Obviously this doesn't extend very well but hopefully the lookup tables are by nature quite fixed. For larger lookup tables it's a pain to type all the cells individually and some people may exceed the limit of 30 cells.
I would certainly welcome a more elegant solution!
A: I know this thread is quite old, but I'd been struggling with this same problem for some time. I finally came across a solution (well, Frankensteined one together). It's only slightly more elegant, but should be able to work with large data sets without trouble.
The solution uses the following:
=ARRAYFORMULA(SUM(INDIRECT(ADDRESS(MATCH(), MATCH()))))
as a surrogate for the vlookup function.
I hope this helps someone!
A: You can do it easily by hardcoding the lookup table inside the formula, like this:
=SUM(IFERROR(ARRAYFORMULA(VLOOKUP(A2:A, {{"A", 6};
{"B", 5};
{"C", 4};
{"D", 3};
{"E", 2};
{"F", 1}}, 2, 0)), ))
or you can use some side cells with rules:
=SUM(IFERROR(ARRAYFORMULA(VLOOKUP(A2:A, E2:F, 2, 0)), ))
alternatives: https://webapps.stackexchange.com/a/123741/186471
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: MFC resources / links I am about to reenter the MFC world after years away for a new job. What resources do people recommend for refreshing the memory? I have been doing mainly C# recently.
Also any MFC centric websites or blogs that people recommend?
A: The best: The Code Project
A: *
*For blogs: Your best bet would be the Visual C++ Team Blog.
*For books: Programming Windows with MFC is one of the best book on the subject.
*For tutorials: Simply search google for various tutorials on MFC.
A: I would highly recommend my all-time favorite book: MFC Internals: Inside the Microsoft© Foundation Class Architecture
It is not a 'how-to' book — it is a 'how does it work' book.
A: There's lots of useful information here:
http://www.flounder.com/mvp_tips.htm
A: It's been a long time since I did any MFC, but back then the best resource used to be
"MFC Internals", plus debugging into the MFC code to find out what happens.
Samples used to be available from Code Project to quickly get you going.
A: The vital "how do I?" book is http://www.amazon.com/gp/reader/0201185377/ref=sib_dp_pt#reader-link
Codeproject is also invaluable, although many of the 3rd party controls there nowhave counterparts in the new MFC feature pack.
A: Books are one thing, but I always found that practice was the key with MFC. CodeGuru was my favourite destination to answer anything MFC-related.
There's also that new website. What's it called...that's it - StackOverflow!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Web server statics repository -or- ZFS vs. NTFS My goal is to maintain a web file server separately from my main ASP.NET application server for better scalability. The web file server will store a lot of files downloaded by users.
So the question is: is it worth adopting FreeBSD + Apache + ZFS, or will good old IIS be convenient enough?
A: I understand you will serve only static files. In this case, lightweight HTTP servers will give you a higher performance for a given machine. The following are well known:
*
*Lighttpd
*Thttpd
*Nginx
Many more are listed on Wikipedia. There's a more recent article on IBM DeveloperWorks.
A: It all depends on your skill level and how much load you are getting on your servers.
If you have spare (physical) resources and have the technical skills and experience to maintain production machines running different operating systems, I'd recommend going running lighttpd on either Linux or FreeBSD. A light OS install with a static file optimized server will perform faster than Apache or IIS on a heavy OS.
However, unless you are extremely comfortable with these solutions, just stick to IIS on Windows. Move the static files to their own machine if you have sufficient load. If you aren't currently thinking about multiple ASP.Net frontends, there's probably no need to spin off the static files yet unless we're talking multiple gigabytes of files.
A: If you're serving files over the Internet, you might also consider Amazon's S3 service. I've found the rates and reliability to be better than anything I could do (or find) on my own.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I manage OSGi build dependencies? We've embedded an OSGi runtime (Equinox) into our custom client-server application to facilitate plugin development, and so far things are going great. We've been using Eclipse to build plugins due to the built-in manifest editor, dependency management, and export wizard. Using Eclipse to manage builds isn't very conducive to continuous integration via Hudson.
We have OSGi bundles which depend on other OSGi bundles. I'd really hate to hardcode build order in a custom ANT build. We've done this in the past and it's pretty horrible. Is there any build tool that can EASILY manage OSGi dependencies, if not automatically resolve them? Are there any DECENT examples of how to do this?
CLARIFICATION:
The generated build scripts are only usable via Eclipse. They require manually running pieces of Eclipse. We've also got some standard targets which the Eclipse build won't have, and I don't want to modify the generated file since I may regenerate (I know I can do includes, but I want to avoid the Eclipse gen file all together)
Here is my project layout:
/
-PluginA
-PluginB
-PluginC
.
.
.
In using the Eclipse PDE, each plugin has a Manifest, but no build.xml as the PDE does that for me. Hard to automate a gui driven process w/ Hudson. I'd like to setup my own build.xml to build each, BUT there are dependencies and build order issues. These issues are driven by the Manifest files (which describe OSGi imports). For example, PluginC depends on PluginB which depends on PluginA. They must be built in the correct order. I realize that I can manually control the build order, I'm looking for a tool to help automate the build order dependency management.
A: Maven2 all the way; has an Eclipse plugin called m2eclipse to help with managing it, solves exactly the dependency problem and then some. Has a free online book as documentation.
Specifically look at multi-module projects for bundling many components together and have Maven work out the build order and dependencies.
There is also a chapter on the Eclipse integration.
And that is just Eclipse and Maven, next you get some cool goodies for OSGi:
*
*The Apache Felix BND Maven plugin will auto-generate your manifests or at the very least help you
*The PAX OPS4J project and their Maven plugins can be a great help in bootstrapping projects, providing launchers, etc
And just fundamentally, the Maven module model fits perfectly with OSGi's bundle model. We've been building and managing multiple products with hundreds of bundles using Maven for more than 3 years now and it's great.
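As a taste of what that looks like in a pom.xml, here is a hedged fragment for the Felix BND plugin (the package names are placeholders; the module itself would also declare <packaging>bundle</packaging>):
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Export-Package>com.example.pluginb.api</Export-Package>
      <Import-Package>com.example.plugina.*,*</Import-Package>
    </instructions>
  </configuration>
</plugin>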
A: Seconding Maven2. Look into the Tycho plugins for building - they use Eclipse's JDT compiler so it implements all of the OSGi rules at compile-time, the same way Eclipse does at runtime.
Alternatively, the Apache Felix BND plugins also seem popular. I prefer Tycho because it more closely seems to unify the Maven and Eclipse development environments.
A: We use Buckminster. It's a build and assembly framework, which takes care of the resolution of dependencies, the fetching from various repositories, building and packaging of the product.
It's an Eclipse Tools project. It integrates well with PDE.
This means that all the meta-data we use to build the RCP is useful to Buckminster to resolve and build. For example, feature.xml and the Require-Bundle header in the Manifest.MF, .product.
We haven't got any build scripts in each bundle now; we now have a single build per product. Buckminster takes care of walking the dependency graph.
It took a little bit of effort to get our existing cruise-control/ant system working with it, though they (the Buckminster team) have started using Hudson to host the project itself. I believe that their build setup is also available for download.
We're really impressed with it, despite its relative infancy.
We also looked into Pax-Construct but we didn't want to use Maven.
We're also currently looking at Spring DM testing framework to augment the unit testing effort.
A: Closing out some old questions...
Our setup was not conducive to maven due to lack of network connectivity and timing. I know there are offline maven setups, but it was all too much given the time. Hopefully we'll get to use a proper setup when we've got time to reorganize the build process.
The solution involved Ant, BND, and some custom ant tasks. The various bundle dependencies are manually managed. We were already using Ant; BND and custom tasks tied it all together. The custom tasks just made sure our bnd/eclipse projects were in sync.
A: PDE Headless build. It's well documented by Eclipse. If you're building Eclipse plugins, and you want to do it via the command line, the Eclipse PDE headless build is THE way to go.
A: Can you please elaborate where the problem occurs? You mention OSGi bundle dependencies. Is this during runtime? Or during compile-time? In the first case you should consider Declarative Services (see OSGi Spec).
A: We use Hudson combined with PluginBuilder to build our Eclipse-based OSGi bundles/plugins. This builds upon Eclipse's standard PDE process for building plugins. This means using Eclipse as the compiler.
A: I use maven 3.0.2
mvn archetype:generate
select 252 - osgi-archetype
mvn idea:idea
see http://felix.apache.org/site/apache-felix-maven-bundle-plugin-bnd.html
to add your dependencies into the bundle use this short example in the pom.xml
<Export-Package>org.foo.myproject.api</Export-Package>
or
<Import-Package>org.foo.myproject.api</Import-Package>
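For context, those instructions live inside the maven-bundle-plugin configuration, and the module's packaging must be set to bundle. A minimal sketch:
<packaging>bundle</packaging>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <Export-Package>org.foo.myproject.api</Export-Package>
        </instructions>
      </configuration>
    </plugin>
  </plugins>
</build>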
A: Maven does not require internet connectivity! Use the -o switch, for Christ's sake.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: How can I reverse engineer a DirectShow graph? I have a DirectShow graph to render MPEG2/4 movies from a network stream. When I assemble the graph by connecting the pins manually it doesn't render. But when I call Render on the GraphBuilder it renders fine.
Obviously there is some setup step that I'm not performing on some filter in the graph that GraphBuilder is performing.
Is there any way to see debug output from GraphBuilder when it assembles a graph?
Is there a way to dump a working graph to see how it was put together?
Any other ideas for unraveling the mystery that lives in the DirectShow box?
Thanks!
-Z
A: IGraphBuilder::SetLogFile (see http://msdn.microsoft.com/en-us/library/dd390091(v=vs.85).aspx) will give you lots of useful diagnostic information about what happens during graph building. Pass in a file handle (e.g. opened by CreateFile) and cast it to a DWORD_PTR. Call again with NULL to finish logging before you close the file handle.
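Putting that together, a minimal sketch (pGraph is assumed to be your IGraphBuilder pointer; error handling omitted):
HANDLE hLog = CreateFileW(L"C:\\graphbuild.log", GENERIC_WRITE, FILE_SHARE_READ,
                          NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
pGraph->SetLogFile((DWORD_PTR)hLog); // start logging
// ... build the graph, e.g. call Render ...
pGraph->SetLogFile(NULL); // finish logging before closing the handle
CloseHandle(hLog);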
The code in the following blog post for dumping a graph will give you some extra information to interpret the numbers in the log file.
http://rxwen.blogspot.com/2010/04/directshow-debugging-tips.html
A: You can watch the graph you created using GraphEdit, a tool from the DirectShow SDK. In GraphEdit, select File->Connect to remote Graph...
In order to find your graph in the list, you have to register it in the running object table:
void AddToRot( IUnknown *pUnkGraph, DWORD *pdwRegister )
{
IMoniker* pMoniker;
IRunningObjectTable* pROT;
GetRunningObjectTable( 0, &pROT );
WCHAR wsz[256];
swprintf_s( wsz, L"FilterGraph %08p pid %08x", (DWORD_PTR)pUnkGraph, GetCurrentProcessId() );
CreateItemMoniker( L"!", wsz, &pMoniker );
pROT->Register( 0, pUnkGraph, pMoniker, pdwRegister );
// Clean up any COM stuff here ...
}
After destroying your graph, you should remove it from the ROT by calling IRunningObjectTable::Revoke
A: Roman Ryltsov has created a DirectShow Filter Graph Spy tool (http://alax.info/blog/777), a wrapper COM dll over the FilterGraph interface, which logs all the calls to FilterGraph methods.
Also it will add all the created graphs into the Running Object Table (ROT), which you can then visualize using tools like GraphEdit or GraphStudio. Very useful when you need to see what a Windows Media Player graph looks like.
A: There is a detailed MSDN entry on this.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd390650(v=vs.85).aspx
A: You need to:
*
*Register you filter graph to the "Running Objects Table" - ROT - Using the code below
*Connect to your filter graph using GraphEdit (File->Connect to Remote Graph) or even better - With GraphEditPlus
To register your filter graph as a "connectable" graph, call this with your filter graph:
void AddToROT( IUnknown *pUnkGraph, DWORD *pdwRegister )
{
IMoniker * pMoniker;
IRunningObjectTable *pROT;
WCHAR wsz[128];
HRESULT hr;
if (FAILED(GetRunningObjectTable(0, &pROT)))
return;
wsprintfW(wsz, L"FilterGraph %08x pid %08x", (DWORD_PTR)pUnkGraph, GetCurrentProcessId());
hr = CreateItemMoniker(L"!", wsz, &pMoniker);
if (SUCCEEDED(hr))
{
hr = pROT->Register(0, pUnkGraph, pMoniker, pdwRegister);
pMoniker->Release();
}
pROT->Release();
}
And call this before you release the graph:
void RemoveFromROT(DWORD pdwRegister)
{
IRunningObjectTable *pROT;
if (SUCCEEDED(GetRunningObjectTable(0, &pROT)))
{
pROT->Revoke(pdwRegister);
pROT->Release();
}
}
A: Older versions of DirectX (I believe 9a, but not 9b) had a "debug mode" for DirectShow. It would output logs of debug info into the debug console.
So download an older version and set it to debug, then open up DebugView or load graphedt.exe in Visual Studio to see the debug info.
A: You could "save" the graph (serialize it) to a .grf graphedit file, possibly: https://stackoverflow.com/a/10612735/32453
Also it appears that graphedit can "remote attach" to a running graph? http://rxwen.blogspot.com/2010/04/directshow-debugging-tips.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Does MS-SQL support in-memory tables? Recently, I started changing some of our applications to support MS SQL Server as an alternative back end.
One of the compatibility issues I ran into is the use of MySQL's CREATE TEMPORARY TABLE to create in-memory tables that hold data for very fast access during a session with no need for permanent storage.
What is the equivalent in MS SQL?
A requirement is that I need to be able to use the temporary table just like any other, especially JOIN it with the permanent ones.
A: You can declare a "table variable" in SQL Server 2005, like this:
declare @foo table (
Id int,
Name varchar(100)
);
You then refer to it just like a variable:
select * from @foo f
join bar b on b.Id = f.Id
No need to drop it - it goes away when the variable goes out of scope.
A: It is possible with MS SQL Server 2014.
See: http://msdn.microsoft.com/en-us/library/dn133079.aspx
Here is an example of SQL generation code (from MSDN):
-- create a database with a memory-optimized filegroup and a container.
CREATE DATABASE imoltp
GO
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\data\imoltp_mod1') TO FILEGROUP imoltp_mod
ALTER DATABASE imoltp SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO
USE imoltp
GO
-- create a durable (data will be persisted) memory-optimized table
-- two of the columns are indexed
CREATE TABLE dbo.ShoppingCart (
ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
CreatedDate DATETIME2 NOT NULL,
TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON)
GO
-- create a non-durable table. Data will not be persisted, data loss if the server turns off unexpectedly
CREATE TABLE dbo.UserSession (
SessionId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
UserId int NOT NULL,
CreatedDate DATETIME2 NOT NULL,
ShoppingCartId INT,
INDEX ix_UserId NONCLUSTERED HASH (UserId) WITH (BUCKET_COUNT=400000)
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO
A: You can create table variables (in memory), and two different types of temp table:
--visible only to me, in memory (SQL 2000 and above only)
declare @test table (
Field1 int,
Field2 nvarchar(50)
);
--visible only to me, stored in tempDB
create table #test (
Field1 int,
Field2 nvarchar(50)
)
--visible to everyone, stored in tempDB
create table ##test (
Field1 int,
Field2 nvarchar(50)
)
Edit:
Following feedback I think this needs a little clarification.
#table and ##table will always be in TempDB.
@Table variables will normally be in memory, but are not guaranteed to be. SQL decides based on the query plan, and uses TempDB if it needs to.
A: @Keith
This is a common misconception: Table variables are NOT necessarily stored in memory. In fact SQL Server decides whether to keep the variable in memory or to spill it to TempDB. There is no reliable way (at least in SQL Server 2005) to ensure that table data is kept in memory. For more detailed info look here
A: A good blog post here, but basically prefix local temp tables with # and global temp tables with ## - e.g.
CREATE TABLE #localtemp
A: I understand what you're trying to achieve. Welcome to the world of a variety of databases!
SQL server 2000 supports temporary tables created by prefixing a # to the table name, making it a locally accessible temporary table (local to the session), and prefixing ## to the table name for globally accessible temporary tables, e.g. #MyLocalTable and ##MyGlobalTable respectively.
SQL server 2005 and above support both temporary tables (local, global) and table variables - watch out for new functionality on table variables in SQL 2008 and R2! The difference between temporary tables and table variables is not so big but lies in the way the database server handles them.
I would not wish to talk about older versions of SQL server like 7, 6, though I have worked with them and it's where I came from anyway :-)
It’s common to think that table variables always reside in memory but this is wrong. Depending on memory usage and the database server's volume of transactions, a table variable's pages may be pushed out of memory and written to tempdb, and the rest of the processing takes place there (in tempdb).
Please note that tempdb is a database on an instance with no permanent objects in nature, but it’s responsible for handling workloads involving side transactions like sorting and other processing work which is temporary in nature. On the other hand, table variables (usually with smaller data) are kept in memory (RAM), making them faster to access and causing less disk IO on the tempdb drive compared to temporary tables, which always log to tempdb.
Table variables cannot be indexed while temporary tables (both local and global) can be indexed for faster processing in case the amount of data is large, so you know your choice for faster processing of larger data volumes. It's also worth noting that transactions on table variables alone are not logged and can't be rolled back, while those done on temporary tables can be rolled back!
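That rollback difference is easy to demonstrate with a short sketch (run as a single batch):
CREATE TABLE #T (Id int);
DECLARE @T TABLE (Id int);
BEGIN TRAN;
INSERT INTO #T VALUES (1);
INSERT INTO @T VALUES (1);
ROLLBACK;
SELECT COUNT(*) FROM #T; -- 0: the insert was rolled back
SELECT COUNT(*) FROM @T; -- 1: the table variable is unaffected
DROP TABLE #T;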
In summary, table variables are better for smaller data while temporary tables are better for larger data being processed temporarily. If you also want proper transaction control using transaction blocks, table variables are not an option for rolling back transactions so you're better off with temporary tables in this case.
Lastly, temporary tables will always increase disk IO since they always use tempdb while table variables may not increase it, depending on the memory stress levels.
Let me know if you want tips on how to tune your tempdb for much faster performance!
A: CREATE TABLE #tmptablename
Use the hash/pound sign prefix
A: The syntax you want is:
create table #tablename
The # prefix identifies the table as a temporary table.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Is there a Box Plot graph available for Reporting Services 2005? Is there a Box Plot graph, or box and whisker graph available for Reporting Services 2005? From the looks of the documentation there doesn't seem to be one out of the box; so I am wondering if there is a third party that has the graph, or a way to build my own?
A: There definitely isn't a Box Plot built into SSRS 2005, though it's possible that 2008 has one. SSRS 2005 does have a robust extension model. If you can implement a chart in System.Drawing/GDI+, you can make it into a custom report item for SSRS.
There are a few third-party vendors with fairly feature-rich products, but the only one I've ever evaluated was Dundas Chart, which isn't cheap, but gives you about 100x more charting capability than SSRS 2005 built in (for SSRS 2008, Microsoft incorporated a great deal of Dundas's charting technology). I can't say from experience that I know Dundas Chart supports the Box Plot, but this support forum post says so.
A: ZedGraph is a good open source alternative.
A: We implement a box-plot with Dundas Chart for Reporting Services 2005.
Unlike the default chart tool, Dundas allows you to chart multiple values using different chart types on the one chart.
This means that you can create complex graph types by adding different chart types. E.g. you can plot a line, and a floating column chart.
A: Nevron Chart for SSRS has Box and Whiskers and Range charts out of the box for both SSRS 2005 and 2008
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you change the default homepage in a Grails application? What is the configuration setting for modifying the default homepage in a Grails application to no longer be appName/index.gsp? Of course you can set that page to be a redirect but there must be a better way.
A: Add this in UrlMappings.groovy
"/" {
controller = "yourController"
action = "yourAction"
}
By configuring the URLMappings this way, the home-page of the app will be yourWebApp/yourController/yourAction.
(cut/pasted from IntelliGrape Blog)
A: Use the controller, view and action parameters with the following syntax:
class UrlMappings {
static mappings = {
"/" (controller:'dashboard', view: 'index', action: 'index')
"500"(view:'/error')
}
}
A: Simple and Neat
*
*Go to File: grails-app/conf/UrlMappings.groovy.
*Replace the line : "/"(view:"/index") with
"/"(controller:'home', action:"/index").
Home is your controller to run (like in Spring Security you can use 'login') and the action is the Grails view page associated with your controller (in Spring Security, '/auth').
Add redirection of pages as per your application needs.
A: You may try as follows
in the UrlMappings.groovy class which is inside the configuration folder:
class UrlMappings {
static mappings = {
"/$controller/$action?/$id?"{
constraints {
// apply constraints here
}
}
//"/"(view:"/index")
"/" ( controller:'Item', action:'index' ) // Here i have changed the desired action to show the desired page while running the application
"500"(view:'/error')
}
}
hope this helps,
Rubel
A: Edit UrlMappings.groovy
For example, add this rule to handle the root with a HomeController.
"/"(controller:'home')
A: All the answers are correct!
But let's imagine a scenario:
I mapped path "/" with the controller: "Home" and action: "index", so when i access "/app-name/" the controller Home gets executed, but if i type the path "/app-name/home/index", it will still be executed! so there are 2 paths for one resources. it would work until some one finds out "home/index" path.
another thing is if I have a form without any action attribute specified, so by default it will be POST to the same controller and action! so if the form is mapped to "/" path and no action attribute is specified then it will be submitted to the same controller, but this time the path will be "home/index" in your address-bar, not "/", because it's being submitted to the controller/action not to the URI.
To solve this problem what you have to do, is to remove or comment out these lines.
// "/$controller/$action?/$id?(.$format)?"{
// constraints {
// // apply constraints here
// }
// }
So now when you access "/", it will work, but "home/index" will not. But there's one flaw: now you have to map all the paths to the controllers manually by explicitly writing them into the URLMappings file. I guess this would help!
A: If anyone is looking for the answer for grails 3.x, they moved UrlMappings.groovy to grails-app/controllers/appname
As the below answers say, just edit the line starting with "/".
In my case it's:
"/"(controller:"dashboard", view:"/index")
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: What are models for storing tree structures and what are their characteristics? So far I have encountered adjacency list, nested sets and nested intervals as models for storing tree structures in a database. I know these well enough and have migrated trees from one to another.
What are other popular models? What are their characteristics? What are good resources (books, web, etc) on this topic?
I'm not only looking for db storage but would like to expand my knowledge on trees in general. For example, I understand that nested sets/intervals are especially favorable for relational database storage and have asked myself, are they actually a bad choice in other contexts?
A: A variation is where you use a direct hierarchical representation (ie. parent link in node), but also store a path value.
ie. for a directory tree consisting of the following:
C:\
Temp
Windows
System32
You would have the following nodes
Key Name Parent Path
1 C: *1*
2 Temp 1 *1*2*
3 Windows 1 *1*3*
4 System32 3 *1*3*4*
Path is indexed, and will allow you to quickly do a query that picks up a node and all its children, without having to manipulate ranges.
ie. to find C:\Temp and all its children:
WHERE Path LIKE '*1*2*%'
This representation is the only place I can think of where storing id's in a string like this is ok.
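For concreteness, a minimal sketch of the table behind this (SQL Server syntax; names and sizes are illustrative):
CREATE TABLE Node (
    [Key]  int PRIMARY KEY,
    Name   nvarchar(100) NOT NULL,
    Parent int NULL REFERENCES Node ([Key]),
    Path   varchar(900) NOT NULL -- e.g. '*1*3*4*'
);
CREATE INDEX IX_Node_Path ON Node (Path);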
A: The seminal resource for this is chapters 28-30 of SQL for Smarties.
(I've recommended this book so much I figure Celko owes me royalties by now!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: ModalPopupExtender adding scrollbars in SharePoint Whenever I show a ModalPopupExtender on my Sharepoint site, the popup shown creates both horizontal and vertical scrollbars. If you scroll all the way to the end of the page, the scrollbar refreshes, and there is more page to scroll through. Basically, I think the popup is setting its bounds beyond the end of the page. Has anyone run into this? Searching Google, it seems this may be a known problem, but I haven't found a good solution that doesn't include recompiling AJAX, which my boss will not allow.
A: A hacky answer would be to grab the IE Developer Toolbar, find the tag that has the scrollbar, and alter your CSS file to add the overflow:hidden property to it.
A: I assume the TargetControl is of sufficient size to hold everything you put in it? If so, try:
*
*Set CSS overflow:hidden;
*If the target control is a Panel, set scrollbars="none". Otherwise, put it in a panel and try it.
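A minimal sketch of the panel approach (control IDs are illustrative):
<asp:Panel ID="pnlPopup" runat="server" ScrollBars="None" style="overflow:hidden;">
    <!-- popup content here -->
</asp:Panel>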
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: C/C++ source code visualization? Basically I want tools which generate source code visualization like:
*
*function call graph
*dependency graph
*...
A: Doxygen is really excellent for this, although you will need to install GraphViz to get the graphs to draw.
Once you've got everything installed, it's really rather simple to draw the graphs. Make sure you set EXTRACT_ALL and CALL_GRAPH to true and you should be good to go.
The full documentation on this function for doxygen is here.
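For reference, the relevant Doxyfile settings would look something like this (HAVE_DOT tells doxygen that GraphViz's dot tool is available):
EXTRACT_ALL  = YES
HAVE_DOT     = YES
CALL_GRAPH   = YES
CALLER_GRAPH = YES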
A: You can look at different tools for software design and modelling (Rational Rose, Sparx Enterprise Architect, Umbrello, etc). The majority of them have some functionality for reverse modeling from source code, producing UML class diagrams and sometimes even sequence diagrams (and this is very close to a function call graph).
But after you get some pictures of a really big project code base, you could realise that such graphs are rather hard to read and understand. Unfortunately, visualization capabilities for complexity are very limited.
As for me, using a "divide and rule" idiom is a more convenient approach. You can extract different functionality blocks or layers from your code base (just sorting cpp-files into different folders is sometimes enough). Another way is to use some scripts (bash, python) to create simple csv tables with parameters of interest for files, classes or functions (like "number of dependencies", etc.).
A: Try doxygen
Example output from Xerces
A: If you use Visual Studio, the 2010 Ultimate release lets you generate sequence diagrams and dependency graphs. However, the release currently supports only .NET application projects.
The team has gotten lots of interest in supporting C++ in a future release, so you might want to stay tuned.
In the meantime, you can learn more about creating sequence diagrams and dependency diagrams from .NET code in the following topics:
How to: Find Code Using Architecture Explorer: http://msdn.microsoft.com/en-us/library/dd409431%28VS.100%29.aspx
How to: Generate Graph Documents from Code: http://msdn.microsoft.com/en-us/library/dd409453%28VS.100%29.aspx#SeeSpecificSource
How to: Explore Code with Sequence Diagrams: http://msdn.microsoft.com/en-us/library/ee317485%28VS.100%29.aspx
To try the RC release and provide feedback, download it at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=457bab91-5eb2-4b36-b0f4-d6f34683c62a
A: In addition to the tools written above, you may try Understand. But it is not free.
A: I strongly recommend BOUML. It's a free UML modelling application, which:
*
*is extremely fast (fastest UML tool ever created, check out benchmarks),
*has rock solid C++ import support,
*has great SVG export support, which is important, because viewing large graphs in vector format, which scales fast in e.g. Firefox, is very convenient (you can quickly switch between "birds eye" view and class detail view),
*is full featured, and impressively intensively developed (look at the development history; it's hard to believe that such fast progress is possible).
So: import your code into BOUML and view it there, or export to SVG and view it in Firefox.
For the free version:
*
*source is on Github as DoUML
*Installers can be downloaded from http://www.bouml.fr/download.html
A: Might be a duplication, but check out OllyDbg and IDA Pro; also, this website has a whole bunch of resources with some very sexy images.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67"
} |
Q: How to get SpecUnit to run within a TeamCity CI build I am trying to get SpecUnit to run in a continuous integration build using Nant. At the moment the files are in the correct place but no output is generated from SpecUnit.Report.exe. Here is the relevant task from the nant build script:
<echo message="**** Starting SpecUnit report generation ****" />
<copy file="${specunit.exe}" tofile="${output.dir}SpecUnit.Report.exe" />
<exec program="${output.dir}SpecUnit.Report.exe" failonerror="false">
<arg value="${acceptance.tests.assembly}" />
</exec>
Please note:
*
*${specunit.exe} is the full path to where “SpecUnit.Report.exe” is located.
*${output.dir} is the teamcity output directory for the current build agent.
*${acceptance.tests.assembly} is "AcceptanceTests.dll"
Anyone tried this before?
A: You need to specify the full path to the assembly argument I think...
<exec program="${output.dir}SpecUnit.Report.exe" verbose="true">
<arg value="${output.dir}${acceptance.tests.assembly}" />
</exec>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the difference between a temp table and table variable in SQL Server? In SQL Server 2005, we can create temp tables one of two ways:
declare @tmp table (Col1 int, Col2 int);
or
create table #tmp (Col1 int, Col2 int);
What are the differences between these two? I have read conflicting opinions on whether @tmp still uses tempdb, or if everything happens in memory.
In which scenarios does one out-perform the other?
A: The other main difference is that table variables don't have column statistics, whereas temp tables do. This means that the query optimiser doesn't know how many rows are in the table variable (it guesses 1), which can lead to highly non-optimal plans being generated if the table variable actually has a large number of rows.
A: Another difference:
A table var can only be accessed from statements within the procedure that creates it, not from other procedures called by that procedure or nested dynamic SQL (via exec or sp_executesql).
A temp table's scope, on the other hand, includes code in called procedures and nested dynamic SQL.
If the table created by your procedure must be accessible from other called procedures or dynamic SQL, you must use a temp table. This can be very handy in complex situations.
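A quick sketch of the scope difference (run as one batch; names are illustrative):
CREATE TABLE #Scoped (Id int);
EXEC ('INSERT INTO #Scoped VALUES (1)'); -- works: the temp table is visible in the nested scope
DECLARE @Scoped TABLE (Id int);
EXEC ('INSERT INTO @Scoped VALUES (1)'); -- fails: "Must declare the table variable @Scoped"
DROP TABLE #Scoped;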
A: It surprises me that no one mentioned that the key difference between these two is that the temp table supports parallel insert while the table variable doesn't. You should be able to see the difference in the execution plan. And here is the video from SQL Workshops on Channel 9 and the MSDN doc.
This also explains why you should use a table variable for smaller tables, otherwise a temp table, as SQLMenace answered before.
A: There are a few differences between Temporary Tables (#tmp) and Table Variables (@tmp), although using tempdb isn't one of them, as spelt out in the MSDN link below.
As a rule of thumb, for small to medium volumes of data and simple usage scenarios you should use table variables. (This is an overly broad guideline with of course lots of exceptions - see below and following articles.)
Some points to consider when choosing between them:
*
*Temporary Tables are real tables so you can do things like CREATE INDEXes, etc. If you have large amounts of data for which accessing by index will be faster then temporary tables are a good option.
*Table variables can have indexes by using PRIMARY KEY or UNIQUE constraints. (If you want a non-unique index just include the primary key column as the last column in the unique constraint. If you don't have a unique column, you can use an identity column.) SQL 2014 has non-unique indexes too.
*Table variables don't participate in transactions and SELECTs are implicitly with NOLOCK. The transaction behaviour can be very helpful, for instance if you want to ROLLBACK midway through a procedure then table variables populated during that transaction will still be populated!
*Temp tables might result in stored procedures being recompiled, perhaps often. Table variables will not.
*You can create a temp table using SELECT INTO, which can be quicker to write (good for ad-hoc querying) and may allow you to deal with changing datatypes over time, since you don't need to define your temp table structure upfront.
*You can pass table variables back from functions, enabling you to encapsulate and reuse logic much easier (eg make a function to split a string into a table of values on some arbitrary delimiter).
*Using Table Variables within user-defined functions enables those functions to be used more widely (see CREATE FUNCTION documentation for details). If you're writing a function you should use table variables over temp tables unless there's a compelling need otherwise.
*Both table variables and temp tables are stored in tempdb. But table variables (since 2005) default to the collation of the current database versus temp tables which take the default collation of tempdb (ref). This means you should be aware of collation issues if using temp tables and your db collation is different to tempdb's, causing problems if you want to compare data in the temp table with data in your database.
*Global Temp Tables (##tmp) are another type of temp table available to all sessions and users.
Some further reading:
*
*Martin Smith's great answer on dba.stackexchange.com
*MSDN FAQ on difference between the two: https://support.microsoft.com/en-gb/kb/305977
*MDSN blog article: https://learn.microsoft.com/archive/blogs/sqlserverstorageengine/tempdb-table-variable-vs-local-temporary-table
*Article: https://searchsqlserver.techtarget.com/tip/Temporary-tables-in-SQL-Server-vs-table-variables
*Unexpected behaviors and performance implications of temp tables and temp variables: Paul White on SQLblog.com
A: Differences between Temporary Tables (##temp/#temp) and Table Variables (@table) are as:
*
*Table variable (@table) is created in memory, whereas a Temporary table (##temp/#temp) is created in the tempdb database. However, if there is memory pressure, the pages belonging to a table variable may be pushed to tempdb.
*Table variables cannot be involved in transactions, logging or locking. This makes @table faster than #temp. So a table variable is faster than a temporary table.
*Temporary table allows Schema modifications unlike Table variables.
*Temporary tables are visible in the created routine and also in the child routines. Whereas, Table variables are only visible in the created routine.
*Temporary tables are allowed CREATE INDEXes, whereas Table variables aren’t allowed CREATE INDEX; instead they can have an index by using a Primary Key or Unique Constraint.
A: Just looking at the claim in the accepted answer that table variables don't participate in logging.
It seems generally untrue that there is any difference in quantity of logging (at least for insert/update/delete operations to the table itself, though I have since found that there is some small difference in this respect for cached temporary objects in stored procedures due to additional system table updates).
I looked at the logging behaviour against both a @table_variable and a #temp table for the following operations.
*
*Successful Insert
*Multi Row Insert where statement rolled back due to constraint violation.
*Update
*Delete
*Deallocate
The transaction log records were almost identical for all operations.
The table variable version actually has a few extra log entries because it gets an entry added to (and later removed from) the sys.syssingleobjrefs base table, but overall had a few fewer bytes logged purely because the internal name for table variables consumes 236 fewer bytes than for #temp tables (118 fewer nvarchar characters).
Full script to reproduce (best run on an instance started in single user mode and using sqlcmd mode)
:setvar tablename "@T"
:setvar tablescript "DECLARE @T TABLE"
/*
--Uncomment this section to test a #temp table
:setvar tablename "#T"
:setvar tablescript "CREATE TABLE #T"
*/
USE tempdb
GO
CHECKPOINT
DECLARE @LSN NVARCHAR(25)
SELECT @LSN = MAX([Current LSN])
FROM fn_dblog(null, null)
EXEC(N'BEGIN TRAN StartBatch
SAVE TRAN StartBatch
COMMIT
$(tablescript)
(
[4CA996AC-C7E1-48B5-B48A-E721E7A435F0] INT PRIMARY KEY DEFAULT 0,
InRowFiller char(7000) DEFAULT ''A'',
OffRowFiller varchar(8000) DEFAULT REPLICATE(''B'',8000),
LOBFiller varchar(max) DEFAULT REPLICATE(cast(''C'' as varchar(max)),10000)
)
BEGIN TRAN InsertFirstRow
SAVE TRAN InsertFirstRow
COMMIT
INSERT INTO $(tablename)
DEFAULT VALUES
BEGIN TRAN Insert9Rows
SAVE TRAN Insert9Rows
COMMIT
INSERT INTO $(tablename) ([4CA996AC-C7E1-48B5-B48A-E721E7A435F0])
SELECT TOP 9 ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM sys.all_columns
BEGIN TRAN InsertFailure
SAVE TRAN InsertFailure
COMMIT
/*Try and Insert 10 rows, the 10th one will cause a constraint violation*/
BEGIN TRY
INSERT INTO $(tablename) ([4CA996AC-C7E1-48B5-B48A-E721E7A435F0])
SELECT TOP (10) (10 + ROW_NUMBER() OVER (ORDER BY (SELECT 0))) % 20
FROM sys.all_columns
END TRY
BEGIN CATCH
PRINT ERROR_MESSAGE()
END CATCH
BEGIN TRAN Update10Rows
SAVE TRAN Update10Rows
COMMIT
UPDATE $(tablename)
SET InRowFiller = LOWER(InRowFiller),
OffRowFiller =LOWER(OffRowFiller),
LOBFiller =LOWER(LOBFiller)
BEGIN TRAN Delete10Rows
SAVE TRAN Delete10Rows
COMMIT
DELETE FROM $(tablename)
BEGIN TRAN AfterDelete
SAVE TRAN AfterDelete
COMMIT
BEGIN TRAN EndBatch
SAVE TRAN EndBatch
COMMIT')
DECLARE @LSN_HEX NVARCHAR(25) =
CAST(CAST(CONVERT(varbinary,SUBSTRING(@LSN, 1, 8),2) AS INT) AS VARCHAR) + ':' +
CAST(CAST(CONVERT(varbinary,SUBSTRING(@LSN, 10, 8),2) AS INT) AS VARCHAR) + ':' +
CAST(CAST(CONVERT(varbinary,SUBSTRING(@LSN, 19, 4),2) AS INT) AS VARCHAR)
SELECT
[Operation],
[Context],
[AllocUnitName],
[Transaction Name],
[Description]
FROM fn_dblog(@LSN_HEX, null) AS D
WHERE [Current LSN] > @LSN
SELECT CASE
WHEN GROUPING(Operation) = 1 THEN 'Total'
ELSE Operation
END AS Operation,
Context,
AllocUnitName,
COALESCE(SUM([Log Record Length]), 0) AS [Size in Bytes],
COUNT(*) AS Cnt
FROM fn_dblog(@LSN_HEX, null) AS D
WHERE [Current LSN] > @LSN
GROUP BY GROUPING SETS((Operation, Context, AllocUnitName),())
Results
+-----------------------+--------------------+---------------------------+---------------+------+---------------+------+------------------+
| | | | @TV | #TV | |
+-----------------------+--------------------+---------------------------+---------------+------+---------------+------+------------------+
| Operation | Context | AllocUnitName | Size in Bytes | Cnt | Size in Bytes | Cnt | Difference Bytes |
+-----------------------+--------------------+---------------------------+---------------+------+---------------+------+------------------+
| LOP_ABORT_XACT | LCX_NULL | | 52 | 1 | 52 | 1 | |
| LOP_BEGIN_XACT | LCX_NULL | | 6056 | 50 | 6056 | 50 | |
| LOP_COMMIT_XACT | LCX_NULL | | 2548 | 49 | 2548 | 49 | |
| LOP_COUNT_DELTA | LCX_CLUSTERED | sys.sysallocunits.clust | 624 | 3 | 624 | 3 | |
| LOP_COUNT_DELTA | LCX_CLUSTERED | sys.sysrowsets.clust | 208 | 1 | 208 | 1 | |
| LOP_COUNT_DELTA | LCX_CLUSTERED | sys.sysrscols.clst | 832 | 4 | 832 | 4 | |
| LOP_CREATE_ALLOCCHAIN | LCX_NULL | | 120 | 3 | 120 | 3 | |
| LOP_DELETE_ROWS | LCX_INDEX_INTERIOR | Unknown Alloc Unit | 720 | 9 | 720 | 9 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysallocunits.clust | 444 | 3 | 444 | 3 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysallocunits.nc | 276 | 3 | 276 | 3 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.syscolpars.clst | 628 | 4 | 628 | 4 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.syscolpars.nc | 484 | 4 | 484 | 4 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysidxstats.clst | 176 | 1 | 176 | 1 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysidxstats.nc | 144 | 1 | 144 | 1 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysiscols.clst | 100 | 1 | 100 | 1 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysiscols.nc1 | 88 | 1 | 88 | 1 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysobjvalues.clst | 596 | 5 | 596 | 5 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysrowsets.clust | 132 | 1 | 132 | 1 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysrscols.clst | 528 | 4 | 528 | 4 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysschobjs.clst | 1040 | 6 | 1276 | 6 | 236 |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysschobjs.nc1 | 820 | 6 | 1060 | 6 | 240 |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysschobjs.nc2 | 820 | 6 | 1060 | 6 | 240 |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.sysschobjs.nc3 | 480 | 6 | 480 | 6 | |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.syssingleobjrefs.clst | 96 | 1 | | | -96 |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | sys.syssingleobjrefs.nc1 | 88 | 1 | | | -88 |
| LOP_DELETE_ROWS | LCX_MARK_AS_GHOST | Unknown Alloc Unit | 72092 | 19 | 72092 | 19 | |
| LOP_DELETE_ROWS | LCX_TEXT_MIX | Unknown Alloc Unit | 16348 | 37 | 16348 | 37 | |
| LOP_FORMAT_PAGE | LCX_HEAP | Unknown Alloc Unit | 1596 | 19 | 1596 | 19 | |
| LOP_FORMAT_PAGE | LCX_IAM | Unknown Alloc Unit | 252 | 3 | 252 | 3 | |
| LOP_FORMAT_PAGE | LCX_INDEX_INTERIOR | Unknown Alloc Unit | 84 | 1 | 84 | 1 | |
| LOP_FORMAT_PAGE | LCX_TEXT_MIX | Unknown Alloc Unit | 4788 | 57 | 4788 | 57 | |
| LOP_HOBT_DDL | LCX_NULL | | 108 | 3 | 108 | 3 | |
| LOP_HOBT_DELTA | LCX_NULL | | 9600 | 150 | 9600 | 150 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysallocunits.clust | 456 | 3 | 456 | 3 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.syscolpars.clst | 644 | 4 | 644 | 4 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysidxstats.clst | 180 | 1 | 180 | 1 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysiscols.clst | 104 | 1 | 104 | 1 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysobjvalues.clst | 616 | 5 | 616 | 5 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysrowsets.clust | 136 | 1 | 136 | 1 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysrscols.clst | 544 | 4 | 544 | 4 | |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.sysschobjs.clst | 1064 | 6 | 1300 | 6 | 236 |
| LOP_INSERT_ROWS | LCX_CLUSTERED | sys.syssingleobjrefs.clst | 100 | 1 | | | -100 |
| LOP_INSERT_ROWS | LCX_CLUSTERED | Unknown Alloc Unit | 135888 | 19 | 135888 | 19 | |
| LOP_INSERT_ROWS | LCX_INDEX_INTERIOR | Unknown Alloc Unit | 1596 | 19 | 1596 | 19 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysallocunits.nc | 288 | 3 | 288 | 3 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.syscolpars.nc | 500 | 4 | 500 | 4 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysidxstats.nc | 148 | 1 | 148 | 1 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysiscols.nc1 | 92 | 1 | 92 | 1 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysschobjs.nc1 | 844 | 6 | 1084 | 6 | 240 |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysschobjs.nc2 | 844 | 6 | 1084 | 6 | 240 |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.sysschobjs.nc3 | 504 | 6 | 504 | 6 | |
| LOP_INSERT_ROWS | LCX_INDEX_LEAF | sys.syssingleobjrefs.nc1 | 92 | 1 | | | -92 |
| LOP_INSERT_ROWS | LCX_TEXT_MIX | Unknown Alloc Unit | 5112 | 71 | 5112 | 71 | |
| LOP_MARK_SAVEPOINT | LCX_NULL | | 508 | 8 | 508 | 8 | |
| LOP_MODIFY_COLUMNS | LCX_CLUSTERED | Unknown Alloc Unit | 1560 | 10 | 1560 | 10 | |
| LOP_MODIFY_HEADER | LCX_HEAP | Unknown Alloc Unit | 3780 | 45 | 3780 | 45 | |
| LOP_MODIFY_ROW | LCX_CLUSTERED | sys.syscolpars.clst | 384 | 4 | 384 | 4 | |
| LOP_MODIFY_ROW | LCX_CLUSTERED | sys.sysidxstats.clst | 100 | 1 | 100 | 1 | |
| LOP_MODIFY_ROW | LCX_CLUSTERED | sys.sysrowsets.clust | 92 | 1 | 92 | 1 | |
| LOP_MODIFY_ROW | LCX_CLUSTERED | sys.sysschobjs.clst | 1144 | 13 | 1144 | 13 | |
| LOP_MODIFY_ROW | LCX_IAM | Unknown Alloc Unit | 4224 | 48 | 4224 | 48 | |
| LOP_MODIFY_ROW | LCX_PFS | Unknown Alloc Unit | 13632 | 169 | 13632 | 169 | |
| LOP_MODIFY_ROW | LCX_TEXT_MIX | Unknown Alloc Unit | 108640 | 120 | 108640 | 120 | |
| LOP_ROOT_CHANGE | LCX_CLUSTERED | sys.sysallocunits.clust | 960 | 10 | 960 | 10 | |
| LOP_SET_BITS | LCX_GAM | Unknown Alloc Unit | 1200 | 20 | 1200 | 20 | |
| LOP_SET_BITS | LCX_IAM | Unknown Alloc Unit | 1080 | 18 | 1080 | 18 | |
| LOP_SET_BITS | LCX_SGAM | Unknown Alloc Unit | 120 | 2 | 120 | 2 | |
| LOP_SHRINK_NOOP | LCX_NULL | | | | 32 | 1 | 32 |
+-----------------------+--------------------+---------------------------+---------------+------+---------------+------+------------------+
| Total | | | 410144 | 1095 | 411232 | 1092 | 1088 |
+-----------------------+--------------------+---------------------------+---------------+------+---------------+------+------------------+
A:
In which scenarios does one out-perform the other?
For smaller tables (less than 1000 rows) use a table variable, otherwise use a temp table.
A: Consider also that you can often replace both with derived tables which may be faster as well. As with all performance tuning, though, only actual tests against your actual data can tell you the best approach for your particular query.
A: @wcm - actually, to nitpick, the Table Variable isn't RAM only - it can be partially stored on disk.
A temp table can have indexes, whereas a table variable can only have a primary index. If speed is an issue, Table variables can be faster, but obviously if there are a lot of records, or there is a need to search the temp table using a clustered index, then a Temp Table would be better.
Good background article
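For example, a short sketch of adding an explicit index to a temp table (something a table variable won't allow; names are illustrative):
CREATE TABLE #Orders (Id int, CustomerId int);
CREATE INDEX IX_Orders_CustomerId ON #Orders (CustomerId);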
A: *
*Temp table: A temp table is easy to create and to back up data with.
Table variable: But a table variable involves the same effort as when we create a normal table.
*Temp table: Temp table result can be used by multiple users.
Table variable: But the table variable can be used by the current user only.
*Temp table: A temp table will be stored in tempdb. It will generate network traffic. When we have large data in the temp table, it has to work across the database, and a performance issue will exist.
Table variable: But a table variable will store some of the data in physical memory; later, when the size increases, it will be moved to tempdb.
*Temp table: A temp table can do all the DDL operations. It allows creating indexes, dropping, altering, etc.
Table variable: Whereas a table variable won't allow DDL operations; the table variable allows us to create only a clustered index.
*Temp table: A temp table can be used for the current session or globally, so that multiple user sessions can utilize the results in the table.
Table variable: But the table variable can be used only within that program (stored procedure).
*Temp table: When we do DML operations with a temp table, the transactions can be rolled back or committed.
Table variable: But we cannot do that for a table variable.
*Temp table: Functions cannot use temp tables; moreover, we cannot do DML operations on them in functions.
Table variable: But a function allows us to use a table variable; using the table variable we can do that.
*Temp table: The stored procedure will do a recompilation (it can't use the same execution plan) for every subsequent call that uses the temp table.
Table variable: Whereas the table variable won't cause that.
A: For all of you who believe the myth that table variables are in memory only
First, the table variable is NOT necessarily memory resident. Under memory pressure, the pages belonging to a table variable can be pushed out to tempdb.
Read the article here: TempDB:: Table variable vs local temporary table
A: Quote taken from: Professional SQL Server 2012 Internals and Troubleshooting
Statistics
The major difference between temp tables and table variables is that
statistics are not created on table variables. This has two major
consequences, the first of which is that the Query Optimizer uses a
fixed estimation for the number of rows in a table variable
irrespective of the data it contains. Moreover, adding or removing
data doesn’t change the estimation.
Indexes
You can’t create indexes on table variables although you can
create constraints. This means that by creating primary keys or unique
constraints, you can have indexes (as these are created to support
constraints) on table variables. Even if you have constraints, and
therefore indexes that will have statistics, the indexes will not be
used when the query is compiled because they won’t exist at compile
time, nor will they cause recompilations.
Schema Modifications
Schema modifications are possible on temporary
tables but not on table variables. Although schema modifications are
possible on temporary tables, avoid using them because they cause
recompilations of statements that use the tables.
TABLE VARIABLES ARE NOT CREATED IN MEMORY
There is a common misconception that table variables are in-memory structures
and as such will perform quicker than temporary tables. Thanks to a DMV
called sys.dm_db_session_space_usage, which shows tempdb usage by
session, you can prove that’s not the case. After restarting SQL Server to clear the
DMV, run the following script to confirm that your session_id returns 0 for
user_objects_alloc_page_count:
SELECT session_id,
database_id,
user_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id > 50 ;
Now you can check how much space a temporary table uses by running the following
script to create a temporary table with one column and populate it with one row:
CREATE TABLE #TempTable ( ID INT ) ;
INSERT INTO #TempTable ( ID )
VALUES ( 1 ) ;
GO
SELECT session_id,
database_id,
user_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id > 50 ;
The results on my server indicate that the table was allocated one page in tempdb.
Now run the same script, but use a table variable this time:
DECLARE @TempTable TABLE ( ID INT ) ;
INSERT INTO @TempTable ( ID )
VALUES ( 1 ) ;
GO
SELECT session_id,
database_id,
user_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id > 50 ;
Which one to Use?
Whether or not you use temporary tables or table variables should be
decided by thorough testing, but it’s best to lean towards temporary
tables as the default because there are far fewer things that can go
wrong.
I’ve seen customers develop code using table variables because they
were dealing with a small amount of rows, and it was quicker than a
temporary table, but a few years later there were hundreds of
thousands of rows in the table variable and performance was terrible,
so try and allow for some capacity planning when you make your
decision!
A: In SQL Server, temporary tables are stored in tempdb. Local temporary tables are only visible in the current session and will not be visible in another session, though they can be shared between nested stored procedure calls. Global temporary tables are visible to all other sessions and are destroyed when the last connection referencing the table is closed. For example,
Select Dept.DeptName, Dept.DeptId, COUNT(*) as TotalEmployees
into #TempEmpCount
from Tbl_EmpDetails Emp
join Tbl_Dept Dept
on Emp.DeptId = Dept.DeptId
group by DeptName, Dept.DeptId
Table variables are similar to temp tables; a table variable is also created in tempdb. The scope of a table variable is the batch, stored procedure, or statement block in which it is declared. They can be passed as parameters between procedures. The same query can be written using a table variable as follows:
Declare @tblEmployeeCount table
(DeptName nvarchar(20),DeptId int, TotalEmployees int)
Insert @tblEmployeeCount
Select DeptName, Tbl_Dept.DeptId, COUNT(*) as TotalEmployees
from Tbl_EmpDetails
join Tbl_Dept
on Tbl_EmpDetails.DeptId = Tbl_Dept.DeptId
group by DeptName, Tbl_Dept.DeptId
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "422"
} |
Q: Is there a way to have index.html functionality with content hosted on S3? Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing:
/index.html
/favicon.ico
/images/logo.gif
A call to www.example.com/index.html works great! But if one were to call www.example.com/ we'd either get a 403 or a REST object listing XML document depending on how bucket-level ACL was configured.
So, the question: Is there a way to have index.html functionality with content hosted on S3?
A: For people still struggling against this after 3 years, let me add some important information:
The URL for your website (and to which you have to point your DNS) is not
<bucket_name>.s3-us-west-2.amazonaws.com, but
<bucket_name>.s3-website-us-west-2.amazonaws.com.
If you use the first, it will not work as intended, no matter how much you configure the Index document.
For a specific example, consider:
*
*http://www-example-com.s3.amazonaws.com/index.html works.
*http://www-example-com.s3.amazonaws.com/ fails with AccessDenied.
*http://www-example-com.s3-website-us-west-2.amazonaws.com/ works!
To get your true website address, go to your S3 Management Console, select the target bucket, then Properties, then Static Website Hosting. It will show the website URL that will work.
A: You can easily solve it with an Amazon CloudFront link. At Amazon CloudFront you can modify the root object. You can download a manager here: m1.mycloudbuddy.com/downloads.html.
A: Amazon S3 now supports Index Documents
The index document for a bucket can be set to something like index.html. When accessing the root of the site or a sub-directory containing a document of that name that document is returned.
It is extremely easy to do using the aws cli:
aws s3 website $MY_BUCKET_NAME --index-document index.html
You can set the index document from the AWS Management Console:
A: Since It's been long time, this question being asked, and Amazon S3 changing their Interface. I would like to answer with updated screenshots.
We need to enable 'static web hosting' for S3 to serve as web hosting.
- Go to Properties -> click on static web hosting -> Select 'use this bucket to host a website'
- Enter the index document (index.html by default), error document and redirection rules, if any.
As answered in this answer on Stack Overflow, web hosting link would be: http://bucket-name.s3-website-region.amazonaws.com
A: I would suggest reading this thread from 2006 (On Amazon web services developers connection). It seems there's no easy solution to this.
A: Yes. using AWS Cloudfront lets you assign a default file.
A: you can do it using dns webforwards and cloaking. just forward to the complete path of the index.html
www.example.com forwards to http://www.example.com.s3.amazonaws.com and make sure you cloak the output.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38"
} |
Q: Finding a DOI in a document or page The DOI system places basically no useful limitations on what constitutes a reasonable identifier. However, being able to pull DOIs out of PDFs, web pages, etc. is quite useful for citation information, etc.
Is there a reliable way to identify a DOI in a block of text without assuming the 'doi:' prefix? (any language acceptable, regexes preferred, and avoiding false positives a must)
A: Ok, I'm currently extracting thousands of DOIs from free form text (XML) and I realized that my previous approach had a few problems, namely regarding encoded entities and trailing punctuation, so I went on reading the specification and this is the best I could come with.
The DOI prefix shall be composed of a directory indicator followed by
a registrant code. These two components shall be separated by a full
stop (period).
The directory indicator shall be "10". The directory indicator
distinguishes the entire set of character strings (prefix and suffix)
as digital object identifiers within the resolution system.
Easy enough, the initial \b prevents us from "matching" a "DOI" that doesn't start with 10.:
$pattern = '\b(10[.]';
The second element of the DOI prefix shall be the registrant code. The
registrant code is a unique string assigned to a registrant.
Also, all assigned registrant codes are numeric, and at least 4 digits long, so:
$pattern = '\b(10[.][0-9]{4,}';
The registrant code may be further divided into sub-elements for
administrative convenience if desired. Each sub-element of the
registrant code shall be preceded by a full stop.
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*';
The DOI syntax shall be made up of a DOI prefix and a DOI suffix
separated by a forward slash.
However, this isn't absolutely necessary; section 2.2.3 states that uncommon suffix systems may use other conventions (such as 10.1000.123456 instead of 10.1000/123456), but let's cut some slack.
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/';
The DOI name is case-insensitive and can incorporate any printable
characters from the legal graphic characters of Unicode. The DOI
suffix shall consist of a character string of any length chosen by the
registrant. Each suffix shall be unique to the prefix element that
precedes it. The unique suffix can be a sequential number, or it might
incorporate an identifier generated from or based on another system.
Now this is where it gets trickier, from all the DOIs I have processed, I saw the following characters (besides [0-9a-zA-Z] of course) in their suffixes: .-()/:- -- so, while it doesn't exist, the DOI 10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7 is completely plausible.
The logical choice would be to use \S or the [[:graph:]] PCRE POSIX class, so let's do that:
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/\S+'; // or
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/[[:graph:]]+';
Now we have a difficult problem, the [[:graph:]] class is a super-set of the [[:punct:]] class, which includes characters easily found in free text or any markup language: "'&<> among others.
Lets just filter the markup ones for now using a negative lookahead:
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])\S)+'; // or
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])[[:graph:]])+';
The above should cover encoded entities (&), attribute quotes (["']) and open / close tags ([<>]).
Unlike markup languages, free text usually doesn't employ punctuation characters unless they are bounded by at least one space or placed at the end of a sentence, for instance:
This is a long DOI:
10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7!!!
The solution here is to close our capture group and assert another word boundary:
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])\S)+)\b'; // or
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])[[:graph:]])+)\b';
And voilá, here is a demo.
A: I'm sure it's not super-helpful for the OP at this point, but I figured I'd post what I am trying in case anyone else like me stumbles upon this:
(10.(\d)+/(\S)+)
This matches: "10 dot number slash anything-not-whitespace"
But for my use (scraping HTML), this was finding false-positives, so I had to match the above, plus get rid of quotes and greater-than/less-than:
(10.(\d)+/([^(\s\>\"\<)])+)
I'm still testing these out, but I'm feeling hopeful thus far.
A: Here is my go at it:
(10[.][0-9]{4,}[^\s"/<>]*/[^\s"<>]+)
And a couple of valid edge cases where this doesn't fail, but others seem to do:
*
*10.1007/978-3-642-28108-2_19
*10.1007.10/978-3-642-28108-2_19 (fictitious example, see @Ju9OR comment)
*10.1016/S0735-1097(98)00347-7
*10.1579/0044-7447(2006)35\[89:RDUICP\]2.0.CO;2
Also, correctly discards some falsy (X|HT)ML stuff like:
*
*<geo coords="10.4515260,51.1656910"></geo>
A: CrossRef has a recommendation that they tested successfully on 99.3% of DOIs (known to them):
/^10.\d{4,9}/[-._;()/:A-Z0-9]+$/i
A: This is a really old and answered question, but here's another potential substitute.
\b10\.(\d+\.*)+[\/](([^\s\.])+\.*)+\b
This assumes that white space is not part of the DOI.
Haven't tested this for false positives, but it seems to be able to find all the edge cases mentioned on this page.
A: @Silas The sanity checking is a good idea. However, the regex doesn't cover all DOIs. The first element must (currently) be 10, and the second element must (currently) be numeric, but the third element is barely restricted at all:
"Legal characters are the legal graphic characters of Unicode. This specifically excludes the control character ranges 0x00-0x1F and 0x80-0x9F..."
and that's where the real problem lies. In practice, I've never seen whitespace used, but the spec specifically allows for it. Basically, there doesn't seem to be a sensible way of detecting the end of a DOI.
A: The following regex should do the job (Perl regex syntax):
/(10\.\d+\/\d+)/
You could do some additional sanity checking by opening the urls
http://hdl.handle.net/<doi>
and
http://dx.doi.org/<doi>
where <doi> is the candidate DOI,
and testing that you a) get a 200 OK HTTP status, and b) the returned page is not the "DOI not found" page for the service.
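A rough sketch of that check in Python (the requests library is assumed, and the exact "not found" marker text is an assumption, so adjust it to what the service actually returns):
import requests

def doi_resolves(doi):
    # Follows redirects by default; a resolvable DOI should end in a 200 OK.
    r = requests.get("http://dx.doi.org/" + doi, timeout=10)
    return r.status_code == 200 and "DOI Not Found" not in r.text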
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59"
} |
Q: SQL Server - testing the database What tools are people using for testing SQL Server databases?
By this I mean all parts of the database:
*
*configuration
*tables
*column type
*stored procedures
*constraints
Most likely, there is not one tool to do it all.
A: How do you mean "Test the database"?
If you are testing foreign keys, a simple script to insert invalid data is all you should need.
Testing a database could imply a great number of issues. Does it have all the tables? Are the tables correct? Are the indexes in place? Did the latest updates get applied? Has the data been migrated? Is the data even valid? Are the foreign keys correct?
There is a lot to test in a database so you are unlikely to find a simple way to test it. I find that a combination of test stored procedures and some Nunit unit tests do most of the vetting of my databases.
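For instance, a schema check of that kind is only a few lines; here is a minimal sketch of the same idea in Python with pyodbc (my own illustration; the connection string, table and column names are made up):
import pyodbc

# Hypothetical connection string; adjust server/database/auth for your setup
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes")

def test_orders_table_has_expected_columns():
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS "
        "WHERE TABLE_NAME = ?", "Orders")
    columns = {name: dtype for name, dtype in cursor.fetchall()}
    assert columns.get("OrderId") == "int"        # column exists with the right type
    assert columns.get("CreatedAt") == "datetime"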
A: I personally use NHibernate with SqlCe, which provides a "throw-away" database that doesn't need any specialized tear-down after the tests are run.
It also provides a good way to test your nhibernate mappings if applicable.
Here is a link to an article I wrote awhile ago on how to accomplish this: http://www.codeproject.com/KB/database/TDD_and_SqlCE.aspx?display=Print
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What is the "best" way to create a thumbnail using ASP.NET? Story: The user uploads an image that will be added to a photo gallery. As part of the upload process, we need to A) store the image on the web server's hard drive and B) store a thumbnail of the image on the web server's hard drive.
"Best" here is defined as
*
*Relatively easy to implement, understand, and maintain
*Results in a thumbnail of reasonable quality
Performance and high-quality thumbnails are secondary.
A: Using an example above and some from a couple of other places, here is an easy function to just drop in (thanks to Nathanael Jones and others here).
using System.Drawing;
using System.Drawing.Drawing2D;
using System.IO;
public static void ResizeImage(string FileNameInput, string FileNameOutput, double ResizeHeight, double ResizeWidth, ImageFormat OutputFormat)
{
using (System.Drawing.Image photo = new Bitmap(FileNameInput))
{
double aspectRatio = (double)photo.Width / photo.Height;
double boxRatio = ResizeWidth / ResizeHeight;
double scaleFactor = 0;
if (photo.Width < ResizeWidth && photo.Height < ResizeHeight)
{
// keep the image the same size since it is already smaller than our max width/height
scaleFactor = 1.0;
}
else
{
if (boxRatio > aspectRatio)
scaleFactor = ResizeHeight / photo.Height;
else
scaleFactor = ResizeWidth / photo.Width;
}
int newWidth = (int)(photo.Width * scaleFactor);
int newHeight = (int)(photo.Height * scaleFactor);
using (Bitmap bmp = new Bitmap(newWidth, newHeight))
{
using (Graphics g = Graphics.FromImage(bmp))
{
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.SmoothingMode = SmoothingMode.HighQuality;
g.CompositingQuality = CompositingQuality.HighQuality;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
g.DrawImage(photo, 0, 0, newWidth, newHeight);
if (ImageFormat.Png.Equals(OutputFormat))
{
bmp.Save(FileNameOutput, OutputFormat);
}
else if (ImageFormat.Jpeg.Equals(OutputFormat))
{
ImageCodecInfo[] info = ImageCodecInfo.GetImageEncoders();
EncoderParameters encoderParameters;
using (encoderParameters = new System.Drawing.Imaging.EncoderParameters(1))
{
// use jpeg info[1] and set quality to 90
encoderParameters.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 90L);
bmp.Save(FileNameOutput, info[1], encoderParameters);
}
}
}
}
}
}
A: GetThumbnailImage would work, but if you want a little better quality you can specify your image options for the BitMap class and save your loaded image into there. Here is some sample code:
Image photo; // your uploaded image
Bitmap bmp = new Bitmap(resizeToWidth, resizeToHeight);
Graphics graphic = Graphics.FromImage(bmp);
graphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphic.SmoothingMode = SmoothingMode.HighQuality;
graphic.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphic.CompositingQuality = CompositingQuality.HighQuality;
graphic.DrawImage(photo, 0, 0, resizeToWidth, resizeToHeight);
Image imageToSave = bmp;
This provides better quality than GetImageThumbnail would out of the box
A: Here is an extension method in VB.NET for the Image Class
Imports System.Runtime.CompilerServices
Namespace Extensions
''' <summary>
''' Extensions for the Image class.
''' </summary>
''' <remarks>Several usefull extensions for the image class.</remarks>
Public Module ImageExtensions
''' <summary>
''' Extends the image class so that it is easier to get a thumbnail from an image
''' </summary>
''' <param name="Input">Th image that is inputted, not really a parameter</param>
''' <param name="MaximumSize">The maximumsize the thumbnail must be if keepaspectratio is set to true then the highest number of width or height is used and the other is calculated accordingly. </param>
''' <param name="KeepAspectRatio">If set false width and height will be the same else the highest number of width or height is used and the other is calculated accordingly.</param>
''' <returns>A thumbnail as image.</returns>
''' <remarks>
''' <example>Can be used as such.
''' <code>
''' Dim _NewImage as Image
''' Dim _Graphics As Graphics
''' _Image = New Bitmap(100, 100)
''' _Graphics = Graphics.FromImage(_Image)
''' _Graphics.FillRectangle(Brushes.Blue, New Rectangle(0, 0, 100, 100))
''' _Graphics.DrawLine(Pens.Black, 10, 0, 10, 100)
''' Assert.IsNotNull(_Image)
''' _NewImage = _Image.ToThumbnail(10)
''' </code>
''' </example>
''' </remarks>
<Extension()> _
Public Function ToThumbnail(ByVal Input As Image, ByVal MaximumSize As Integer, Optional ByVal KeepAspectRatio As Boolean = True) As Image
Dim ReturnImage As Image
Dim _Callback As Image.GetThumbnailImageAbort = Nothing
Dim _OriginalHeight As Double
Dim _OriginalWidth As Double
Dim _NewHeight As Double
Dim _NewWidth As Double
Dim _NormalImage As Image
Dim _Graphics As Graphics
_NormalImage = New Bitmap(Input.Width, Input.Height)
_Graphics = Graphics.FromImage(_NormalImage)
_Graphics.DrawImage(Input, 0, 0, Input.Width, Input.Height)
_OriginalHeight = _NormalImage.Height
_OriginalWidth = _NormalImage.Width
If KeepAspectRatio = True Then
If _OriginalHeight > _OriginalWidth Then
If _OriginalHeight > MaximumSize Then
_NewHeight = MaximumSize
_NewWidth = _OriginalWidth / _OriginalHeight * MaximumSize
Else
_NewHeight = _OriginalHeight
_NewWidth = _OriginalWidth
End If
Else
If _OriginalWidth > MaximumSize Then
_NewWidth = MaximumSize
_NewHeight = _OriginalHeight / _OriginalWidth * MaximumSize
Else
_NewHeight = _OriginalHeight
_NewWidth = _OriginalWidth
End If
End If
Else
_NewHeight = MaximumSize
_NewWidth = MaximumSize
End If
ReturnImage = _
_NormalImage.GetThumbnailImage(Convert.ToInt32(_NewWidth), Convert.ToInt32(_NewHeight), _Callback, _
IntPtr.Zero)
_NormalImage.Dispose()
_NormalImage = Nothing
_Graphics.Dispose()
_Graphics = Nothing
_Callback = Nothing
Return ReturnImage
End Function
End Module
End Namespace
Sorry the code tag doesn't like vb.net code.
A: I suppose your best solution would be using the GetThumbnailImage from the .NET Image class.
// Example in C#, should be quite alike in ASP.NET
// Assuming filename as the uploaded file
using ( Image bigImage = new Bitmap( filename ) )
{
// Algorithm simplified for purpose of example.
int height = bigImage.Height / 10;
int width = bigImage.Width / 10;
// Now create a thumbnail
    using ( Image smallImage = bigImage.GetThumbnailImage( width,
                               height,
                               new Image.GetThumbnailImageAbort(Abort), IntPtr.Zero) ) // Abort is a parameterless bool method, e.g. static bool Abort() { return false; }
{
smallImage.Save("thumbnail.jpg", ImageFormat.Jpeg);
}
}
A: You can use the Image.GetThumbnailImage function to do it for you.
http://msdn.microsoft.com/en-us/library/system.drawing.image.getthumbnailimage.aspx (.NET 3.5)
http://msdn.microsoft.com/en-us/library/system.drawing.image.getthumbnailimage(VS.80).aspx (.NET 2.0)
public bool ThumbnailCallback()
{
return false;
}
public void Example_GetThumb(PaintEventArgs e)
{
Image.GetThumbnailImageAbort myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
Bitmap myBitmap = new Bitmap("Climber.jpg");
Image myThumbnail = myBitmap.GetThumbnailImage(40, 40, myCallback, IntPtr.Zero);
e.Graphics.DrawImage(myThumbnail, 150, 75);
}
A: Avoid GetThumbnailImage - it will provide very unpredictable results, since it tries to use the embedded JPEG thumbnail if available - even if the embedded thumbnail is entirely the wrong size. DrawImage() is a much better solution.
Wrap your bitmap in a using{} clause - you don't want leaked handles floating around...
Also, you'll want to set your Jpeg encoding quality to 90, which is where GDI+ seems to shine the best:
System.Drawing.Imaging.ImageCodecInfo[] info = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders();
System.Drawing.Imaging.EncoderParameters encoderParameters;
encoderParameters = new System.Drawing.Imaging.EncoderParameters(1);
encoderParameters.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 90L);
thumb.Save(ms, info[1], encoderParameters); // thumb = your resized Bitmap, ms = your output Stream; info[1] is the JPEG encoder
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Calculate distance between two latitude-longitude points? (Haversine formula) How do I calculate the distance between two points specified by latitude and longitude?
For clarification, I'd like the distance in kilometers; the points use the WGS84 system and I'd like to understand the relative accuracies of the approaches available.
A: pip install haversine
Python implementation
Origin is the center of the contiguous United States.
from haversine import haversine, Unit
origin = (39.50, -98.35)
paris = (48.8567, 2.3508)
haversine(origin, paris, unit=Unit.MILES)
To get the answer in kilometers simply set unit=Unit.KILOMETERS (that's the default).
A: There could be a simpler solution, and more correct: The perimeter of earth is 40,000Km at the equator, about 37,000 on Greenwich (or any longitude) cycle. Thus:
pythagoras = function (lat1, lon1, lat2, lon2) {
function sqr(x) {return x * x;}
function cosDeg(x) {return Math.cos(x * Math.PI / 180.0);}
var earthCyclePerimeter = 40000000.0 * cosDeg((lat1 + lat2) / 2.0);
var dx = (lon1 - lon2) * earthCyclePerimeter / 360.0;
var dy = 37000000.0 * (lat1 - lat2) / 360.0;
return Math.sqrt(sqr(dx) + sqr(dy));
};
I agree that it should be fine-tuned as, I myself said that it's an ellipsoid, so the radius to be multiplied by the cosine varies. But it's a bit more accurate. Compared with Google Maps and it did reduce the error significantly.
A: There are some errors in the code provided; I've fixed them below.
All the above answers assume the earth is a sphere. However, a more accurate approximation would be that of an oblate spheroid.
import math

a = 6378.137  # equatorial radius in km
b = 6356.752  # polar radius in km
def Distance(lat1, lons1, lat2, lons2):
lat1=math.radians(lat1)
lons1=math.radians(lons1)
R1=(((((a**2)*math.cos(lat1))**2)+(((b**2)*math.sin(lat1))**2))/((a*math.cos(lat1))**2+(b*math.sin(lat1))**2))**0.5 #radius of earth at lat1
x1=R1*math.cos(lat1)*math.cos(lons1)
y1=R1*math.cos(lat1)*math.sin(lons1)
z1=R1*math.sin(lat1)
lat2=math.radians(lat2)
lons2=math.radians(lons2)
R2=(((((a**2)*math.cos(lat2))**2)+(((b**2)*math.sin(lat2))**2))/((a*math.cos(lat2))**2+(b*math.sin(lat2))**2))**0.5 #radius of earth at lat2
x2=R2*math.cos(lat2)*math.cos(lons2)
y2=R2*math.cos(lat2)*math.sin(lons2)
z2=R2*math.sin(lat2)
return ((x1-x2)**2+(y1-y2)**2+(z1-z2)**2)**0.5
A: I don't like adding yet another answer, but the Google maps API v.3 has spherical geometry (and more). After converting your WGS84 to decimal degrees you can do this:
<script src="http://maps.google.com/maps/api/js?sensor=false&libraries=geometry" type="text/javascript"></script>
distance = google.maps.geometry.spherical.computeDistanceBetween(
new google.maps.LatLng(fromLat, fromLng),
new google.maps.LatLng(toLat, toLng));
No word about how accurate Google's calculations are or even what model is used (though it does say "spherical" rather than "geoid"). By the way, the "straight line" distance will obviously be different from the distance if one travels on the surface of the earth, which is what everyone seems to be presuming.
A: Here is a C# Implementation:
static class DistanceAlgorithm
{
const double PIx = 3.141592653589793;
const double RADIUS = 6378.16;
/// <summary>
/// Convert degrees to Radians
/// </summary>
/// <param name="x">Degrees</param>
/// <returns>The equivalent in radians</returns>
public static double Radians(double x)
{
return x * PIx / 180;
}
/// <summary>
/// Calculate the distance between two places.
/// </summary>
/// <param name="lon1"></param>
/// <param name="lat1"></param>
/// <param name="lon2"></param>
/// <param name="lat2"></param>
/// <returns></returns>
public static double DistanceBetweenPlaces(
double lon1,
double lat1,
double lon2,
double lat2)
{
double dlon = Radians(lon2 - lon1);
double dlat = Radians(lat2 - lat1);
double a = (Math.Sin(dlat / 2) * Math.Sin(dlat / 2)) + Math.Cos(Radians(lat1)) * Math.Cos(Radians(lat2)) * (Math.Sin(dlon / 2) * Math.Sin(dlon / 2));
double angle = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
return angle * RADIUS;
}
}
A: Here is a java implementation of the Haversine formula.
public final static double AVERAGE_RADIUS_OF_EARTH_KM = 6371;
public int calculateDistanceInKilometer(double userLat, double userLng,
double venueLat, double venueLng) {
double latDistance = Math.toRadians(userLat - venueLat);
double lngDistance = Math.toRadians(userLng - venueLng);
double a = Math.sin(latDistance / 2) * Math.sin(latDistance / 2)
+ Math.cos(Math.toRadians(userLat)) * Math.cos(Math.toRadians(venueLat))
* Math.sin(lngDistance / 2) * Math.sin(lngDistance / 2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return (int) (Math.round(AVERAGE_RADIUS_OF_EARTH_KM * c));
}
Note that here we are rounding the answer to the nearest km.
A: You can use the built-in CLLocationDistance to calculate this:
CLLocation *location1 = [[CLLocation alloc] initWithLatitude:latitude1 longitude:longitude1];
CLLocation *location2 = [[CLLocation alloc] initWithLatitude:latitude2 longitude:longitude2];
[self distanceInMetersFromLocation:location1 toLocation:location2]
- (int)distanceInMetersFromLocation:(CLLocation*)location1 toLocation:(CLLocation*)location2 {
CLLocationDistance distanceInMeters = [location1 distanceFromLocation:location2];
return distanceInMeters;
}
In your case if you want kilometers just divide by 1000.
A: As pointed out, an accurate calculation should take into account that the earth is not a perfect sphere. Here are some comparisons of the various algorithms offered here:
geoDistance(50,5,58,3)
Haversine: 899 km
Maymenn: 833 km
Keerthana: 897 km
google.maps.geometry.spherical.computeDistanceBetween(): 900 km
geoDistance(50,5,-58,-3)
Haversine: 12030 km
Maymenn: 11135 km
Keerthana: 10310 km
google.maps.geometry.spherical.computeDistanceBetween(): 12044 km
geoDistance(.05,.005,.058,.003)
Haversine: 0.9169 km
Maymenn: 0.851723 km
Keerthana: 0.917964 km
google.maps.geometry.spherical.computeDistanceBetween(): 0.917964 km
geoDistance(.05,80,.058,80.3)
Haversine: 33.37 km
Maymenn: 33.34 km
Keerthana: 33.40767 km
google.maps.geometry.spherical.computeDistanceBetween(): 33.40770 km
Over small distances, Keerthana's algorithm does seem to coincide with that of Google Maps. Google Maps does not seem to follow any simple algorithm, suggesting that it may be the most accurate method here.
Anyway, here is a Javascript implementation of Keerthana's algorithm:
function geoDistance(lat1, lng1, lat2, lng2){
const a = 6378.137; // equatorial radius in km
const b = 6356.752; // polar radius in km
var sq = x => (x*x);
var sqr = x => Math.sqrt(x);
var cos = x => Math.cos(x);
var sin = x => Math.sin(x);
var radius = lat => sqr((sq(a*a*cos(lat))+sq(b*b*sin(lat)))/(sq(a*cos(lat))+sq(b*sin(lat))));
lat1 = lat1 * Math.PI / 180;
lng1 = lng1 * Math.PI / 180;
lat2 = lat2 * Math.PI / 180;
lng2 = lng2 * Math.PI / 180;
var R1 = radius(lat1);
var x1 = R1*cos(lat1)*cos(lng1);
var y1 = R1*cos(lat1)*sin(lng1);
var z1 = R1*sin(lat1);
var R2 = radius(lat2);
var x2 = R2*cos(lat2)*cos(lng2);
var y2 = R2*cos(lat2)*sin(lng2);
var z2 = R2*sin(lat2);
return sqr(sq(x1-x2)+sq(y1-y2)+sq(z1-z2));
}
A: Here is a typescript implementation of the Haversine formula
static getDistanceFromLatLonInKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
var deg2Rad = deg => {
return deg * Math.PI / 180;
}
var r = 6371; // Radius of the earth in km
var dLat = deg2Rad(lat2 - lat1);
var dLon = deg2Rad(lon2 - lon1);
var a =
Math.sin(dLat / 2) * Math.sin(dLat / 2) +
Math.cos(deg2Rad(lat1)) * Math.cos(deg2Rad(lat2)) *
Math.sin(dLon / 2) * Math.sin(dLon / 2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
var d = r * c; // Distance in km
return d;
}
A: Here is the SQL Implementation to calculate the distance in km,
SELECT UserId, ( 3959 * acos( cos( radians( your latitude here ) ) * cos( radians(latitude) ) *
cos( radians(longitude) - radians( your longitude here ) ) + sin( radians( your latitude here ) ) *
sin( radians(latitude) ) ) ) AS distance FROM user HAVING
distance < 5 ORDER BY distance LIMIT 0 , 5;
For further details on the implementation in your programming language, you can go through the PHP script given here.
A: This script [in PHP] calculates distances between the two points.
public static function getDistanceOfTwoPoints($source, $dest, $unit='K') {
$lat1 = $source[0];
$lon1 = $source[1];
$lat2 = $dest[0];
$lon2 = $dest[1];
$theta = $lon1 - $lon2;
$dist = sin(deg2rad($lat1)) * sin(deg2rad($lat2)) + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * cos(deg2rad($theta));
$dist = acos($dist);
$dist = rad2deg($dist);
$miles = $dist * 60 * 1.1515;
$unit = strtoupper($unit);
if ($unit == "K") {
return ($miles * 1.609344);
}
else if ($unit == "M")
{
return ($miles * 1.609344 * 1000);
}
else if ($unit == "N") {
return ($miles * 0.8684);
}
else {
return $miles;
}
}
A: I needed to calculate a lot of distances between points for my project, so I went ahead and tried to optimize the code I found here. On average, in different browsers my new implementation runs 2 times faster than the most upvoted answer.
function distance(lat1, lon1, lat2, lon2) {
var p = 0.017453292519943295; // Math.PI / 180
var c = Math.cos;
var a = 0.5 - c((lat2 - lat1) * p)/2 +
c(lat1 * p) * c(lat2 * p) *
(1 - c((lon2 - lon1) * p))/2;
return 12742 * Math.asin(Math.sqrt(a)); // 2 * R; R = 6371 km
}
You can play with my jsPerf and see the results here.
Recently I needed to do the same in python, so here is a python implementation:
from math import cos, asin, sqrt, pi
def distance(lat1, lon1, lat2, lon2):
p = pi/180
a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p) * cos(lat2*p) * (1-cos((lon2-lon1)*p))/2
return 12742 * asin(sqrt(a)) #2*R*asin...
And for the sake of completeness: Haversine on Wikipedia.
A: Thanks very much for all this. I used the following code in my Objective-C iPhone app:
const double PIx = 3.141592653589793;
const double RADIO = 6371; // Mean radius of Earth in Km
double convertToRadians(double val) {
return val * PIx / 180;
}
-(double)kilometresBetweenPlace1:(CLLocationCoordinate2D) place1 andPlace2:(CLLocationCoordinate2D) place2 {
double dlon = convertToRadians(place2.longitude - place1.longitude);
double dlat = convertToRadians(place2.latitude - place1.latitude);
double a = pow(sin(dlat / 2), 2) + cos(convertToRadians(place1.latitude)) * cos(convertToRadians(place2.latitude)) * pow(sin(dlon / 2), 2);
double angle = 2 * asin(sqrt(a));
return angle * RADIO;
}
Latitude and Longitude are in decimal. I didn't use min() for the asin() call as the distances that I'm using are so small that they don't require it.
It gave incorrect answers until I passed in the values in Radians - now it's pretty much the same as the values obtained from Apple's Map app :-)
Extra update:
If you are using iOS4 or later then Apple provide some methods to do this so the same functionality would be achieved with:
-(double)kilometresBetweenPlace1:(CLLocationCoordinate2D) place1 andPlace2:(CLLocationCoordinate2D) place2 {
MKMapPoint start, finish;
start = MKMapPointForCoordinate(place1);
finish = MKMapPointForCoordinate(place2);
return MKMetersBetweenMapPoints(start, finish) / 1000;
}
A: This is a simple PHP function that will give a very reasonable approximation (under +/-1% error margin).
<?php
function distance($lat1, $lon1, $lat2, $lon2) {
$pi80 = M_PI / 180;
$lat1 *= $pi80;
$lon1 *= $pi80;
$lat2 *= $pi80;
$lon2 *= $pi80;
$r = 6372.797; // mean radius of Earth in km
$dlat = $lat2 - $lat1;
$dlon = $lon2 - $lon1;
$a = sin($dlat / 2) * sin($dlat / 2) + cos($lat1) * cos($lat2) * sin($dlon / 2) * sin($dlon / 2);
$c = 2 * atan2(sqrt($a), sqrt(1 - $a));
$km = $r * $c;
//echo '<br/>'.$km;
return $km;
}
?>
As said before: the earth is NOT a sphere. It is like an old, old baseball that Mark McGwire decided to practice with - it is full of dents and bumps. The simpler calculations (like this one) treat it like a sphere.
Different methods may be more or less precise according to where you are on this irregular ovoid AND how far apart your points are (the closer they are the smaller the absolute error margin). The more precise your expectation, the more complex the math.
For more info: wikipedia geographic distance
A: Here is an example in Postgres SQL (in km; for a miles version, replace the 1.609344 factor with 0.8684):
CREATE OR REPLACE FUNCTION public.geodistance(alat float, alng float, blat
float, blng float)
RETURNS float AS
$BODY$
DECLARE
v_distance float;
BEGIN
v_distance = asin( sqrt(
sin(radians(blat-alat)/2)^2
+ (
(sin(radians(blng-alng)/2)^2) *
cos(radians(alat)) *
cos(radians(blat))
)
)
) * cast('7926.3352' as float) * cast('1.609344' as float) ;
RETURN v_distance;
END
$BODY$
language plpgsql VOLATILE SECURITY DEFINER;
alter function geodistance(alat float, alng float, blat float, blng float)
owner to postgres;
A:
Java implementation according to the Haversine formula:
double calculateDistance(double latPoint1, double lngPoint1,
double latPoint2, double lngPoint2) {
if(latPoint1 == latPoint2 && lngPoint1 == lngPoint2) {
return 0d;
}
final double EARTH_RADIUS = 6371.0; //km value;
//converting to radians
latPoint1 = Math.toRadians(latPoint1);
lngPoint1 = Math.toRadians(lngPoint1);
latPoint2 = Math.toRadians(latPoint2);
lngPoint2 = Math.toRadians(lngPoint2);
double distance = Math.pow(Math.sin((latPoint2 - latPoint1) / 2.0), 2)
+ Math.cos(latPoint1) * Math.cos(latPoint2)
* Math.pow(Math.sin((lngPoint2 - lngPoint1) / 2.0), 2);
distance = 2.0 * EARTH_RADIUS * Math.asin(Math.sqrt(distance));
return distance; //km value
}
A: I made a custom function in R to calculate the haversine distance (km) between two spatial points using functions available in the R base package.
custom_hav_dist <- function(lat1, lon1, lat2, lon2) {
R <- 6371
Radian_factor <- 0.0174533
lat_1 <- (90-lat1)*Radian_factor
lat_2 <- (90-lat2)*Radian_factor
diff_long <-(lon1-lon2)*Radian_factor
distance_in_km <- 6371*acos((cos(lat_1)*cos(lat_2))+
(sin(lat_1)*sin(lat_2)*cos(diff_long)))
rm(lat1, lon1, lat2, lon2)
return(distance_in_km)
}
Sample output
custom_hav_dist(50.31,19.08,54.14,19.39)
[1] 426.3987
PS: To calculate distances in miles, substitute R in function (6371) with 3958.756 (and for nautical miles, use 3440.065).
A: I post here my working example.
List all points in a table whose distance to a designated point (we use a random point - lat: 45.20327, long: 23.7806) is less than 50 KM, with latitude & longitude, in MySQL (the table fields are coord_lat and coord_long):
List all having DISTANCE<50, in Kilometres (considered Earth radius 6371 KM):
SELECT denumire, (6371 * acos( cos( radians(45.20327) ) * cos( radians( coord_lat ) ) * cos( radians( 23.7806 ) - radians(coord_long) ) + sin( radians(45.20327) ) * sin( radians(coord_lat) ) )) AS distanta
FROM obiective
WHERE coord_lat<>''
AND coord_long<>''
HAVING distanta<50
ORDER BY distanta desc
The above example was tested in MySQL 5.0.95 and 5.5.16 (Linux).
A: In the other answers an implementation in R is missing.
Calculating the distance between two point is quite straightforward with the distm function from the geosphere package:
distm(p1, p2, fun = distHaversine)
where:
p1 = longitude/latitude for point(s)
p2 = longitude/latitude for point(s)
# type of distance calculation
fun = distCosine / distHaversine / distVincentySphere / distVincentyEllipsoid
As the earth is not perfectly spherical, the Vincenty formula for ellipsoids is probably the best way to calculate distances. Thus, in the geosphere package you would then use:
distm(p1, p2, fun = distVincentyEllipsoid)
Of course, you don't necessarily have to use the geosphere package; you can also calculate the distance in base R with a function:
hav.dist <- function(long1, lat1, long2, lat2) {
  # inputs in degrees; convert to radians first
  R <- 6371
  rad <- pi / 180
  long1 <- long1 * rad; lat1 <- lat1 * rad
  long2 <- long2 * rad; lat2 <- lat2 * rad
  diff.long <- (long2 - long1)
  diff.lat <- (lat2 - lat1)
  a <- sin(diff.lat/2)^2 + cos(lat1) * cos(lat2) * sin(diff.long/2)^2
  b <- 2 * asin(pmin(1, sqrt(a)))
  d <- R * b
  return(d)
}
A: To calculate the distance between two points on a sphere you need to do the Great Circle calculation.
There are a number of C/C++ libraries to help with map projection at MapTools if you need to reproject your distances to a flat surface. To do this you will need the projection string of the various coordinate systems.
You may also find MapWindow a useful tool to visualise the points. Also, as it's open source, it's a useful guide to how to use the proj.dll library, which appears to be the core open source projection library.
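If you happen to be in Python, pyproj (the Python binding to that same PROJ library) exposes the geodesic routines directly; a minimal sketch:
from pyproj import Geod

# WGS84 ellipsoid; inv() solves the inverse geodesic problem and returns
# forward azimuth, back azimuth and distance in metres (note the lon, lat order)
geod = Geod(ellps='WGS84')
az12, az21, dist_m = geod.inv(-74.00594, 40.71278, 116.40739, 39.90421)
print(dist_m / 1000.0)  # km, New York -> Beijing, roughly 11000 km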
A: Here is my Java implementation for calculating distance from decimal degrees, after some searching. I used the mean radius of the world (from Wikipedia) in km. If you want the result in miles, then use the world radius in miles.
public static double distanceLatLong2(double lat1, double lng1, double lat2, double lng2)
{
double earthRadius = 6371.0d; // KM: use mile here if you want mile result
double dLat = toRadian(lat2 - lat1);
double dLng = toRadian(lng2 - lng1);
double a = Math.pow(Math.sin(dLat/2), 2) +
Math.cos(toRadian(lat1)) * Math.cos(toRadian(lat2)) *
Math.pow(Math.sin(dLng/2), 2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
return earthRadius * c; // returns result kilometers
}
public static double toRadian(double degrees)
{
return (degrees * Math.PI) / 180.0d;
}
A: Here's the accepted answer implementation ported to Java in case anyone needs it.
package com.project529.garage.util;
/**
* Mean radius.
*/
private static double EARTH_RADIUS = 6371;
/**
* Returns the distance between two sets of latitudes and longitudes in meters.
* <p/>
* Based from the following JavaScript SO answer:
* http://stackoverflow.com/questions/27928/calculate-distance-between-two-latitude-longitude-points-haversine-formula,
* which is based on https://en.wikipedia.org/wiki/Haversine_formula (error rate: ~0.55%).
*/
public double getDistanceBetween(double lat1, double lon1, double lat2, double lon2) {
double dLat = toRadians(lat2 - lat1);
double dLon = toRadians(lon2 - lon1);
double a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
Math.cos(toRadians(lat1)) * Math.cos(toRadians(lat2)) *
Math.sin(dLon / 2) * Math.sin(dLon / 2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
double d = EARTH_RADIUS * c;
return d;
}
public double toRadians(double degrees) {
return degrees * (Math.PI / 180);
}
A: For those looking for an Excel formula based on WGS-84 & GRS-80 standards:
=ACOS(COS(RADIANS(90-Lat1))*COS(RADIANS(90-Lat2))+SIN(RADIANS(90-Lat1))*SIN(RADIANS(90-Lat2))*COS(RADIANS(Long1-Long2)))*6371
Source
A: There is a good example here of calculating distance with PHP: http://www.geodatasource.com/developers/php
function distance($lat1, $lon1, $lat2, $lon2, $unit) {
$theta = $lon1 - $lon2;
$dist = sin(deg2rad($lat1)) * sin(deg2rad($lat2)) + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * cos(deg2rad($theta));
$dist = acos($dist);
$dist = rad2deg($dist);
$miles = $dist * 60 * 1.1515;
$unit = strtoupper($unit);
if ($unit == "K") {
return ($miles * 1.609344);
} else if ($unit == "N") {
return ($miles * 0.8684);
} else {
return $miles;
}
}
A: Here is the VB.NET implementation; it will give you the result in KM or miles based on an Enum value you pass.
Public Enum DistanceType
Miles
KiloMeters
End Enum
Public Structure Position
Public Latitude As Double
Public Longitude As Double
End Structure
Public Class Haversine
Public Function Distance(Pos1 As Position,
Pos2 As Position,
DistType As DistanceType) As Double
Dim R As Double = If((DistType = DistanceType.Miles), 3960, 6371)
Dim dLat As Double = Me.toRadian(Pos2.Latitude - Pos1.Latitude)
Dim dLon As Double = Me.toRadian(Pos2.Longitude - Pos1.Longitude)
Dim a As Double = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) + Math.Cos(Me.toRadian(Pos1.Latitude)) * Math.Cos(Me.toRadian(Pos2.Latitude)) * Math.Sin(dLon / 2) * Math.Sin(dLon / 2)
Dim c As Double = 2 * Math.Asin(Math.Min(1, Math.Sqrt(a)))
Dim result As Double = R * c
Return result
End Function
Private Function toRadian(val As Double) As Double
Return (Math.PI / 180) * val
End Function
End Class
A: I condensed the computation down by simplifying the formula.
Here it is in Ruby:
include Math
EARTH_RADIUS_MI = 3959

# Ruby methods don't see outer local variables (the original lambdas were
# invisible inside the def), so define these as methods instead
def radians(deg)
  deg * PI / 180
end

def coord_radians(c)
  { :lat => radians(c[:lat]), :lng => radians(c[:lng]) }
end

# from/to = { :lat => (latitude_in_degrees), :lng => (longitude_in_degrees) }
def haversine_distance(from, to)
  from, to = coord_radians(from), coord_radians(to)
  cosines_product = cos(to[:lat]) * cos(from[:lat]) * cos(from[:lng] - to[:lng])
  sines_product = sin(to[:lat]) * sin(from[:lat])
  return EARTH_RADIUS_MI * acos(cosines_product + sines_product)
end
A: function getDistanceFromLatLonInKm(lat1,lon1,lat2,lon2,units) {
var R = 6371; // Radius of the earth in km
var dLat = deg2rad(lat2-lat1); // deg2rad below
var dLon = deg2rad(lon2-lon1);
var a =
Math.sin(dLat/2) * Math.sin(dLat/2) +
Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) *
Math.sin(dLon/2) * Math.sin(dLon/2)
;
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c;
var miles = d / 1.609344;
if ( units == 'km' ) {
return d;
} else {
return miles;
}}
Chuck's solution, valid for miles also.
A: In MySQL, use the following function, passing the parameters as POINT(LONG, LAT):
CREATE FUNCTION `distance`(a POINT, b POINT)
RETURNS double
DETERMINISTIC
BEGIN
RETURN
GLength( LineString(( PointFromWKB(a)), (PointFromWKB(b)))) * 100000; -- To Make the distance in meters
END;
A: function getDistanceFromLatLonInKm(position1, position2) {
"use strict";
var deg2rad = function (deg) { return deg * (Math.PI / 180); },
R = 6371,
dLat = deg2rad(position2.lat - position1.lat),
dLng = deg2rad(position2.lng - position1.lng),
a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
+ Math.cos(deg2rad(position1.lat))
* Math.cos(deg2rad(position2.lat))
* Math.sin(dLng / 2) * Math.sin(dLng / 2),
c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return R * c;
}
console.log(getDistanceFromLatLonInKm(
{lat: 48.7931459, lng: 1.9483572},
{lat: 48.827167, lng: 2.2459745}
));
A: Here's another converted to Ruby code:
include Math
#Note: from/to = [lat, long]
def get_distance_in_km(from, to)
radians = lambda { |deg| deg * Math.PI / 180 }
radius = 6371 # Radius of the earth in kilometer
dLat = radians[to[0]-from[0]]
dLon = radians[to[1]-from[1]]
cosines_product = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(radians[from[0]]) * Math.cos(radians[to[0]]) * Math.sin(dLon/2) * Math.sin(dLon/2)
c = 2 * Math.atan2(Math.sqrt(cosines_product), Math.sqrt(1-cosines_product))
return radius * c # Distance in kilometer
end
A: One of the main challenges to calculating distances - especially large ones - is accounting for the curvature of the Earth. If only the Earth were flat, calculating the distance between two points would be as simple as for that of a straight line! The Haversine formula includes a constant (it's the R variable below) that represents the radius of the Earth. Depending on whether you are measuring in miles or kilometers, it would equal 3956 mi or 6367 km respectively.
The basic formula is:
dlon = lon2 - lon1
dlat = lat2 - lat1
a = (sin(dlat/2))^2 + cos(lat1) * cos(lat2) * (sin(dlon/2))^2
c = 2 * atan2( sqrt(a), sqrt(1-a) )
distance = R * c (where R is the radius of the Earth)
R = 6367 km OR 3956 mi
lat1, lon1: The Latitude and Longitude of point 1 (in decimal degrees)
lat2, lon2: The Latitude and Longitude of point 2 (in decimal degrees)
unit: The unit of measurement in which to calculate the results where:
'M' is statute miles (default)
'K' is kilometers
'N' is nautical miles
Sample
function distance(lat1, lon1, lat2, lon2, unit) {
try {
var radlat1 = Math.PI * lat1 / 180
var radlat2 = Math.PI * lat2 / 180
var theta = lon1 - lon2
var radtheta = Math.PI * theta / 180
var dist = Math.sin(radlat1) * Math.sin(radlat2) + Math.cos(radlat1) * Math.cos(radlat2) * Math.cos(radtheta);
dist = Math.acos(dist)
dist = dist * 180 / Math.PI
dist = dist * 60 * 1.1515
if (unit == "K") {
dist = dist * 1.609344
}
if (unit == "N") {
dist = dist * 0.8684
}
return dist
} catch (err) {
console.log(err);
}
}
A: As this is the most popular discussion of the topic I'll add my experience from late 2019-early 2020 here. To add to the existing answers - my focus was to find an accurate AND fast (i.e. vectorized) solution.
Let's start with what is mostly used by answers here - the Haversine approach. It is trivial to vectorize, see example in python below:
import math
import numpy as np

def haversine(lat1, lon1, lat2, lon2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
All args must be of equal length.
Distances are in meters.
Ref:
https://stackoverflow.com/questions/29545704/fast-haversine-approximation-python-pandas
https://ipython.readthedocs.io/en/stable/interactive/magics.html
"""
Radius = 6.371e6
lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])
dlon = lon2 - lon1
dlat = lat2 - lat1
a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2
c = 2 * np.arcsin(np.sqrt(a))
s12 = Radius * c
# initial azimuth in degrees
y = np.sin(lon2-lon1) * np.cos(lat2)
x = np.cos(lat1)*np.sin(lat2) - np.sin(lat1)*np.cos(lat2)*np.cos(dlon)
azi1 = np.arctan2(y, x)*180./math.pi
return {'s12':s12, 'azi1': azi1}
Accuracy-wise, it is the least accurate. Wikipedia states 0.5% relative deviation on average, without citing any sources. My experiments show less of a deviation. Below is a comparison run on 100,000 random points against my library, which should be accurate to millimeter levels:
np.random.seed(42)
lats1 = np.random.uniform(-90,90,100000)
lons1 = np.random.uniform(-180,180,100000)
lats2 = np.random.uniform(-90,90,100000)
lons2 = np.random.uniform(-180,180,100000)
r1 = inverse(lats1, lons1, lats2, lons2)
r2 = haversine(lats1, lons1, lats2, lons2)
print("Max absolute error: {:4.2f}m".format(np.max(r1['s12']-r2['s12'])))
print("Mean absolute error: {:4.2f}m".format(np.mean(r1['s12']-r2['s12'])))
print("Max relative error: {:4.2f}%".format(np.max((r2['s12']/r1['s12']-1)*100)))
print("Mean relative error: {:4.2f}%".format(np.mean((r2['s12']/r1['s12']-1)*100)))
Output:
Max absolute error: 26671.47m
Mean absolute error: -2499.84m
Max relative error: 0.55%
Mean relative error: -0.02%
So on average a 2.5km deviation on 100,000 random pairs of coordinates, which may be good for the majority of cases.
The next option is Vincenty's formulae, which are accurate to millimeters (depending on the convergence criteria) and can be vectorized as well. They do have an issue with convergence near antipodal points. You can make them converge at those points by relaxing the convergence criteria, but then accuracy drops to 0.25% and more. Outside of antipodal points, Vincenty will provide results close to Geographiclib within a relative error of less than 1.e-6 on average.
Geographiclib, mentioned here, is really the current gold standard. It has several implementations and is fairly fast, especially if you are using the C++ version.
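For reference, the Python binding makes the inverse problem a one-liner; a minimal sketch:
from geographiclib.geodesic import Geodesic

# Inverse problem on the WGS84 ellipsoid: 's12' is the distance in metres,
# 'azi1' the initial bearing in degrees
g = Geodesic.WGS84.Inverse(40.71278, -74.00594, 39.90421, 116.40739)
print(g['s12'] / 1000.0)  # km, New York -> Beijing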
Now, if you are planning to use Python for anything above 10k points I'd suggest considering my vectorized implementation. I created a geovectorslib library with a vectorized Vincenty routine for my own needs, which uses Geographiclib as a fallback for near-antipodal points. Below is the comparison vs Geographiclib for 100k points. As you can see it provides up to 20x improvement for inverse and 100x for direct methods for 100k points, and the gap will grow with the number of points. Accuracy-wise it will be within 1.e-5 rtol of Geographiclib.
Direct method for 100,000 points
94.9 ms ± 25 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
9.79 s ± 1.4 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
Inverse method for 100,000 points
1.5 s ± 504 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
24.2 s ± 3.91 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
A: If you want the driving distance/route (posting it here because this is the first result for the distance between two points on google but for most people the driving distance is more useful), you can use Google Maps Distance Matrix Service:
getDrivingDistanceBetweenTwoLatLong(origin, destination) {
return new Observable(subscriber => {
let service = new google.maps.DistanceMatrixService();
service.getDistanceMatrix(
{
origins: [new google.maps.LatLng(origin.lat, origin.long)],
destinations: [new google.maps.LatLng(destination.lat, destination.long)],
travelMode: 'DRIVING'
}, (response, status) => {
if (status !== google.maps.DistanceMatrixStatus.OK) {
console.log('Error:', status);
subscriber.error({error: status, status: status});
} else {
console.log(response);
try {
let valueInMeters = response.rows[0].elements[0].distance.value;
let valueInKms = valueInMeters / 1000;
subscriber.next(valueInKms);
subscriber.complete();
}
catch(error) {
subscriber.error({error: error, status: status});
}
}
});
});
}
A: You could use a module like geolib too:
How to install:
$ npm install geolib
How to use:
import { getDistance } from 'geolib'
const distance = getDistance(
{ latitude: 51.5103, longitude: 7.49347 },
{ latitude: "51° 31' N", longitude: "7° 28' E" }
)
console.log(distance)
Documentation:
https://www.npmjs.com/package/geolib
A: This link might be helpful to you, as it details the use of the Haversine formula to calculate the distance.
Excerpt:
This script [in Javascript] calculates great-circle distances between the two points –
that is, the shortest distance over the earth’s surface – using the
‘Haversine’ formula.
function getDistanceFromLatLonInKm(lat1,lon1,lat2,lon2) {
var R = 6371; // Radius of the earth in km
var dLat = deg2rad(lat2-lat1); // deg2rad below
var dLon = deg2rad(lon2-lon1);
var a =
Math.sin(dLat/2) * Math.sin(dLat/2) +
Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) *
Math.sin(dLon/2) * Math.sin(dLon/2)
;
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c; // Distance in km
return d;
}
function deg2rad(deg) {
return deg * (Math.PI/180)
}
A: The haversine is definitely a good formula for probably most cases; other answers already include it, so I am not going to take up the space. But it is important to note that more than one formula exists, and the choice matters: there is a huge range in the accuracy possible, as well as in the computation time required. The choice of formula requires a bit more thought than a simple no-brainer answer.
This posting from a person at NASA is the best one I found at discussing the options:
http://www.cs.nyu.edu/visual/home/proj/tiger/gisfaq.html
For example, if you are just sorting rows by distance within a 100 mile radius, the flat-earth formula will be much faster than the haversine.
HalfPi = 1.5707963;
R = 3956; /* the radius gives you the measurement unit*/
a = HalfPi - latoriginrad;
b = HalfPi - latdestrad;
u = a * a + b * b;
v = - 2 * a * b * cos(longdestrad - longoriginrad);
c = sqrt(abs(u + v));
return R * c;
Notice there is just one cosine and one square root. Vs 9 of them on the Haversine formula.
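For a concrete illustration, here is a minimal Python translation of that flat-earth formula (my own; it takes degree inputs and converts them, whereas the pseudocode above already assumed radians):
import math

def flat_earth_dist(lat1, lon1, lat2, lon2):
    # Polar-coordinate flat-earth approximation: cheap enough for sorting
    # candidates within a ~100 mile radius, not meant for long distances
    R = 3956.0  # earth radius in miles
    a = math.pi / 2 - math.radians(lat1)
    b = math.pi / 2 - math.radians(lat2)
    u = a * a + b * b
    v = -2 * a * b * math.cos(math.radians(lon2) - math.radians(lon1))
    return R * math.sqrt(abs(u + v))

print(flat_earth_dist(40.71278, -74.00594, 40.6413, -73.7781))  # NYC -> JFK, ~13 miles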
A: Had an issue with math.deg in LUA... if anyone knows a fix please clean up this code!
In the meantime here's an implementation of the Haversine in LUA (use this with Redis!)
function calcDist(lat1, lon1, lat2, lon2)
lat1= lat1*0.0174532925
lat2= lat2*0.0174532925
lon1= lon1*0.0174532925
lon2= lon2*0.0174532925
dlon = lon2-lon1
dlat = lat2-lat1
a = math.pow(math.sin(dlat/2),2) + math.cos(lat1) * math.cos(lat2) * math.pow(math.sin(dlon/2),2)
c = 2 * math.asin(math.sqrt(a))
dist = 6371 * c -- multiply by 0.621371 to convert to miles
return dist
end
cheers!
A: FSharp version, using miles:
type Location = { Latitude : float; Longitude : float }

let radialDistanceHaversine (location1 : Location) (location2 : Location) : float =
let degreeToRadian degrees = degrees * System.Math.PI / 180.0
let earthRadius = 3959.0
let deltaLat = location2.Latitude - location1.Latitude |> degreeToRadian
let deltaLong = location2.Longitude - location1.Longitude |> degreeToRadian
let a =
(deltaLat / 2.0 |> sin) ** 2.0
+ (location1.Latitude |> degreeToRadian |> cos)
* (location2.Latitude |> degreeToRadian |> cos)
* (deltaLong / 2.0 |> sin) ** 2.0
atan2 (a |> sqrt) (1.0 - a |> sqrt)
* 2.0
* earthRadius
A: Dart lang:
import 'dart:math' show cos, sqrt, asin;
double calculateDistance(LatLng l1, LatLng l2) {
const p = 0.017453292519943295;
final a = 0.5 -
cos((l2.latitude - l1.latitude) * p) / 2 +
cos(l1.latitude * p) *
cos(l2.latitude * p) *
(1 - cos((l2.longitude - l1.longitude) * p)) /
2;
return 12742 * asin(sqrt(a));
}
A: You can calculate it by using Haversine formula which is:
a = sin²(Δφ/2) + cos φ1 ⋅ cos φ2 ⋅ sin²(Δλ/2)
c = 2 ⋅ atan2( √a, √(1−a) )
d = R ⋅ c
An example to calculate distance between two points is given below
Suppose I have to calculate the distance between New Delhi and London; how can I use this formula?
New Delhi coordinates = 28.7041° N, 77.1025° E
London coordinates = 51.5074° N, 0.1278° W
// Number.prototype.toRadians is not built in; define it before use:
if (typeof Number.prototype.toRadians === 'undefined') {
    Number.prototype.toRadians = function() { return this * Math.PI / 180; };
}
var R = 6371e3; // metres
var φ1 = 28.7041.toRadians();
var φ2 = 51.5074.toRadians();
var Δφ = (51.5074-28.7041).toRadians();
var Δλ = (0.1278-77.1025).toRadians();
var a = Math.sin(Δφ/2) * Math.sin(Δφ/2) +
Math.cos(φ1) * Math.cos(φ2) *
Math.sin(Δλ/2) * Math.sin(Δλ/2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a));
var d = R * c; // metres
d = d/1000; // km
A: The functions needed for an accurate calculation of distance between lat-long points are complex, and the pitfalls are many. I would not recommend haversine or other spherical solutions due to the big inaccuracies (the earth is not a perfect sphere). The Vincenty formula is better, but will in some cases throw errors, even when coded correctly.
Instead of coding the functions yourself I suggest using geopy, which has implemented the very accurate geographiclib for distance calculations (paper from author).
#pip install geopy
from geopy.distance import geodesic
NY = [40.71278,-74.00594]
Beijing = [39.90421,116.40739]
print("WGS84: ",geodesic(NY, Beijing).km) #WGS84 is Standard
print("Intl24: ",geodesic(NY, Beijing, ellipsoid='Intl 1924').km) #geopy includes different ellipsoids
print("Custom ellipsoid: ",geodesic(NY, Beijing, ellipsoid=(6377., 6356., 1 / 297.)).km) #custom ellipsoid
#supported ellipsoids:
#model major (km) minor (km) flattening
#'WGS-84': (6378.137, 6356.7523142, 1 / 298.257223563)
#'GRS-80': (6378.137, 6356.7523141, 1 / 298.257222101)
#'Airy (1830)': (6377.563396, 6356.256909, 1 / 299.3249646)
#'Intl 1924': (6378.388, 6356.911946, 1 / 297.0)
#'Clarke (1880)': (6378.249145, 6356.51486955, 1 / 293.465)
#'GRS-67': (6378.1600, 6356.774719, 1 / 298.25)
The only drawback with this library is that it doesn't support vectorized calculations.
For vectorized calculations you can use the new geovectorslib.
#pip install geovectorslib
from geovectorslib import inverse
print(inverse(lats1,lons1,lats2,lons2)['s12'])
lats and lons are numpy arrays. Geovectorslib is very accurate and extremely fast! I haven't found a solution for changing ellipsoids though. The WGS84 ellipsoid is used as standard, which is the best choice for most uses.
A: If you are using Python:
pip install geopy
from geopy.distance import geodesic
origin = (30.172705, 31.526725) # (latitude, longitude) don't confuse
destination = (30.288281, 31.732326)
print(geodesic(origin, destination).meters) # 23576.805481751613
print(geodesic(origin, destination).kilometers) # 23.576805481751613
print(geodesic(origin, destination).miles) # 14.64994773134371
A: Here is the Erlang implementation
lat_lng({Lat1, Lon1}=_Point1, {Lat2, Lon2}=_Point2) ->
P = math:pi() / 180,
R = 6371, % Radius of Earth in KM
A = 0.5 - math:cos((Lat2 - Lat1) * P) / 2 +
math:cos(Lat1 * P) * math:cos(Lat2 * P) * (1 - math:cos((Lon2 - Lon1) * P))/2,
R * 2 * math:asin(math:sqrt(A)).
A: Here's a simple JavaScript function from this link that may be useful. It's somewhat related, but we're using the Google Earth JavaScript plugin instead of Maps:
function getApproximateDistanceUnits(point1, point2) {
var xs = 0;
var ys = 0;
xs = point2.getX() - point1.getX();
xs = xs * xs;
ys = point2.getY() - point1.getY();
ys = ys * ys;
return Math.sqrt(xs + ys);
}
The units, though, are not a distance but a ratio relative to your coordinates. There are other related computations you can substitute for the getApproximateDistanceUnits function; see the link here.
Then I use this function to see if a latitude longitude is within the radius
function isMapPlacemarkInRadius(point1, point2, radi) {
if (point1 && point2) {
return getApproximateDistanceUnits(point1, point2) <= radi;
} else {
return 0;
}
}
point may be defined as
$$.getPoint = function(lati, longi) {
var location = {
x: 0,
y: 0,
getX: function() { return location.x; },
getY: function() { return location.y; }
};
location.x = lati;
location.y = longi;
return location;
};
then you can check whether a point is within a region with a given radius, say:
//put it on the map if within the range of a specified radi assuming 100,000,000 units
var iconpoint = Map.getPoint(pp.latitude, pp.longitude);
var centerpoint = Map.getPoint(Settings.CenterLatitude, Settings.CenterLongitude);
//approx ~200 units to show only half of the globe from the default center radius
if (isMapPlacemarkInRadius(centerpoint, iconpoint, 120)) {
addPlacemark(pp.latitude, pp.longitude, pp.name);
}
else {
otherSidePlacemarks.push({
latitude: pp.latitude,
longitude: pp.longitude,
name: pp.name
});
}
A: //JAVA
public Double getDistanceBetweenTwoPoints(Double latitude1, Double longitude1, Double latitude2, Double longitude2) {
final int RADIUS_EARTH = 6371;
double dLat = getRad(latitude2 - latitude1);
double dLong = getRad(longitude2 - longitude1);
double a = Math.sin(dLat / 2) * Math.sin(dLat / 2) + Math.cos(getRad(latitude1)) * Math.cos(getRad(latitude2)) * Math.sin(dLong / 2) * Math.sin(dLong / 2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
return (RADIUS_EARTH * c) * 1000;
}
private Double getRad(Double x) {
return x * Math.PI / 180;
}
A: I've created this small Javascript LatLng object, might be useful for somebody.
var latLng1 = new LatLng(5, 3);
var latLng2 = new LatLng(6, 7);
var distance = latLng1.distanceTo(latLng2);
Code:
/**
* latLng point
* @param {Number} lat
* @param {Number} lng
* @returns {LatLng}
* @constructor
*/
function LatLng(lat,lng) {
this.lat = parseFloat(lat);
this.lng = parseFloat(lng);
this.__cache = {};
}
LatLng.prototype = {
toString: function() {
return [this.lat, this.lng].join(",");
},
/**
* calculate distance in km to another latLng, with caching
* @param {LatLng} latLng
* @returns {Number} distance in km
*/
distanceTo: function(latLng) {
var cacheKey = latLng.toString();
if(cacheKey in this.__cache) {
return this.__cache[cacheKey];
}
// the fastest way to calculate the distance, according to this jsperf test;
// http://jsperf.com/haversine-salvador/8
// http://stackoverflow.com/questions/27928
var deg2rad = 0.017453292519943295; // === Math.PI / 180
var lat1 = this.lat * deg2rad;
var lng1 = this.lng * deg2rad;
var lat2 = latLng.lat * deg2rad;
var lng2 = latLng.lng * deg2rad;
var a = (
(1 - Math.cos(lat2 - lat1)) +
(1 - Math.cos(lng2 - lng1)) * Math.cos(lat1) * Math.cos(lat2)
) / 2;
var distance = 12742 * Math.asin(Math.sqrt(a)); // Diameter of the earth in km (2 * 6371)
// cache the distance
this.__cache[cacheKey] = distance;
return distance;
}
};
A: function distance($lat1, $lon1, $lat2, $lon2) {
$pi80 = M_PI / 180;
$lat1 *= $pi80; $lon1 *= $pi80; $lat2 *= $pi80; $lon2 *= $pi80;
$dlat = $lat2 - $lat1;
$dlon = $lon2 - $lon1;
$a = sin($dlat / 2) * sin($dlat / 2) + cos($lat1) * cos($lat2) * sin($dlon / 2) * sin($dlon / 2);
$km = 6372.797 * 2 * atan2(sqrt($a), sqrt(1 - $a));
return $km;
}
A: Here's a Scala implementation:
def calculateHaversineDistance(lat1: Double, lon1: Double, lat2: Double, lon2: Double): Double = {
  // the radian values get new names: a `val` cannot shadow the parameter it is defined from
  val lat1Rad = lat1 * math.Pi / 180
  val lon1Rad = lon1 * math.Pi / 180
  val lat2Rad = lat2 * math.Pi / 180
  val lon2Rad = lon2 * math.Pi / 180
  val dlon = lon2Rad - lon1Rad
  val dlat = lat2Rad - lat1Rad
  val a = math.pow(math.sin(dlat / 2), 2) + math.cos(lat1Rad) * math.cos(lat2Rad) * math.pow(math.sin(dlon / 2), 2)
  val c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
  val haversineDistance = 3961 * c // 3961 = radius of earth in miles
  haversineDistance
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1130"
} |
Q: Dynamic Element Names I want to transform an XML document. The source XML looks like this:
<svc:ElementList>
<svc:Element>
<Year>2007</Year>
</svc:Element>
<svc:Element>
<Year>2006</Year>
</svc:Element>
<svc:Element>
<Year>2005</Year>
</svc:Element>
</svc:ElementList>
I want to turn that into:
<ElementList>
<NewTag2007/>
<NewTag2006/>
<NewTag2005/>
</ElementList>
The following line of code isn't working:
<xsl:element name="{concat('NewTag',Element/Year)}"/>
The output is a series of elements that look like this: < NewTag >. (Without the spaces...)
"//Element/Year", "./Element/Year", and "//svc:Element/Year" don't work either. One complication is that the "Element" tag is in the "svc" namespace while the "Year" tag is in the default namespace.
So anyway, am I facing a namespace issue or am I mis-using the "concat()" function?
A: Probably namespace issues and maybe one with current context. For source (with added namespace declaration to make it well-formed xml)
<svc:ElementList xmlns:svc="svc">
<svc:Element>
<Year>2007</Year>
</svc:Element>
<svc:Element>
<Year>2006</Year>
</svc:Element>
<svc:Element>
<Year>2005</Year>
</svc:Element>
</svc:ElementList>
the stylesheet
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:svc="svc"
version="1.0">
<xsl:template match="svc:ElementList">
<xsl:element name="{local-name()}">
<xsl:for-each select="svc:Element">
<xsl:element name="{concat('NewTag', Year)}"/>
</xsl:for-each>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
will give you the output you need. Note that svc:Element needs to be selected using namespace prefixed and that the context when generating the new tags is svc:Element, not svc:ElementList.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Is it worth it to learn a dialect of assembly? My goals are focused on software application development, and maybe web application development, but most likely desktop applications. I'm embarking on a path to becoming more familiar with C/C++, but should I go much lower than that, into assembly? Or would I not have a benefit for my long-term goals?
A: G'day,
I learnt PDP assembler when I did my Elect. Eng. degree in the late '70's. The last dialect of assembler that I really used had four different modes of memory addressing. Last dialect I ooked at had 17 modes!
Not sure what learning assembler really gives you nowadays. Back then it was an essential part of a CS stream in my elect. eng. degree.
As to learning C++ I'd just sit down and work through "Accelerated C++" which approaches C++ in its own right and not as "C with other bits".
As to C, I'd just work through the latest version of "C Programming Lanuage" (a.k.a.) K'n'R
Hope this helps.
cheers,
Rob
Now if you'd asked about nano-programming... (-:
A: "Is it worth it to learn a dialect of assembly?"
I have programmed assembly professionally. M68k running a fax machine and scanner. Also windows VxDs (virtual device drivers) back in the windows 3/3.1 days before they had a real kernel.
When you code assembly to do ordinary software-y kinds of tasks (copying memory, concatenating strings, calling interrupt handlers etc), it's kind of interesting. Sometimes you code assembly to be called by C code to perform some specialized task as fast as possible on a given processor. That can be more interesting because you're looking for ways to take advantage of every cycle the processor gives you. You care about what's in the processor's L1 cache. You care about aligning data in memory to avoid cacheline hits (if I remember the term). You care about dual pipeline processor architecture and using the right 2 or 3 or 4 instructions in the right order so that 2 or 3 or 4 things are happening with a single clock tick (-one- of those HZ in the processor's XgHz).
When you code assembly to drive custom hardware, now you're doing stuff like filling a 16byte buffer of memory, setting up a DMA operation and firing that data over to a controller that is doing something like driving a laser printer drum. And the drum is turning and can't be stopped and wants its next 16 bytes within the next 5us. Of course that can be done in C or C++. But the examples are endless.
I would maybe trim off the last half of your question "Is it worth it to learn a dialect of assembly?" And make it "Is it worth it to learn?"
If you love programming, how you define "worth" involves some component of the love of programming. In that sense, I've never learned something in programming and not thought it was worth it. Even if I didn't use it much afterwards.
In that same sense, I would almost say that the harder something is to learn, the more "worth it" it is.
But all that fluffy crap aside, I believe it is worth it to at least get -some- assembly background. Go ahead and figure out how to write the assembly to replace a few simple stdlib routines like strcpy, memmove etc. Then try to optimize them, calling them from C a million times while timing it.
A: I am really surprised to see so many "no" answers to this question. I think you should learn assembly.
I do not expect that you would ever use assembly directly as part of your job. But it does not follow from that that you should not learn it.
Learning assembly will teach you about what is going on inside the computer. It will help you to understand what the software is actually doing.
It's really about professionalism. Are you going to be a professional software engineer? Or are you going to be a copy-and-paste hack? Sure, the latter may pay the bills, but being a professional is so much more satisfying.
To hear someone say, "Nah, don't bother learning assembly," sounds to my ears like "Here's the cookbook for building bridges. You don't need to learn about physics or engineering to build a bridge. Just follow these recipes." No, thank you.
A: I wouldn't start learning ASM. If you want to learn C/C++ then start with that. As the quality of your code matures, you may find you have a need for ASM. 99% of the time you won't, but every now and then you might need it.
Also, it does help to know ASM in terms of understanding what C/C++ is doing behind the scenes. But again, until you get more advanced, you probably won't have a need for it.
A: I did, and I think it helped me at the time. It doesn't help me day to day anymore, but I think it would depend on your job.
I learned assembler 20 years ago on a Commodore and again in University on an IBM mainframe. I can't say it helps me in my current job.
A: It's probably not going to have a whole lot of benefit unless you have a direct application for it. If you're going for general knowledge, C/C++ is a fine place to start.
That said, the challenges that assembly poses are very interesting and it requires a pretty different mindset to get things done.
I spent a little time learning Z80 assembly by programming the TI-86 calculator. The Z80 instruction set is pretty small and the novelty of programming a calculator in assembly is very amusing.
ticalc.org has a lot of good resources on TI assembly programming.
A: I agree with Mark. I think it's similar to learning MSIL when writing in C#, VB.NET, or another .NET language. It helps to know what's going on under the hood, but you could go your entire life creating applications that work and never need it.
A: No. Unless you want to for fun, you really don't need learn assembly.
There are some things you need to know assembly for, like driver creation, OS development, exploit development, but aside from that, I personally believe you can quite happily code forever without knowing it.
If you do need to learn assembly, you'll know it - I wouldn't learn it for the sake of learning it..
A: If you are writing unmanaged C++, it is occasionally invaluable to know at least basic x86 assembly, binary number systems, etc. I primarily do C/C++ development, and I occasionally need to debug production code for errors that are so specific to the machine-code representation produced by the compiler that the only way to find, and then fix the bug is to read the decompiled assembly and ascertain why the compiler generated it as such.
For more information on assembly, see the question: What is the best way to learn Assembly? Specifically, for someone who has experience in dynamic languages.
A: Assembly is not very difficult. Once you're familiar with C, spend a day or two learning basic assembly. Its helpfulness in terms of debugging is tremendous, plus it's fun being able to write code that beats the C equivalent speed-wise by a factor of 10, 15, or more.
A: I would not suggest to learn a "modern assembler language".
However, knowing a little about MOS 6510 Assembler and browsing through the disassembled C64 Kernel, a.k.a. its OS and BASIC Interpreter, helped me a lot to understand what's going on inside a computer - stuff like interrupts and memory pages.
This could possibly help you to give you hints how to write optimized code in other languages. A lot of that - however - is already done by modern compilers, so I'd only suggest that if you're interested in what's going on inside of that black box.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Which resolution to target for a Mobile App? When designing UI for mobile apps in general, which resolution could be considered safe as a general rule of thumb? My interest lies specifically in web based apps. The iPhone has a pretty high resolution for a hand held, and the Nokia E Series seem to be oriented differently. Is 240×320 still considered safe?
A: Not enough information...
You say you're targeting a "Mobile App" but the reality is that mobile could mean anything from a cell phone with 128x128 resolution to a MID with 800x600 resolution.
There is no "safe" resolution for such a wide range, and if you're truly targeting all of them you need to design a custom interface for each major resolution. Add some scaling factors in and you might be able to cut it down to 5-8 different interface designs.
Further, the UI means "User Interface" and includes a lot more than just the resolution - you can't count on a touchscreen, full keyboard, or even software keys.
You need to either better define your target, or explain your target here so we can better help you.
Keep in mind that there are millions of phone users that don't have PDA resolutions, and you can really only count on 128x128 or better to cover the majority of technically inclined cell phone users (those that know there's a web browser in their phone, nevermind those that use it).
But if you're prepared to accept these losses, go ahead and hit for 320x240 and 240x320. That will give you most current PDA phones and up (older blackberries and palm devices had smaller square orientations). Plan on spending time later supporting lower resolution devices and above all...
Do not tie your app to a particular resolution.
Make sure your app is flexible enough that you can deploy new UI's without changing internal application logic - in other words separate the presentation from the core logic. You will find this very useful later - the mobile world changes daily. Once you gauge how your app is being used you can, for instance, easily deploy an iPhone specific version that is pixel perfect (and prettier than an upscaled 320x240) in order to engage more users. Being able to do this in a few hours (because you don't have to change the internals) is going to put you miles ahead of the competition if someone else makes a swipe at your market.
-Adam
A:
Right now I believe it would make sense for me to target about 2 resolutions and later learn my customers' needs through feedback?
It's a chicken and egg problem.
Ideally before you develop the product you already know what your customers use/need.
Often not even the customers know what they need until they use something (and more often than not you find out what they don't need rather than what they need).
So in this case, yes, spend a little bit of time developing a prototype app that you can send out there to a few people and get feedback. They will have better feedback because they can try it out, and you will have a springboard to start from. The ability to quickly release UI updates without changing core logic will allow you test several interfaces quickly without a huge time investment.
Further, to customers you will seem really responsive to their needs, which will be a big benefit to people whose jobs depend on reaction time.
-Adam
A: You mentioned Web based apps. Any particular framework you have in mind?
In many cases, WALL seems to help to a large extent.
Here's one article exploiting WALL: Adapting to User Devices Using Mobile Web Technology.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I change the locations of source files in a symbols file (pdb) Basically what I want to do it this: a pdb file contains a location of source files (e.g. C:\dev\proj1\helloworld.cs). Is it possible to modify that pdb file so that it contains a different location (e.g. \more\differenter\location\proj1\helloworld.cs)?
A: If you're looking to be more generic about the paths embedded in a pdb file, you could first use the MS-DOS subst command to map a particular folder to a drive letter.
subst N: <MyRealPath>
Then open your project relative to the N: drive and rebuild it. Your PDB files will reference the source files on N:. Now it doesn't matter where you place that particular set of source files, so long as you subsequently call the root directory "N:" like you did when you built it.
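For example (drive letter and paths invented):

C:\> subst N: C:\dev\proj1
C:\> N:
N:\> devenv helloworld.sln /Rebuild Debug

On any other machine, subst N: to wherever that source tree happens to live, and the N:\ paths baked into the PDBs resolve again.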
This practice is recommended by John Robbins in his excellent book, Debugging Applications for Microsoft .NET and Microsoft Windows.
A: I wanted to find the answer to this in order to debug a crash dump that occurred in an executable that I did not build on my machine, therefore the path to the source code referenced in the PDB was invalid, as was the path to the PDB referenced in the executable.
After searching around and failing to find something that works, I discovered that if you place the executable and PDB alongside the crash dump file (i.e. in the same directory) then open and run the crash dump in VS, VS will find and use the PDB/EXE locally. Furthermore, it will also prompt for the location of the source code when clicking on an entry in the call stack: pointing it at whichever source code is relevant, it all works fine, which is great!
Anyway, hopefully this helps someone else...:)
A: You can use the source indexing feature of the Debugging Tools for Windows, which will save references to the appropriate revisions of the files in your source repository as an alternate stream in the PDB file.
A: It is certainly possible, as On Freund has already pointed out.
But if it is only so that the sources can be located and loaded during debugging, then a better way would be to set the source path correspondingly. Once set in a debugger, it will preempt all hard-coded paths inside PDBs.
In windbg (for instance):
.srcpath+ path_to_source_root
or this (in case you're debugging remotely):
.lsrcpath+ path_to_source_root
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: How do HttpOnly cookies work with AJAX requests? JavaScript needs access to cookies if AJAX is used on a site with access restrictions based on cookies. Will HttpOnly cookies work on an AJAX site?
Edit: Microsoft created a way to prevent XSS attacks by disallowing JavaScript access to cookies if HttpOnly is specified. FireFox later adopted this. So my question is: If you are using AJAX on a site, like StackOverflow, are Http-Only cookies an option?
Edit 2: Question 2. If the purpose of HttpOnly is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of HttpOnly?
Edit 3: Here is a quote from Wikipedia:
When the browser receives such a cookie, it is supposed to use it as usual in the following HTTP exchanges, but not to make it visible to client-side scripts.[32] The HttpOnly flag is not part of any standard, and is not implemented in all browsers. Note that there is currently no prevention of reading or writing the session cookie via a XMLHTTPRequest. [33].
I understand that document.cookie is blocked when you use HttpOnly. But it seems that you can still read cookie values in the XMLHttpRequest object, allowing for XSS. How does HttpOnly make you any safer, then? By making cookies essentially read-only?
In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object.
<script type="text/javascript">
var req = null;
try { req = new XMLHttpRequest(); } catch(e) {}
if (!req) try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) {}
if (!req) try { req = new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) {}
req.open('GET', 'http://stackoverflow.com/', false);
req.send(null);
alert(req.getAllResponseHeaders());
</script>
Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love to be re-educated...
Final Edit: Ahh, apparently both sites are wrong, this is actually a bug in FireFox. IE6 & 7 are actually the only browsers that currently fully support HttpOnly.
To reiterate everything I've learned:
*
*HttpOnly restricts all access to document.cookie in IE7 and FireFox (not sure about other browsers)
*HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7.
*XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies.
edit: This information is likely no longer up to date.
A: Yes, HTTP-Only cookies would be fine for this functionality. They will still be provided with the XmlHttpRequest's request to the server.
In the case of Stack Overflow, the cookies are automatically provided as part of the XmlHttpRequest request. I don't know the implementation details of the Stack Overflow authentication provider, but that cookie data is probably automatically used to verify your identity at a lower level than the "vote" controller method.
More generally, cookies are not required for AJAX. XmlHttpRequest support (or even iframe remoting, on older browsers) is all that is technically required.
However, if you want to provide security for AJAX enabled functionality, then the same rules apply as with traditional sites. You need some method for identifying the user behind each request, and cookies are almost always the means to that end.
In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object.
XmlHttpRequest won't make cross-domain requests (for exactly the sorts of reasons you're touching on).
You could normally inject script to send the cookie to your domain using iframe remoting or JSONP, but then HTTP-Only protects the cookie again since it's inaccessible.
Unless you had compromised StackOverflow.com on the server side, you wouldn't be able to steal my cookie.
Edit 2: Question 2. If the purpose of Http-Only is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of Http-Only?
Consider this scenario:
*
*I find an avenue to inject JavaScript code into the page.
*Jeff loads the page and my malicious JavaScript modifies his cookie to match mine.
*Jeff submits a stellar answer to your question.
*Because he submits it with my cookie data instead of his, the answer will become mine.
*You vote up "my" stellar answer.
*My real account gets the point.
With HTTP-Only cookies, the second step would be impossible, thereby defeating my XSS attempt.
Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love to be re-educated...
That's correct. You can still session hijack that way. It does significantly thin the herd of people who can successfully execute even that XSS hack against you though.
However, if you go back to my example scenario, you can see where HTTP-Only does successfully cut off the XSS attacks which rely on modifying the client's cookies (not uncommon).
It boils down to the fact that a) no single improvement will solve all vulnerabilities and b) no system will ever be completely secure. HTTP-Only is a useful tool in shoring up against XSS.
Similarly, even though the cross domain restriction on XmlHttpRequest isn't 100% successful in preventing all XSS exploits, you'd still never dream of removing the restriction.
A: Yes, they are a viable option for an Ajax based site. Authentication cookies aren't for manipulation by scripts, but are simply included by the browser on all HTTP requests made to the server.
Scripts don't need to worry about what the session cookie says - as long as you are authenticated, then any requests to the server initiated by either a user or the script will include the appropriate cookies. The fact that the scripts cannot themselves know the content of the cookies doesn't matter.
For any cookies that are used for purposes other than authentication, these can be set without the HTTP only flag, if you want script to be able to modify or read these. You can pick and choose which cookies should be HTTP only, so for example anything non-sensitive like UI preferences (sort order, collapse left hand pane or not) can be shared in cookies with the scripts.
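For illustration (cookie names and values invented), the server's response might set one of each kind:

Set-Cookie: SessionId=8d7fa2c41b; path=/; HttpOnly
Set-Cookie: SortOrder=votes; path=/

Script on the page can read and write SortOrder through document.cookie, while SessionId stays invisible to script but is still sent automatically with every request, XmlHttpRequest calls included.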
I really like the HTTP only cookies - it's one of those proprietary browser extensions that was a really neat idea.
A: Not necessarily; it depends what you want to do. Could you elaborate a bit? AJAX doesn't need access to cookies to work; it can make requests on its own to extract information. The page request that the AJAX call makes could access the cookie data and pass that back to the calling script without JavaScript having to directly access the cookies.
A: As clarification - from the server's perspective, the page that is requested by an AJAX request is essentially no different to a standard HTTP get request done by the user clicking on a link. All the normal request properties: user-agent, ip, session, cookies, etc. are passed to the server.
A: There's a bit more to this.
Ajax doesn't strictly require cookies, but they can be useful as other posters have mentioned. Marking a cookie HTTPOnly to hide it from scripts only partially works, because not all browsers support it, but also because there are common workarounds.
It's odd that the XMLHTTP response headers are giving the cookie; technically the server doesn't have to return the cookie with the response. Once it's set on the client, it stays set until it expires. Though there are schemes in which the cookie is changed with every request to prevent re-use. So you may be able to avoid that workaround by changing the server to not provide the cookie on the XMLHTTP responses.
In general though, I think HTTPOnly should be used with some caution. There are cross site scripting attacks where an attacker arranges for a user to submit an ajax-like request originating from another site, using simple post forms, without the use of XMLHTTP, and your browser's still-active cookie would authenticate the request.
If you want to be sure that an AJAX request is authenticated, the request itself AND the HTTP headers need to contain the cookie, e.g. through the use of scripts or unique hidden inputs. HTTPOnly would hinder that.
Usually the interesting reason to want HTTPOnly is to prevent third-party content included on your webpage from stealing cookies. But there are many interesting reasons to be very cautious about including third-party content, and filter it aggressively.
A: Cookies are automatically handled by the browser when you make an AJAX call, so there's no need for your Javascript to mess around with cookies.
A:
Therefore I am assuming JavaScript needs access to your cookies.
All HTTP requests from your browser transmit your cookie information for the site in question. JavaScript can both set and read cookies. Cookies are not by definition required for Ajax applications, but they are required for most web applications to maintain user state.
The formal answer to your question as phrased - "Does JavaScript need access to cookies if AJAX is used?" - is therefore "no". Think of enhanced search fields that use Ajax requests to provide auto-suggest options, for example. There is no need of cookie information in that case.
A: No, the page that the AJAX call requests has access to cookies too & that's what checks whether you're logged in.
You can do other authentication with the Javascript, but I wouldn't trust it, I always prefer putting any sort of authentication checking in the back-end.
A: Yes, cookies are very useful for Ajax.
Putting the authentication in the request URL is bad practice. There was a news item last week about getting authentication tokens in URLs from the Google cache.
No, there is no way to prevent attacks. Older browsers still allow trivial access to cookies via javascript. You can bypass http only, etc. Whatever you come up with can be gotten around given enough effort. The trick is to make it too much effort to be worthwhile.
If you want to make your site more secure (there is no perfect security) you could use an authentication cookie that expires. Then, if the cookie is stolen, the attacker must use it before it expires. If they don't then you have a good indication there's suspicious activity on that account. The shorter the time window the better for security but the more load it puts on your server generating and maintaining keys.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "208"
} |
Q: SQL Group By with an Order By I have a table of tags and want to get the highest count tags from the list.
Sample data looks like this
id (1) tag ('night')
id (2) tag ('awesome')
id (3) tag ('night')
using
SELECT COUNT(*), `Tag` from `images-tags`
GROUP BY `Tag`
gets me back the data I'm looking for perfectly. However, I would like to organize it, so that the highest tag counts are first, and limit it to only send me the first 20 or so.
I tried this...
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY COUNT(id) DESC
LIMIT 20
and I keep getting an "Invalid use of group function - ErrNr 1111"
What am I doing wrong?
I'm using MySQL 4.1.25-Debian
A: In Oracle, something like this works nicely to separate your counting and ordering a little better. I'm not sure if it will work in MySql 4.
select counts."Tag", counts.cnt
from
(
    select count(*) as cnt, "Tag"
    from "images-tags"
    group by "Tag"
) counts
order by counts.cnt desc
A: MySQL prior to version 5 did not allow aggregate functions in ORDER BY clauses.
You can get around this limit with the deprecated syntax:
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY 1 DESC
LIMIT 20
The 1 refers to the first column in the select list (the COUNT), which is the one you want to order by.
A: In all versions of MySQL, simply alias the aggregate in the SELECT list, and order by the alias:
SELECT COUNT(id) AS theCount, `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY theCount DESC
LIMIT 20
A: I don't know about MySQL, but in MS SQL, you can use the column index in the order by clause. I've done this before when doing counts with group bys as it tends to be easier to work with.
So
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY COUNT(id) DESC
LIMIT 20
Becomes
SELECT COUNT(id), `Tag` from `images-tags`
GROUP BY `Tag`
ORDER BY 1 DESC
LIMIT 20
A: Try this query
SELECT data_collector_id, COUNT(data_collector_id) AS frequency
FROM rent_flats
WHERE is_contact_person_landlord = 'True'
GROUP BY data_collector_id
ORDER BY COUNT(data_collector_id) DESC
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "159"
} |
Q: Exporting a C++ class from a DLL Most of my C/C++ development involves monolithic module files and absolutely no classes whatsoever, so usually when I need to make a DLL with accessible functions I just export them using the standard __declspec(dllexport) directive. Then access them either dynamically via LoadLibrary() or at compile time with a header and lib file.
How do you do this when you want to export an entire class (and all its public methods and properties)?
Is it possible to dynamically load that class at runtime and if so, how?
How would you do it with a header and lib for compile time linking?
A: Recently I asked myself exactly the same question, and summarized my findings in a blog post. You may find it useful.
It covers exporting C++ classes from a DLL, as well as loading them dynamically with LoadLibrary, and discusses some of the issues around that, such as memory management, name mangling and calling conventions.
A: When you build the DLL and the module that will use the DLL, have some kind of #define that you can use to distinguish between one and the other, then you can do something like this in your class header file:
#if defined( BUILD_DLL )
#define IMPORT_EXPORT __declspec(dllexport)
#else
#define IMPORT_EXPORT __declspec(dllimport)
#endif
class IMPORT_EXPORT MyClass {
...
};
Edit: crashmstr beat me to it!
A:
What about late-binding? As in loading
it with LoadLibrary() and
GetProcAddress() ? I'm used being able
to load the library at run time and it
would be great if you could do that
here.
So there are two ways to load the DLL. The first is to reference one or more symbols from the DLL (your classname, for example), supply an appropriate import .LIB and let the linker figure everything out.
The second is to explicitly load the DLL via LoadLibrary.
Either approach works fine for C-level function exports. You can either let the linker handle it or call GetProcAddress as you noted.
But when it comes to exported classes, typically only the first approach is used, i.e., implicitly link to the DLL. In this case the DLL is loaded at application start time, and the application fails to load if the DLL can't be found.
If you want to link to a class defined in a DLL, and you want that DLL to be loaded dynamically, sometime after program initiation, you have two options:
*
*Create objects of the class using a special factory function, which internally will have to use (a tiny bit of) assembler to "hook up" newly created objects to their appropriate offsets. This has to be done at run-time AFTER the DLL has been loaded, obviously. A good explanation of this approach can be found here.
*Use a delay-load DLL.
All things considered... probably better to just go with implicit linking, in which case you definitely want to use the preprocessor technique shown above. In fact, if you create a new DLL in Visual Studio and choose the "export symbols" option these macros will be created for you.
Good luck...
A: Adding a simple working example for exporting a C++ class from a DLL :
The example below gives you only a short overview of how a DLL and an EXE can interact with each other (self-explanatory), but more needs to be added to turn it into production code.
The full sample is divided into two parts:
A. Creating a .dll library (MyDLL.dll)
B. Creating an application which uses the .dll library (Application).
A. .dll project file (MyDLL.dll):
1. dllHeader.h
#ifdef MYDLL_EXPORTS
#define DLLCALL __declspec(dllexport) /* Should be enabled before compiling
.dll project for creating .dll*/
#else
#define DLLCALL __declspec(dllimport) /* Should be enabled in Application side
for using already created .dll*/
#endif
// Interface Class
class ImyMath {
public:
virtual ~ImyMath() {;}
virtual int Add(int a, int b) = 0;
virtual int Subtract(int a, int b) = 0;
};
// Concrete Class
class MyMath: public ImyMath {
public:
MyMath() {}
int Add(int a, int b);
int Subtract(int a, int b);
int a,b;
};
// Factory function that will return the new object instance. (Only function
// should be declared with DLLCALL)
extern "C" /*Important for avoiding Name decoration*/
{
DLLCALL ImyMath* _cdecl CreateMathObject();
};
// Function Pointer Declaration of CreateMathObject() [Entry Point Function]
typedef ImyMath* (*CREATE_MATH) ();
2. dllSrc.cpp
#include "dllHeader.h"
// Create Object
DLLCALL ImyMath* _cdecl CreateMathObject() {
return new MyMath();
}
int MyMath::Add(int a, int b) {
return a+b;
}
int MyMath::Subtract(int a, int b) {
return a-b;
}
B. Application project which loads and links the already created .dll file:
#include <iostream>
#include <windows.h>
#include "dllHeader.h"
int main()
{
HINSTANCE hDLL = LoadLibrary(L"MyDLL.dll"); // L".\Debug\MyDLL.dll"
if (hDLL == NULL) {
std::cout << "Failed to load library.\n";
}
else {
CREATE_MATH pEntryFunction = (CREATE_MATH)GetProcAddress(hDLL,"CreateMathObject");
ImyMath* pMath = pEntryFunction();
if (pMath) {
std::cout << "10+10=" << pMath->Add(10, 10) << std::endl;
std::cout << "50-10=" << pMath->Subtract(50, 10) << std::endl;
}
FreeLibrary(hDLL);
}
std::cin.get();
return 0;
}
A: I use some macros to mark the code for import or export
#ifdef ISDLL
#define DLL __declspec(dllexport)
#endif
#ifdef USEDLL
#define DLL __declspec(dllimport)
#endif
Then declare the class in a header file:
class DLL MyClassToExport { ... }
Then #define ISDLL in the library, and USEDLL before including the header file in the place you want to use the class.
I don't know if you might need to do anything differently for working with LoadLibrary
A: If you're willing to put a vtable in the class you're exporting, you can export a function that returns an interface and implement the class in the .dll, then put that in the .def file. You might have to do some declaration trickery, but it shouldn't be too hard.
Just like COM. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Regular cast vs. static_cast vs. dynamic_cast I've been writing C and C++ code for almost twenty years, but there's one aspect of these languages that I've never really understood. I've obviously used regular casts i.e.
MyClass *m = (MyClass *)ptr;
all over the place, but there seem to be two other types of casts, and I don't know the difference. What's the difference between the following lines of code?
MyClass *m = (MyClass *)ptr;
MyClass *m = static_cast<MyClass *>(ptr);
MyClass *m = dynamic_cast<MyClass *>(ptr);
A: You should look at the article C++ Programming/Type Casting.
It contains a good description of all of the different cast types. The following taken from the above link:
const_cast
const_cast<new_type>(expression) The const_cast<>() is used to add/remove
const(ness) (or volatile-ness) of a variable.
static_cast
static_cast<new_type>(expression) The static_cast<>() is used to cast between
the integer types. 'e.g.' char->long, int->short etc.
Static cast is also used to cast pointers to related types, for
example casting void* to the appropriate type.
dynamic_cast
Dynamic cast is used to convert pointers and references at run-time,
generally for the purpose of casting a pointer or reference up or down
an inheritance chain (inheritance hierarchy).
dynamic_cast<new_type>(expression)
The target type must be a pointer or reference type, and the
expression must evaluate to a pointer or reference. Dynamic cast works
only when the type of object to which the expression refers is
compatible with the target type and the base class has at least one
virtual member function. If not, and the type of expression being cast
is a pointer, NULL is returned, if a dynamic cast on a reference
fails, a bad_cast exception is thrown. When it doesn't fail, dynamic
cast returns a pointer or reference of the target type to the object
to which expression referred.
reinterpret_cast
Reinterpret cast simply casts one type bitwise to another. Any pointer
or integral type can be cast to any other with reinterpret cast,
easily allowing for misuse. For instance, with reinterpret cast one
might, unsafely, cast an integer pointer to a string pointer.
A: FYI, I believe Bjarne Stroustrup is quoted as saying that C-style casts are to be avoided and that you should use static_cast or dynamic_cast if at all possible.
Bjarne Stroustrup's C++ Style FAQ
Take that advice for what you will. I'm far from being a C++ guru.
A: Avoid using C-Style casts.
C-style casts are a mix of const_cast, static_cast, and reinterpret_cast, and they're difficult to find-and-replace in your code. A C++ application programmer should avoid C-style casts.
A: Static cast
The static cast performs conversions between compatible types. It is similar to the C-style cast, but is more restrictive. For example, the C-style cast would allow an integer pointer to point to a char.
char c = 10; // 1 byte
int *p = (int*)&c; // 4 bytes
Since this results in a 4-byte pointer pointing to 1 byte of allocated memory, writing to this pointer will either cause a run-time error or will overwrite some adjacent memory.
*p = 5; // run-time error: stack corruption
In contrast to the C-style cast, the static cast will allow the compiler to check that the pointer and pointee data types are compatible, which allows the programmer to catch this incorrect pointer assignment during compilation.
int *q = static_cast<int*>(&c); // compile-time error
Reinterpret cast
To force the pointer conversion, in the same way as the C-style cast does in the background, the reinterpret cast would be used instead.
int *r = reinterpret_cast<int*>(&c); // forced conversion
This cast handles conversions between certain unrelated types, such as from one pointer type to another incompatible pointer type. It will simply perform a binary copy of the data without altering the underlying bit pattern. Note that the result of such a low-level operation is system-specific and therefore not portable. It should be used with caution if it cannot be avoided altogether.
Dynamic cast
This one is only used to convert object pointers and object references into other pointer or reference types in the inheritance hierarchy. It is the only cast that makes sure that the object pointed to can be converted, by performing a run-time check that the pointer refers to a complete object of the destination type. For this run-time check to be possible the object must be polymorphic. That is, the class must define or inherit at least one virtual function. This is because the compiler will only generate the needed run-time type information for such objects.
Dynamic cast examples
In the example below, a MyChild pointer is converted into a MyBase pointer using a dynamic cast. This derived-to-base conversion succeeds, because the Child object includes a complete Base object.
class MyBase
{
public:
virtual void test() {}
};
class MyChild : public MyBase {};
int main()
{
MyChild *child = new MyChild();
MyBase *base = dynamic_cast<MyBase*>(child); // ok
}
The next example attempts to convert a MyBase pointer to a MyChild pointer. Since the Base object does not contain a complete Child object this pointer conversion will fail. To indicate this, the dynamic cast returns a null pointer. This gives a convenient way to check whether or not a conversion has succeeded during run-time.
MyBase *base = new MyBase();
MyChild *child = dynamic_cast<MyChild*>(base);
if (child == 0)
std::cout << "Null pointer returned";
If a reference is converted instead of a pointer, the dynamic cast will then fail by throwing a bad_cast exception. This needs to be handled using a try-catch statement.
#include <exception>
// …
try
{
MyChild &child = dynamic_cast<MyChild&>(*base);
}
catch(std::bad_cast &e)
{
std::cout << e.what(); // bad dynamic_cast
}
Dynamic or static cast
The advantage of using a dynamic cast is that it allows the programmer to check whether or not a conversion has succeeded during run-time. The disadvantage is that there is a performance overhead associated with doing this check. For this reason using a static cast would have been preferable in the first example, because a derived-to-base conversion will never fail.
MyBase *base = static_cast<MyBase*>(child); // ok
However, in the second example the conversion may either succeed or fail. It will fail if the pointer actually refers to a MyBase instance, and it will succeed if it refers to a MyChild instance. In some situations this may not be known until run-time. When this is the case, dynamic cast is a better choice than static cast.
// Succeeds for a MyChild object
MyChild *child = dynamic_cast<MyChild*>(base);
If the base-to-derived conversion had been performed using a static cast instead of a dynamic cast the conversion would not have failed. It would have returned a pointer that referred to an incomplete object. Dereferencing such a pointer can lead to run-time errors.
// Allowed, but invalid
MyChild *child = static_cast<MyChild*>(base);
// Incomplete MyChild object dereferenced
(*child);
Const cast
This one is primarily used to add or remove the const modifier of a variable.
const int myConst = 5;
int *nonConst = const_cast<int*>(&myConst); // removes const
Although const cast allows the value of a constant to be changed, doing so is still invalid code that may cause a run-time error. This could occur for example if the constant was located in a section of read-only memory.
*nonConst = 10; // potential run-time error
const cast is instead used mainly when there is a function that takes a non-constant pointer argument, even though it does not modify the pointee.
void print(int *p)
{
std::cout << *p;
}
The function can then be passed a constant variable by using a const cast.
print(&myConst); // error: cannot convert
// const int* to int*
print(nonConst); // allowed
Source and More Explanations
A: static_cast
static_cast is used for cases where you basically want to reverse an implicit conversion, with a few restrictions and additions. static_cast performs no runtime checks. This should be used if you know that you refer to an object of a specific type, and thus a check would be unnecessary. Example:
void func(void *data) {
// Conversion from MyClass* -> void* is implicit
MyClass *c = static_cast<MyClass*>(data);
...
}
int main() {
MyClass c;
start_thread(&func, &c) // func(&c) will be called
.join();
}
In this example, you know that you passed a MyClass object, and thus there isn't any need for a runtime check to ensure this.
dynamic_cast
dynamic_cast is useful when you don't know what the dynamic type of the object is. It returns a null pointer if the object referred to doesn't contain the type casted to as a base class (when you cast to a reference, a bad_cast exception is thrown in that case).
if (JumpStm *j = dynamic_cast<JumpStm*>(&stm)) {
...
} else if (ExprStm *e = dynamic_cast<ExprStm*>(&stm)) {
...
}
You can not use dynamic_cast for downcast (casting to a derived class) if the argument type is not polymorphic. For example, the following code is not valid, because Base doesn't contain any virtual function:
struct Base { };
struct Derived : Base { };
int main() {
Derived d; Base *b = &d;
dynamic_cast<Derived*>(b); // Invalid
}
An "up-cast" (cast to the base class) is always valid with both static_cast and dynamic_cast, and also without any cast, as an "up-cast" is an implicit conversion (assuming the base class is accessible, i.e. it's a public inheritance).
Regular Cast
These casts are also called C-style cast. A C-style cast is basically identical to trying out a range of sequences of C++ casts, and taking the first C++ cast that works, without ever considering dynamic_cast. Needless to say, this is much more powerful as it combines all of const_cast, static_cast and reinterpret_cast, but it's also unsafe, because it does not use dynamic_cast.
In addition, C-style casts not only allow you to do this, but they also allow you to safely cast to a private base-class, while the "equivalent" static_cast sequence would give you a compile-time error for that.
Some people prefer C-style casts because of their brevity. I use them for numeric casts only, and use the appropriate C++ casts when user defined types are involved, as they provide stricter checking.
A: C-style casts conflate const_cast, static_cast, and reinterpret_cast.
I wish C++ didn't have C-style casts. C++ casts stand out properly (as they should; casts are normally indicative of doing something bad) and properly distinguish between the different kinds of conversion that casts perform. They also permit similar-looking functions to be written, e.g. boost::lexical_cast, which is quite nice from a consistency perspective.
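For example, a minimal sketch of the idea (this is not Boost's actual implementation, just the stringstream technique it is built around):

#include <sstream>
#include <stdexcept>

// Converts between streamable types with cast-like syntax,
// e.g. int n = lexical_cast<int>(std::string("42"));
template <typename Target, typename Source>
Target lexical_cast(const Source &arg)
{
    std::stringstream interpreter;
    Target result;
    if (!(interpreter << arg) || !(interpreter >> result))
        throw std::runtime_error("lexical_cast failed");
    return result;
}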
A: dynamic_cast has runtime type checking and only works with references and pointers, whereas static_cast does not offer runtime type checking. For complete information, see the MSDN article static_cast Operator.
A: dynamic_cast only supports pointer and reference types. When the type is a pointer, it returns NULL if the cast is impossible; when the type is a reference, it throws an exception instead. Hence, dynamic_cast can be used to check if an object is of a given type, while static_cast cannot (you will simply end up with an invalid value).
C-style (and other) casts have been covered in the other answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1995"
} |
Q: Use table metadata for select statement in SQL Server? I have a large database and would like to select table names that have a certain column name. I have done something like this in MySQL, but can't find any info on SQL Server.
I want to do something like:
select [table]
from [db]
where table [has column 'classtypeid']
How can I do something like this?
A: Use the ANSI information_schema views, this will also work in MySQL
select table_name
from information_schema.columns
where column_name = 'classtypeid'
A: Here you go:
SELECT C.TABLE_NAME
FROM INFORMATION_SCHEMA.COLUMNS AS C
INNER JOIN INFORMATION_SCHEMA.TABLES AS T ON C.TABLE_NAME = T.TABLE_NAME
AND C.TABLE_SCHEMA = T.TABLE_SCHEMA
WHERE C.COLUMN_NAME = 'classtypeid'
AND T.TABLE_TYPE = 'BASE TABLE'
Edit: Note that this will not list views based on any tables with that column. If you only query INFORMATION_SCHEMA.COLUMNS you will also get back views.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: When do you use table clusters? How do you determine when to use table clusters? There are two types, index and hash, to use for different cases. In your experience, have the introduction and use of table clusters paid off?
If none of your tables are set up this way, modifying them to use table clusters would add to the complexity of the set up. But would the expected performance benefits outweigh the cost of increased complexity in future maintenance work?
Do you have any favorite online references or books that describe table clustering well and give good implementation examples?
//Oracle tips greatly appreciated.
A: The killer feature of table clusters is that you can store related rows of different tables at the same physical location.
That can improve join performance by an order of magnitude. However, it doesn't pay off as often as it sounds.
The only time I used it was a three-table join, executed by two hash joins. It took too long ;). However, the join was on the same column, so it was possible to use a hash table cluster keyed by the join column. That caused all related rows to be stored alongside (ideally, in the same database block). Knowing that, Oracle can execute the join with a special optimization ("cluster join").
It's more or less pre-joined, but still feeling like normal tables (for INSERT/SELECT/UPDATE/DELETE).
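A hedged sketch of what that looks like in Oracle DDL (table and column names are invented, and the SIZE/HASHKEYS values need real workload numbers, not these placeholders):

-- Hash cluster keyed on the shared join column.
CREATE CLUSTER order_cluster (order_id NUMBER)
    SIZE 512 HASHKEYS 100000;

-- Both tables store their rows inside the cluster, so rows
-- with the same order_id land in the same block.
CREATE TABLE orders (
    order_id   NUMBER PRIMARY KEY,
    order_date DATE
) CLUSTER order_cluster (order_id);

CREATE TABLE order_items (
    order_id NUMBER REFERENCES orders,
    item_no  NUMBER,
    qty      NUMBER
) CLUSTER order_cluster (order_id);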
On the other hand, there are "single table clusters" that are mostly used to control the "clustering factor" -- a similar idea to clustered indexes (called an Index-Organized Table in Oracle), but without adding high cost when using a secondary index.
A: One can speak a lot about clustering, but I found that nearly the ultimate explanation of Oracle clusters (pros and cons, when to use and how to use) can be found in Tom Kyte's book - Effective Oracle by Design; also you can search asktom for some specific cluster usage examples (1, 2 etc). You should definitely take a look at this book if you haven't yet.
Some info you can also find here.
But the thing you should always do before creating complex schema structures is to try, to test, to benchmark and choose the one solution that best fits your needs :)
Hope this helps.
A: I haven't used Oracle's table clusters myself, but I understand that its index table clusters are very much like MS SQL Server's clustered indexes. That is, the row data is physically organized by the clustered index's key.
That makes an index cluster ideal for a heavily-accessed column that has a reasonably small number of possible values (compared to the total number of rows), where most queries want to retrieve all rows with a particular value. Because all such rows are physically stored together, disk I/O, particularly seek time, is reduced.
"Reasonably small" is not easily defined, but postal or zip codes in an address table seems reasonable if you're often querying for all addresses in a single code's region. Province/state/territory codes are likely too small a selection for a country-wide address table.
So, you don't want to use them on columns with few possible values (e.g., M/F for gender) because then the clustering doesn't buy you anything and likely costs you for insertions. You also never want to use clustering on "autonumber" surrogate key columns (from sequences in Oracle) because that will create a "hot spot" in the last extent of the table as all insertions must physically happen there. You also don't want to apply clustering to a column value that will be updated because the RDBMS will have to physically move the record to maintain the clustered ordering.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |