Q: How to do cross-domain calls from Silverlight? What's needed to successfully make a cross-domain call from Silverlight?
A: If I understand your question correctly, you need a clientaccesspolicy.xml file in the web root of the domain you wish to call (i.e. www.example.com/clientaccesspolicy.xml) that declares it is OK for clients from other domains to call services on that domain.
Read the How to Make a Service Available Across Domain Boundaries MSDN article for more detailed information.
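For reference, a minimal clientaccesspolicy.xml that opens everything up looks roughly like this (tighten the domain URI and resource path for production):
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow callers from any domain; replace * with specific origins in production -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <!-- Grant access to the whole site, including subpaths -->
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>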
A: See Jon Galloway's blog post on this as well
http://weblogs.asp.net/jgalloway/archive/2008/12/12/silverlight-crossdomain-access-workarounds.aspx
A: Intellisense helper file and walk-through: http://silverlight.net/learn/learnvideo.aspx?video=47174
A: Maybe also check out JSONP (http://www.west-wind.com/weblog/posts/107136.aspx). For example, this is how you can get Twitter updates in JavaScript on the client side even though Twitter is on a different domain than your web page.
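As a rough sketch of the JSONP mechanism (the callback name and URL below are made up for illustration):
// Hypothetical callback that the remote service wraps its JSON response in
function handleTweets(tweets) {
    alert(tweets.length + " updates received");
}
// Injecting a script tag sidesteps the same-origin policy for GET requests
var script = document.createElement("script");
script.src = "http://example.com/statuses.json?callback=handleTweets";
document.getElementsByTagName("head")[0].appendChild(script);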
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Java return copy to hide future changes In Java, say you have a class that wraps an ArrayList (or any collection) of objects.
How would you return one of those objects such that the caller will not see any future changes to the object made in the ArrayList?
i.e. you want to return a deep copy of the object, but you don't know if it is cloneable.
A: Turn that into a spec: objects need to implement an interface in order to be allowed into the collection.
Something like ArrayList<ICloneable>()
Then you can be assured that you always do a deep copy - the interface should have a method that is guaranteed to return a deep copy.
I think that's the best you can do.
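A minimal Java sketch of that spec (the interface and wrapper names here are illustrative, not from the question):
import java.util.ArrayList;
import java.util.List;

// Contract: implementations must return a fully independent copy of themselves.
interface DeepCopyable<T> {
    T deepCopy();
}

class CopyingList<T extends DeepCopyable<T>> {
    private final List<T> items = new ArrayList<T>();

    public void add(T item) {
        items.add(item);
    }

    // Callers receive a copy, so later changes to the stored object stay hidden.
    public T get(int index) {
        return items.get(index).deepCopy();
    }
}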
A: One option is to use serialization. Here's a blog post explaining it:
http://weblogs.java.net/blog/emcmanus/archive/2007/04/cloning_java_ob.html
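The usual serialization trick looks something like this sketch (it requires the stored objects, and everything they reference, to implement Serializable):
import java.io.*;

class DeepCopier {
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepCopy(T original) throws IOException, ClassNotFoundException {
        // Write the object graph to an in-memory buffer...
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buffer);
        out.writeObject(original);
        out.close();
        // ...then read it back as a completely independent copy.
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray()));
        return (T) in.readObject();
    }
}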
A: I suppose it is an obvious answer:
Make it a requirement that the classes stored in the collection be cloneable. You could check that at insertion time or at retrieval time, whichever makes more sense, and throw an exception.
Or, if the item is not cloneable, just fall back to returning it by reference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Giant NodeManagerLogs from hibernate in weblogic One of our weblogic 8.1s has suddenly started logging giant amounts of logs and filling the disk.
The logs giving us hassle resides in
mydrive:\bea\weblogic81\common\nodemanager\NodeManagerLogs\generatedManagedServer1\managedserveroutput.log
and the entries in the logfile are just the same kind of entries repeated again and again. Stuff like
19:21:24,470 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' returned by: LLL-SCHEDULER_QuartzSchedulerThread
19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager
19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is being obtained: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager
19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' given to: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager
19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager
...
19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share
19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy
19:17:46,798 DEBUG [Cascade] done processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation
19:17:46,798 DEBUG [Cascade] processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation
19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share
19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy
I can't find any debug settings set anywhere.
I've looked in the Remote Start classpath and Arguments for the managed server.
Can anyone point me in the direction to gain control over this logfile?
A: Since those log entries aren't problems, it sounds like the global log level has been turned up to DEBUG. Alternatively, perhaps a new Logging mechanism has been implemented or a new log Appender that writes to stdout, and thus is being re-logged by Weblogic. I would look at the configuration of your logger. (Or provide it with one, if it is using a default config)
For example, when using Hibernate with an active Log4J setup, Hibernate will automatically join in with the Log4J instance that you set up in your own application.
It can be tuned, as per the normal Log4J config. This example uses the properties configuration style:
log4j.category.org.hibernate=WARN
Hibernate may join in with other logging mechanisms via the Apache Commons Logging API. Look at how to configure your own logger and tune out the org.hibernate.* categories.
n.b. When debugging, switching back on
log4j.category.org.hibernate.SQL=INFO or DEBUG
can be useful.
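For reference, a minimal log4j.properties along those lines might look like this sketch (the appender name and pattern are illustrative):
# Root logger at INFO, writing to the console
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%c{1}] %m%n

# Quiet Hibernate down to warnings only
log4j.category.org.hibernate=WARN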
A: Is it a large system with many programmers? If so it might be worth checking that nowhere in the code is the logger having its config changed programmatically.
In log4j, this can be done using the LogManager or BasicConfigurator classes. Also via the PropertyConfigurator and DOMConfigurator. Just one rogue line of code could set up a new Logger to stdout using the PatternLayout shown in your example.
BasicConfigurator.configure();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Visual Source Safe --> TFS Migration Around here we have been working with a bunch of Visual Source Safe repositories for about 10 years or so.
Now I want to get rid of sourcesafe and move on to Team Foundation Server.
Do you have any tips or tricks for me before I embark on this migration? What are the things I have to be careful about?
I am sure this migration will mean that our working habits have to be modified in some way. Do you think that these changes could be a problem for the organization? Think about a group of about 20 .NET developers in a single site.
A: Be aware that TFS does not support sharing files between different projects, as VSS does. If you have any such shared files the link between them will be broken during the migration, resulting in initially identical, but now distinct files in each project. Updates to one of these files in TFS will no longer propagate to the copies in the other projects.
A: If you do choose to use the VSSConverter.exe tool that ships with Visual Studio Team Foundation Server, then you should install TFS 2008 SP1 first as it includes a number of improvements as detailed on this blog by the migration tools team.
Some of the key features of the release include:
Elimination of namespace conflicts. I previously blogged about this as "the rename problem" and we have fixed the converter to correctly migrate files with overlapping namespaces. This was the biggest pain point for most users trying to use previous versions of the tool.
Automatic solution rebinding. In this latest version, VS solution files will be automatically upgraded to the 9.0 version and checked back in to version control. Previously users were required to do this manually.
Correcting of timestamp inconsistencies. The use of client timestamps by VSS can lead to revisions being recorded in the opposite order that they actually occurred in. The tool now recognizes this issue and continues migrating changes where it would previously fail.
Improved logging. Although we've fixed a lot of issues, providing better, more detailed logging will help users that do run into issues diagnose the problems.
A: I just googled, but this walkthrough seems like a good reference, and it mentions the tool VSSConverter which should help you make the migration as painless as possible.
I would like to recommend one thing though: Backup. Backup everything before you do this. Should anything go wrong it's better to be safe than sorry.
My links aren't showing up. This is the address: http://msdn.microsoft.com/en-us/library/ms181247(VS.80).aspx
A: We are currently in the process of doing this at my day job. We are actually making the switch over in about a month. I am a main part of the migration and a big part of why we are getting off of SourceSafe. To help in the migration, I used the Visual Studio® Team System 2008 Team Foundation Server and Team Suite VPC Image. It was very useful. Right off the bat, the image contains a full working TFS installation for you to play and demo with. It also includes Hands on Labs and one of the labs is running the VSS -> TFS migration tool. If you have an MSDN subscription, once you have played with the image, the next step would be to install the TFS Small Team edition that comes with your subscription.
One thing to note is to make sure you get the latest Service Packs for Visual Studio 2008 and the .NET Framework installed on the image. The service packs fixed some annoying bugs and definitely increased the usability of the system. We have a fairly large SourceSafe database with about 90+ projects and the migration tool took about 32 hours to complete. First I made a backup of our SourceSafe database for testing. Then I ran the migration on the test SourceSafe database. Afterwards, I checked the source tree in TFS and everything transferred fine. We kept all the history for our source files from VSS, which was great. No need to keep that stinking VSS database around after we go live.
We are taking the migration in steps. First the source control, letting our developers get used to using it. Then after that we will migrate the QA and Business Analysts over to use the Work Item tracking features.
My advice is to take the migration in steps. Don't do too much at one time. Give time for people who will be using the system to train up.
A: VSS Converter is a far from perfect solution. And there are significant differences between the 2005 and the 2008SP1 version of the converter.
For example, in a VSS DB that's been in use for a long time, there will have been a large number of users contributing to VSS. Many of these users will have left the organisation a long time ago and therefore will no longer have domain accounts. TFS requires mapping VSS users to domain accounts, so you will have to decide whether you map old users to a single 'dummy' domain account or to a current team member.
In addition, VSS Converter 2008 requires these domain accounts to be valid TFS accounts. Whereas the 2005 converter does not enforce this.
If your VSS history contains significant folder Moves, then it's likely you will lose all history before this Move. For example, if you Move a folder to a new location, then Delete the previous parent, you will lose all history. See this article for more explanation:
http://msdn.microsoft.com/en-us/library/ms253166.aspx
In one migration I was involved with, we had a 10 year old VSS database that lost all history prior to 6 months ago. This was due to a significant tidy up that took place 6 months ago.
A: TFS conversion tool <-- Use this
I've used this tool a few times already and the results are pretty satisfactory, as it brings over the history of changesets from SourceSafe if you want it to.
Anyway, using this tool you should always pay attention to errors and warnings in the log, and check if everything built okay / passed okay.
It's recommended to also run an Analysis on the SourceSafe database before running this.
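If I remember correctly, the converter is driven from the command line in two passes, roughly like this (the settings file name is whatever you created):
VSSConverter Analyze settings.xml
VSSConverter Migrate settings.xml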
Hope it helps
A: There are a few different ways you can migrate. The tool will pull your history, etc. over, but the more pragmatic and simple way is to lock VSS as a history archive and start fresh:
*
*Have everyone check in all changes into VSS, make sure everything builds, etc.
*Set all VSS databases to "locked" (read-only rights for all users)
*Get Latest on the entire VSS database into a "clean" set of folders on a workstation
*Check all of the files into TFS from the workstation
For any history prior to the conversion, folks need to go to VSS, but after a week or two it's realistically unlikely to happen all that often. And you know that the history in VSS is accurate and not corrupted by the conversion process.
A: Good guidance there from my former colleague Guy Starbuck. Another thing to add with that approach - you may have decided over time that you want to refactor the way your application is organized (folders etc.) and this will give you an opportunity to do so.
I've been in situations where we organized a solution haphazardly without thought (let alone major changes in the application) which led to a desire to organize things differently - and the move from VSS to TFS is a great opportunity to do so.
As far as the original question:
And: I am sure this migration will mean that our working habits have to be modified in some way. Do you think that these changes could be a problem for the organization? Think about a group of about 20 .NET developers in a single site.
I would say - yes your working habits will change but much more for the better.
*
*You should no longer use "Check-out" Locks and "Get-Latest on Check-out".
*You can now effectively Branch and Merge
*You will now have "Changesets" all files checked-in at the same time will be grouped together. This makes historical change tracking much easier - but more importantly - rollbacks are much easier (ie find all files checked in at the same time and roll them back)
*Associating Check-ins to Work Items. Don't overlook Work Items! The biggest mistake you can make is to only use TFS as a VSS replacement. The Build and Project Management features are excellent - you paid for them - USE THEM!
As far as details on how your experience will change, another former colleague of mine (and Team System MVP) Steve St. Jean wrote a detailed article on the differences: From VSS to TFS
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Thread not waking up from Thread.Sleep() We have a Windows Service written in C#. The service spawns a thread that does this:
private void ThreadWorkerFunction()
{
while(false == _stop) // stop flag set by other thread
{
try
{
openConnection();
doStuff();
closeConnection();
}
catch (Exception ex)
{
log.Error("Something went wrong.", ex);
Thread.Sleep(TimeSpan.FromMinutes(10));
}
}
}
We put the Thread.Sleep in after a couple of times when the database had gone away and we came back to 3Gb logs files full of database connection errors.
This has been running fine for months, but recently we've seen a few instances where the log.Error() statement logs a "System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable" exception and then never ever comes back. The service can be left running for days but nothing more will be logged.
Having done some reading I know that Thread.Sleep is not ideal, but why would it simply never come back?
A: Dig in and find out? Stick a debugger on that bastard!
I can see at least the following possibilities:
*
*the logging system hangs;
*the thread exited just fine but the service is still running because some other part has a logic error.
And maybe, but almost certainly not, the following:
*
*Sleep() hangs.
But in any case, attaching a debugger will show you whether the thread is still there and whether it really has hung.
A:
We put the Thread.Sleep in after a couple of times when the database had gone away and we came back to 3Gb logs files full of database connection errors.
I would think a better option would be to make it so that your logging system trapped duplicates, so that it could write something like, "The previous message was repeated N times".
Assume I've written a standard note about how you should open your connection at the last possible moment and close it at the earliest opportunity, rather than spanning a potentially huge function in the way you've done it (but perhaps that is an artefact of your demonstrative code and your application is actually written properly).
When you say that it's reporting the error you describe, do you mean that this handler is reporting the error? The reason it's not clear to me is that in the code snippet you say "Something went wrong", but you didn't say that in your description; I wouldn't want this to be something so silly as the exception is being caught somewhere else, and the code is getting stuck somewhere other than the sleep.
A: I've had exactly the same problem. Moving the Sleep line outside of the exception handler fixed the problem for me, like this:
bool hadError = false;
try {
...
} catch (Exception) {
hadError = true;
}
if (hadError)
Thread.Sleep(...);
Interrupting threads does not seem to work in the context of an exception handler.
A: Have you tried using Monitor.Pulse (ensure your thread is using thread management before running this) to get the thread to do something? If that works, then you're going to have to look a bit more into your threading logic.
A: From the code you've posted, it's not clear that after an exception is thrown the system is definitely able to restart - e.g. if the exception comes from doStuff(), then the control flow will pass back (after the 10 minute wait) to openConnection(), without ever passing through closeConnection().
But as others have said, just attach a debugger and find where it actually is.
A: Try Thread.Sleep(10 * 60 * 1000)
A: I never fully figured out what was going on, but it seemed to be related to ThreadInterruptedExceptions being thrown during the 10 minute sleep, so I changed to code to:
private void ThreadWorkerFunction()
{
DateTime? timeout = null;
while (!_stop)
{
try
{
if (timeout == null || timeout < DateTime.Now)
{
openDatabaseConnections();
doStuff();
closeDatabaseConnections();
}
else
{
Thread.Sleep(1000);
}
}
catch (ThreadInterruptedException tiex)
{
log.Error("The worker thread was interrupted... ignoring.", tiex);
}
catch (Exception ex)
{
log.Error("Something went wrong.", ex);
timeout = DateTime.Now + TimeSpan.FromMinutes(10);
}
}
}
Aside from specifically catching the ThreadInterruptedException, this just feels safer as all the sleeping happens within a try block, so anything unexpected that happens will be logged. I'll update this answer if I ever find out more.
A: Stumbled on this while looking for a Thread.Sleep problem of my own. This may or may not be related, but if your doStuff() throws an exception, closeDatabaseConnections() won't happen, which has some potential for resource leaks. I'd put that in a finally block. Just something to think about.
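For what it's worth, a minimal sketch of that finally-based cleanup, using the method names from the updated code above:
try
{
    openDatabaseConnections();
    doStuff();
}
finally
{
    // Runs even if doStuff() throws, so connections are always released
    closeDatabaseConnections();
}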
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Dynamic reference to resource files in C# I have an application on which I am implementing localization.
I now need to dynamically reference a name in the resource file.
Assume I have a resource file called Login.resx, and a number of strings: foo="hello", bar="cruel" and baz="world".
normally, I will refer as:
String result = Login.foo;
and result=="hello";
my problem is, that at code time, I do not know if I want to refer to foo, bar or baz - I have a string that contains either "foo", "bar" or "baz".
I need something like:
Login["foo"];
Does anyone know if there is any way to dynamically reference a string in a resource file?
A: You'll need to instance a ResourceManager for the Login.resx:
var resman = new System.Resources.ResourceManager(
"RootNamespace.Login",
System.Reflection.Assembly.GetExecutingAssembly()
);
var text = resman.GetString("resname");
It might help to look at the generated code in the code-behind files of the resource files that are created by the IDE. These files basically contain readonly properties for each resource that makes a query to an internal resource manager.
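For illustration, the designer-generated property for a resource named foo looks roughly like this (the exact names depend on your project):
internal static string foo {
    get {
        // The generated class caches a ResourceManager and the current culture
        return ResourceManager.GetString("foo", resourceCulture);
    }
}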
A: If you put your Resource file in the App_GlobalResources folder like I did, you need to use
global::System.Resources.ResourceManager temp =
    new global::System.Resources.ResourceManager(
        "RootNamespace.Login",
        global::System.Reflection.Assembly.Load("App_GlobalResources"));
It took me a while to figure this out. Hope this will help someone. :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Get last item in a table - SQL I have a History Table in SQL Server that basically tracks an item through a process. The item has some fixed fields that don't change throughout the process, but has a few other fields including status and Id which increment as the steps of the process increase.
Basically I want to retrieve the last step for each item given a Batch Reference. So if I do a
Select * from HistoryTable where BatchRef = @BatchRef
It will return all the steps for all the items in the batch - eg
Id Status BatchRef ItemCount
1 1 Batch001 100
1 2 Batch001 110
2 1 Batch001 60
2 2 Batch001 100
But what I really want is:
Id Status BatchRef ItemCount
1 2 Batch001 110
2 2 Batch001 100
Edit: Apologies - I can't seem to get the TABLE tags to work with Markdown - followed the help to the letter, and it looks fine in the preview.
A: It's kind of hard to make sense of your table design - I think SO ate your delimiters.
The basic way of handling this is to GROUP BY your fixed fields, and select a MAX (or MIN) for some unique value (a datetime usually works well). In your case, I think that the GROUP BY would be BatchRef and ItemCount, and Id will be your unique column.
Then, join back to the table to get all columns. Something like:
SELECT *
FROM HistoryTable
JOIN (
SELECT
MAX(Id) as Id,
BatchRef,
ItemCount
FROM HistoryTable
WHERE
BatchRef = @BatchRef
GROUP BY
BatchRef,
ItemCount
) as Latest ON
HistoryTable.Id = Latest.Id
A: Assuming you have an identity column in the table...
select
top 1 <fields>
from
HistoryTable
where
BatchRef = @BatchRef
order by
<IdentityColumn> DESC
A: Assuming the Item Ids are incrementally numbered:
--Declare a temp table to hold the last step for each item id
DECLARE @LastStepForEach TABLE (
Id int,
Status int,
BatchRef char(10),
ItemCount int)
--Loop counter
DECLARE @count INT;
SET @count = 0;
--Loop through all of the items
WHILE (@count < (SELECT MAX(Id) FROM HistoryTable WHERE BatchRef = @BatchRef))
BEGIN
SET @count = @count + 1;
INSERT INTO @LastStepForEach (Id, Status, BatchRef, ItemCount)
SELECT Id, Status, BatchRef, ItemCount
FROM HistoryTable
WHERE BatchRef = @BatchRef
AND Id = @count
AND Status =
(
SELECT MAX(Status)
FROM HistoryTable
WHERE BatchRef = @BatchRef
AND Id = @count
)
END
SELECT *
FROM @LastStepForEach
A: SELECT id, status, BatchRef, MAX(itemcount) AS maxItemcount
FROM HistoryTable GROUP BY id, status, BatchRef
HAVING status > 1
A: It's a bit hard to decipher your data the way WMD has formatted it, but you can pull off the sort of trick you need with common table expressions on SQL 2005:
with LastBatches as (
select BatchRef, max(Id) as Id
from HistoryTable
group by BatchRef
)
select *
from HistoryTable h
join LastBatches b on b.BatchRef = h.BatchRef and b.Id = h.Id
Or a subquery (assuming the group by in the subquery works - off the top of my head I don't recall):
select *
from HistoryTable h
join (
select BatchRef, max(Id) as Id
from HistoryTable
group by BatchRef
) b on b.BatchRef = h.BatchRef and b.Id = h.Id
Edit: I was assuming you wanted the last item for every batch. If you just need it for the one batch then the other answers (doing a top 1 and ordering descending) are the way to go.
A: As already suggested you probably want to reorder your query to sort it in the other direction so you actually fetch the first row. Then you'd probably want to use something like
SELECT TOP 1 ...
if you're using MSSQL 2k or earlier, or the SQL compliant variant
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
columns
FROM tablename
) AS foo
WHERE rownumber = n
for any other version (or for other database systems that support the standard notation), or
SELECT ... LIMIT 1 OFFSET 0
for some other variants without the standard SQL support.
See also this question for some additional discussion around selecting rows. Using the aggregate function max() might or might not be faster depending on whether calculating the value requires a table scan.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there a wxWidgets framework for C? My understanding is that wxWidgets is for a number of programming languages (C++, Python, Perl, and C#/.NET) but that does not include C. Is there a similar framework for the C programming language, or is this not something that C is used for?
A: If you don't mind working with older libraries there are quite a few. For example, there's a no-frills GUI kit for ANSI C called IUP. Also, check out this list -- search on that page for 'C API'. I think the most modern and well-known is the above-mentioned GTK+.
A: You can try GTK+. I believe the wxWidgets implementation for Linux is written on top of GTK+.
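To give a feel for what C GUI code looks like, here is a minimal GTK+ (2.x-style) window - a sketch, not from the question:
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "Hello");

    /* Quit the main loop when the window is closed */
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}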
A: Obviously not, because wxWidgets is built around C++ classes, but if you insist you can try GTK+, which is written in C.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Install Python to match directory layout in OS X 10.5 The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work).
I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially with regards to site-packages being in /Library/Python/2.5/ and not the one buried at the top of the framework once I compile it.
A: Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?
A: Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.
You can also add a second python installation, but that also causes more problems than it's worth IMO.
So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?
A: Hyposaurus,
It is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above.
The easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local
Another method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time
./configure --prefix=/usr/local/python64
make
sudo make install
Then you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive
A: Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files.
A: The short answer is because I can. The long answer, expanding on what the OP said, is to be more compatible with Apache and MySQL/PostgreSQL. They are all 64 bit (Apache is a fat binary with ppc, ppc64, x86 and x86_64; the others are just straight 64 bit). MySQLdb and mod_python won't compile unless they are all running the same architecture. Yes, I could run them all in 32 bit (and have in the past) but this is much more work than compiling one program.
EDIT: You pretty much convinced me, though, to just let the installer do its thing and update the PATH to reflect this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Regex to match all HTML tags except <p> and </p> I need to match and remove all tags using a regular expression in Perl. I have the following:
<\\??(?!p).+?>
But this still matches with the closing </p> tag. Any hint on how to match with the closing tag as well?
Note, this is being performed on xhtml.
A: Not sure why you are wanting to do this - regex for HTML sanitisation isn't always the best method (you need to remember to sanitise attributes and such, remove javascript: hrefs and the likes)... but, a regex to match HTML tags that aren't <p></p>:
(<[^pP].*?>|</[^pP]>)
Verbose:
(
< # < opening tag
[^pP].*? # p non-p character, then non-greedy anything
> # > closing tag
| # ....or....
</ # </
[^pP] # a non-p tag
> # >
)
A: I used Xetius's regex and it works fine, except for some Flex-generated tags written with no spaces inside. I tried to fix it with a simple ? after \s and it looks like it's working:
<(?!\/?p(?=>|\s?.*>))\/?.*?>
I'm using it to clear tags from Flex-generated HTML text, so I also added more excepted tags:
<(?!\/?(p|a|b|i|u|br)(?=>|\s?.*>))\/?.*?>
A: If you insist on using a regex, something like this will work in most cases:
# Remove all HTML except "p" tags
$html =~ s{<(?>/?)(?:[^pP]|[pP][^\s>/])[^>]*>}{}g;
Explanation:
s{
< # opening angled bracket
(?>/?) # ratchet past optional /
(?:
[^pP] # non-p tag
| # ...or...
[pP][^\s>/] # longer tag that begins with p (e.g., <pre>)
)
[^>]* # everything until closing angled bracket
> # closing angled bracket
}{}gx; # replace with nothing, globally
But really, save yourself some headaches and use a parser instead. CPAN has several modules that are suitable. Here's an example using the HTML::TokeParser module that comes with the extremely capable HTML::Parser CPAN distribution:
use strict;
use HTML::TokeParser;
my $parser = HTML::TokeParser->new('/some/file.html')
or die "Could not open /some/file.html - $!";
while(my $t = $parser->get_token)
{
# Skip start or end tags that are not "p" tags
next if(($t->[0] eq 'S' || $t->[0] eq 'E') && lc $t->[1] ne 'p');
# Print everything else normally (see HTML::TokeParser docs for explanation)
if($t->[0] eq 'T')
{
print $t->[1];
}
else
{
print $t->[-1];
}
}
HTML::Parser accepts input in the form of a file name, an open file handle, or a string. Wrapping the above code in a library and making the destination configurable (i.e., not just printing as in the above) is not hard. The result will be much more reliable, maintainable, and possibly also faster (HTML::Parser uses a C-based backend) than trying to use regular expressions.
A: Xetius, resurrecting this ancient question because it had a simple solution that wasn't mentioned. (Found your question while doing some research for a regex bounty quest.)
With all the disclaimers about using regex to parse html, here is a simple way to do it.
#!/usr/bin/perl
$regex = '(<\/?p[^>]*>)|<[^>]*>';
$subject = 'Bad html <a> </I> <p>My paragraph</p> <i>Italics</i> <p class="blue">second</p>';
($replaced = $subject) =~ s/$regex/$1/eg;
print $replaced . "\n";
See this live demo
Reference
How to match pattern except in situations s1, s2, s3
How to match a pattern unless...
A: Since HTML is not a regular language I would not expect a regular expression to do a very good job at matching it. They might be up to this task (though I'm not convinced), but I would consider looking elsewhere; I'm sure perl must have some off-the-shelf libraries for manipulating HTML.
Anyway, I would think that what you want to match is </?(p.+|.*)(\s*.*)> non-greedily (I don't know the vagaries of perl's regexp syntax so I cannot help further). I am assuming that \s means whitespace. Perhaps it doesn't. Either way, you want something that'll match attributes offset from the tag name by whitespace. But it's more difficult than that as people often put unescaped angle brackets inside scripts and comments and perhaps even quoted attribute values, which you don't want to match against.
So as I say, I don't really think regexps are the right tool for the job.
A:
Since HTML is not a regular language
HTML isn't, but HTML tags are, and they can be adequately described by regular expressions.
A: In my opinion, trying to parse HTML with anything other than an HTML parser is just asking for a world of pain. HTML is a really complex language (which is one of the major reasons that XHTML was created, which is much simpler than HTML).
For example, this:
<HTML /
<HEAD /
<TITLE / > /
<P / >
is a complete, 100% well-formed, 100% valid HTML document. (Well, it's missing the DOCTYPE declaration, but other than that ...)
It is semantically equivalent to
<html>
<head>
<title>
>
</title>
</head>
<body>
<p>
>
</p>
</body>
</html>
But it's nevertheless valid HTML that you're going to have to deal with. You could, of course, devise a regex to parse it, but, as others already suggested, using an actual HTML parser is just sooo much easier.
A: I came up with this:
<(?!\/?p(?=>|\s.*>))\/?.*?>
x/
< # Match open angle bracket
(?! # Negative lookahead (Not matching and not consuming)
\/? # 0 or 1 /
p # p
(?= # Positive lookahead (Matching and not consuming)
> # > - No attributes
| # or
\s # whitespace
.* # anything up to
> # close angle brackets - with attributes
) # close positive lookahead
) # close negative lookahead
# if we have got this far then we don't match
# a p tag or closing p tag
# with or without attributes
\/? # optional close tag symbol (/)
.*? # and anything up to
> # first closing tag
/
This will now deal with p tags with or without attributes and the closing p tags, but will match pre and similar tags, with or without attributes.
It doesn't strip out attributes, but my source data does not put them in. I may change this later to do this, but this will suffice for now.
A: Assuming that this will work in PERL as it does in languages that claim to use PERL-compatible syntax:
/<\/?[^p][^>]*>/
EDIT:
But that won't match a <pre> or <param> tag, unfortunately.
This, perhaps?
/<\/?(?!p>|p )[^>]+>/
That should cover <p> tags that have attributes, too.
A: You also might want to allow for whitespace before the "p" in the p tag. Not sure how often you'll run into this, but < p> is perfectly valid HTML.
A: The original regex can be made to work with very little effort:
<(?>/?)(?!p).+?>
The problem was that the /? (or \?) gave up what it matched when the assertion after it failed. Using a non-backtracking group (?>...) around it takes care that it never releases the matched slash, so the (?!p) assertion is always anchored to the start of the tag text.
(That said I agree that generally parsing HTML with regexes is not the way to go).
A: Try this, it should work:
/<\/?([^p](\s.+?)?|..+?)>/
Explanation: it matches either a single letter except “p”, followed by an optional whitespace and more characters, or multiple letters (at least two).
/EDIT: I've added the ability to handle attributes in p tags.
A: This works for me because all the solutions above failed for other HTML tags starting with p, such as param, pre, progress, etc. It also takes care of the HTML attributes.
~(<\/?[^>]*(?<!<\/p|p)>)~ig
A: You should probably also remove any attributes on the <p> tag, since someone bad could do something like:
<p onclick="document.location.href='http://www.evil.com'">Clickable text</p>
The easiest way to do this, is to use the regex people suggest here to search for <p> tags with attributes, and replace them with <p> tags without attributes. Just to be on the safe side.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
} |
Q: How to bring in a web app I run a game and the running is done by hand, I have a few scripts that help me but essentially it's me doing the work. I am at the moment working on web app that will allow the users to input directly some of their game actions and thus save me a lot of work.
The problem is that I'm one man working on a moderately sized (upwards of 20 tables) project, the workload isn't the issue, it's that bugs will have slipped in even though I test as I write. So my question is thus two-fold.
*
*Beta testing: I love open betas, but would a closed beta be somehow more effective and give better results?
*How should I bring in the app? Should I one turn drop it in and declare it's being used or should I use it alongside the normal construct of the game?
A: This is my general approach to testing/launching.
How you test/launch depends mostly on:
*
*What your application is.
*Who your users are.
If your application is a technical application and is geared to the technically-minded, the word "beta" won't really scare them - it provides an opportunity to test the product before it goes 'live', and helps to improve the system. This is the ideal circumstance in which to use either an open or closed beta. It's usually beneficial to start off 'closed' with a group of people you select and trust to bug-find quickly and reliably - after you're more confident that all the critical bugs are gone, open it up with an invite system (for example).
If, however, your application is 'trivial' from a technical standpoint (i.e. it's something like Twitter, or Facebook, or Flickr - nothing that is inherently geared towards technical usage), then you're going to have to be more careful in how you plan your testing. Closed testing is most definitely your first port of call, and this should last for longer than a closed beta on a more 'technical' product. The reason? Your 'average Joe' doesn't necessarily know what the word "beta" means, and others may well be scared by it, or judge your service prematurely (not understanding the concept of this 'public testing' phase). Many won't want to be used as guinea pigs.
A: I don't understand what you mean by "bring in the app" and "one turn drop it". By "bring in the app" do you mean deploy? As for "One turn drop", I totally don't understand it.
As for open betas, that depends on your audience, really. Counterstrike, for example, apparently run a few closed betas before doing open betas, so here's my suggestion:
*
*Set up a forum in some free forumboard, or set up a topic in a popular gaming forum.
*Look for people (whether or not they are in those forums) that you trust, and let them in in a closed beta. This will allow you to iron out serious kinks at first.
*If your closed group isn't reporting as many bugs any more, release it to open beta, pointing out ways they can give feedback to you.
This is similar to the approach StackOverflow took, but this being a game setting it up on a gaming forum will give the dual benefit of advertising your game and getting some interested beta testers.
A: I'll try to answer with the limited amount of details you've given.
1: Whether it's open or closed is really only an issue if you have great buzz and a large group of users hammering down your door, trying to get in on the action.
If this is the case, I think you might get more loyalty and commitment from users in a closed beta.
2: You haven't given many (any) details as to what kind of game you are talking about, so it's pretty hard to answer this one.
/Jonas
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What would be a good, windows and iis (http) based distributed version control system At my job we make & sell websites. Usually we install our .NET C# based site on a customer's server and maintain and support it remotely. However, every once in a while, for bigger development works and just to make things simpler (and faster!), we will copy the site to a local server.
This is great, but has one pain - moving the site back to the customer. Now, if nothing was changed on the customer's copy - no problem. However, it is the sad truth that sometimes (read: more often than I would like) some fixes need to be applied on the production server, either because the customer needed it NOW or simply because it was a major bug.
I know that you can easily apply those bug fixes to the local copy as well, but this is an error-prone process. So I'm setting my hopes on a distributed version control system to help synchronize the two copies.
Here is what I need:
*
*Easy to install - nothing else needed except the installer and admin rights.
*Can be integrated into an existing website as a virtual directory and works on port 80 - no hassle with new DNS required.
*Excellent software
That's it. Any ideas?
Some comments on the answers
First, thanks! much appreciated.
I've looked at Mercurial and Bazaar and both look very good. The only caveat is the installation as a virtual directory on IIS. Mercurial, as far as I understand, uses a special wire protocol, and Bazaar needs the addition of Python extensions. Is there another system which is easier to integrate with IIS? I'm willing to take a performance hit for that.
A: I'd look at either Mercurial or Bazaar. I'm told Git also works on windows, but I suspect the windows port is still a second class port at best.
You'll probably need to be able to run python scripts on your webserver to host either of them.
A: Maybe not exactly what you requested, but check out DeltaCopy, which is a Windows version of rsync. You can also read about another rsync solution here.
A: I can also vouch for Mercurial. Simple to use and powerful to boot!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Printing DOM Changes What I am trying to do is change the background colour of a table cell <td>, and then when a user goes to print the page, the changes are not showing.
I am currently using an unobtrusive script to run the following command on a range of cells:
element.style.backgroundColor = "#f00"
This works on screen in IE and FF, however, when you go to Print Preview, the background colours are lost.
Am I doing something wrong?
A: Is it not recommended to do this with stylesheets? You can set the media type in the LINK element in your HTML, so when the page is printed it will use a different style.
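Something along these lines, with a separate stylesheet per media type (file names illustrative):
<link rel="stylesheet" href="screen.css" media="screen" />
<link rel="stylesheet" href="print.css" media="print" />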
A: Have you tried hard-coding the values just to see if background colours show up in print preview at all? I think it is a setting in the browser.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to keep the browser history in sync when using Ajax? I'm writing a simple photo album app using ASP.NET Ajax.
The app uses async Ajax calls to pre-load the next photo in the album, without changing the URL in the browser.
The problem is that when the user clicks the back button in the browser, the app doesn't go back to the previous photo, instead, it navigates to the home page of the application.
Is there a way to trick the browser into adding each Ajax call to the browsing history?
A: MSDN has an article about Managing Browser History in ASP.NET AJAX
A: Many websites make use of a hidden iframe to do this, simply refresh the iframe with the new URL, which adds it to the browsing history. Then all you have to do is handle how your application reacts to those 'back button' events - you'll either need to detect the state/location of the iframe, or refresh the page using that URL.
A: You can use simple & lightweight PathJS lib.
Usage example:
Path.map("#/page1").to(function(){
...
});
Path.map("#/page2").to(function(){
...
});
Path.root("#/mainpage");
Path.listen();
A: Update: There is now the HTML5 History API (pushState, popState) which deprecates the HTML4 hashchange functionality. History.js provides cross-browser compatibility and an optional hashchange fallback for HTML4 browsers.
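A minimal sketch of the History API approach for the photo album (the function and state names are made up):
function showPhoto(id) {
    loadPhotoViaAjax(id); // your existing async pre-load/display call
    history.pushState({ photo: id }, "", "#photo-" + id);
}

// Back/forward fire popstate; re-render the photo recorded in the state
window.addEventListener("popstate", function (e) {
    if (e.state && e.state.photo) {
        loadPhotoViaAjax(e.state.photo);
    }
});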
The answer for this question will be more or less the same as my answers for these questions:
*
*How to show Ajax requests in URL?
*How does Gmail handle back/forward in rich JavaScript?
In summary, you'll definitely want to check out these two projects which explain the whole hashchange process and adding ajax to the mix:
*
*jQuery History (using hashes to manage your pages state and bind to changes to update your page).
*jQuery Ajaxy (ajax extension for jQuery History, to allow for complete ajax websites while being completely unobtrusive and gracefully degradable).
A: The 3.5 SP1 update has support for browser history and back button in ASP.NET ajax now.
A: For all solutions about the back button, none of them are "automatic". With every single one you are going to have to do some work to persist the state of the page. So no, there isn't a way to "trick" the browser, but there are some great libraries out there that help you with the back button.
A: Info: Ajax Navigation is a regular feature of the upcoming IE8.
A: If you are using Rails, then definitely try Wiselinks https://github.com/igor-alexandrov/wiselinks. It is a Swiss Army knife for browser state management. Here are some details: http://igor-alexandrov.github.io/blog/2013/07/11/the-way-to-wiselinks-1-dot-0/.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to get your own (local) IP address from a UDP socket (C/C++)
*
*You have multiple network adapters.
*Bind a UDP socket to an local port, without specifying an address.
*Receive packets on one of the adapters.
How do you get the local ip address of the adapter which received the packet?
The question is, "What is the IP address of the receiving adapter?" - not the address of the sender, which we get in the
receive_from( ..., &senderAddr, ... );
call.
A: You could enumerate all the network adapters, get their IP addresses and compare the part covered by the subnet mask with the sender's address.
Like:
IPAddress FindLocalIPAddressOfIncomingPacket( senderAddr )
{
foreach( adapter in EnumAllNetworkAdapters() )
{
adapterSubnet = adapter.subnetmask & adapter.ipaddress;
senderSubnet = adapter.subnetmask & senderAddr;
if( adapterSubnet == senderSubnet )
{
return adapter.ipaddress;
}
}
}
A: The solution provided by timbo assumes that the address ranges are unique and not overlapping. While this is usually the case, it isn't a generic solution.
There is an excellent implementation of a function that does exactly what you're after provided in the Steven's book "Unix network programming" (section 20.2)
This is a function based on recvmsg(), rather than recvfrom(). If your socket has the IP_RECVIF option enabled then recvmsg() will return the index of the interface on which the packet was received. This can then be used to look up the destination address.
The source code is available here. The function in question is 'recvfrom_flags()'
A: G'day,
I assume that you've done your bind using INADDR_ANY to specify the address.
If this is the case, then the semantics of INADDR_ANY is such that a UDP socket is created on the port specified on all of your interfaces. The socket is going to get all packets sent to all interfaces on the port specified.
When sending using this socket, the lowest numbered interface is used. The outgoing sender's address field is set to the IP address of that first outgoing interface used.
First outgoing interface is defined as the sequence when you do an ifconfig -a. It will probably be eth0.
HTH.
cheers,
Rob
A: Unfortunately the sendto and recvfrom API calls are fundamentally broken when used with sockets bound to "Any IP" because they have no field for local IP information.
So what can you do about it?
*
*You can guess (for example based on the routing table).
*You can get a list of local addresses and bind a separate socket to each local address.
*You can use newer APIs that support this information. There are two parts to this: firstly, you have to use the relevant socket option (ip_recvif for IPv4, ipv6_recvif for IPv6) to tell the stack you want this information. Then you have to use a different function (recvmsg on Linux and several other Unix-like systems, WSARecvMsg on Windows) to receive the packet.
None of these options are great. Guessing will obviously produce wrong answers sometimes. Binding separate sockets increases the complexity of your software and causes problems if the list of local addresses changes while your program is running. The newer APIs are the correct technical solution but may reduce portability (in particular it looks like WSARecvMsg is not available on Windows XP) and may require modifications to the socket wrapper library you are using.
Edit: looks like I was wrong; it seems the MS documentation is misleading and WSARecvMsg is available on Windows XP. See https://stackoverflow.com/a/37334943/5083516
A: In a Linux environment, you can use recvmsg to get the local IP address.
//create socket and bind to local address:INADDR_ANY:
int s = socket(PF_INET,SOCK_DGRAM,0);
bind(s,(struct sockaddr *)&myAddr,sizeof(myAddr)) ;
// set option
int onFlag=1;
int ret = setsockopt(s,IPPROTO_IP,IP_PKTINFO,&onFlag,sizeof(onFlag));
// prepare buffers
// receive data buffer
char dataBuf[1024] ;
struct iovec iov = {
.iov_base=dataBuf,
.iov_len=sizeof(dataBuf)
} ;
// control buffer
char cBuf[1024] ;
// message
struct msghdr msg = {
.msg_name=NULL, // to receive peer addr with struct sockaddr_in
.msg_namelen=0, // sizeof(struct sockaddr_in)
.msg_iov=&iov,
.msg_iovlen=1,
.msg_control=cBuf,
.msg_controllen=sizeof(cBuf)
} ;
while(1) {
// reset buffers
msg.msg_iov[0].iov_base = dataBuf ;
msg.msg_iov[0].iov_len = sizeof(dataBuf) ;
msg.msg_control = cBuf ;
msg.msg_controllen = sizeof(cBuf) ;
// receive
recvmsg(s,&msg,0);
for( struct cmsghdr* pcmsg=CMSG_FIRSTHDR(&msg);
pcmsg!=NULL; pcmsg=CMSG_NXTHDR(&msg,pcmsg) ) {
if(pcmsg->cmsg_level==IPPROTO_IP && pcmsg->cmsg_type==IP_PKTINFO) {
struct in_pktinfo * pktinfo=(struct in_pktinfo *)CMSG_DATA(pcmsg);
printf("ifindex=%d ip=%s\n", pktinfo->ipi_ifindex, inet_ntoa(pktinfo->ipi_addr)) ;
}
}
}
The following does not work in an asymmetric routing environment.
you can first set SO_REUSEADDR to true
BOOL bOptVal = 1;
setsockopt(so, SOL_SOCKET, SO_REUSEADDR, (char *)&bOptVal, sizeof(bOptVal));
after receive_from( ..., &remoteAddr, ... ); create another socket, and connect back to remoteAddr. Then call getsockname can get the ip address.
SOCKET skNew = socket( )
// Same local address and port as that of your first socket
// INADDR_ANY
bind(skNew, , )
// set SO_REUSEADDR to true again
setsockopt(skNew, SOL_SOCKET, SO_REUSEADDR, (char *)&bOptVal, sizeof(bOptVal));
// connect back
connect(skNew, remoteAddr)
// get local address of the socket
getsockname(skNew, )
A: Try this:
gethostbyname("localhost");
A:
ssize_t
recvfrom(int socket, void *restrict buffer, size_t length, int flags,
struct sockaddr *restrict address, socklen_t *restrict address_len);
ssize_t
recvmsg(int socket, struct msghdr *message, int flags);
[..]
If address is not a null pointer and the socket is not connection-oriented, the
source address of the message is filled in.
Actual code:
int nbytes = recvfrom(sock, buf, MAXBUFSIZE, MSG_WAITALL, (struct sockaddr *)&bindaddr, &addrlen);
fprintf(stdout, "Read %d bytes on local address %s\n", nbytes, inet_ntoa(bindaddr.sin_addr));
hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: vmware-cmd causes "perl.exe - Ordinal Not Found" error My automated script for starting and stopping VMWare Server virtual machines has stopped working. vmware-cmd has started raising the error:
The ordinal 3288 could not be located in the dynamic link library LIBEAY32.dll.
I am not aware of any specific change or update when this started happening.
I have found a bunch of other people reporting this problem (or very similar) but no solution.
Do you know what caused this error, and/or how to fix this?
A: I would have said that something must have updated either the LIBEAY32.dll or another dll that depends on it. You may find some helpful information using the depends tool. If you use this to open up the perl.exe then it should highlight the dependency path that produces the problem. You can compare this with other machines on which perl runs.
The ordinal is effectively a function that is expected by perl or a dll, but is not present in the verision of LIBEAY32.dll that you have. The depends tool makes this quite clear.
A: I have discovered that this only occurs when the script is run on a different drive to the one where the EXE is located. As a workaround I have simply moved the script's execution to the same drive.
Apparently the DLL relates to SSL, which isn't relevant to what I'm doing, so this is a suitable workaround. I'm guessing that the problem is caused by changes in the EXE for how it determines relative paths (unlikely, as nothing (AFAICT) has changed). Or the %PATH% environmental variable has changed (more likely).
Hope this helps someone in the future.
A: Please check your PATH settings and see if you have included "C:\Program Files\VMware\VMware Workstation" for VMware management purposes. Once you remove it, you won't see the error any more.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to submit a form when the return key is pressed? Can someone please tell me how to submit an HTML form when the return key is pressed and if there are no buttons in the form?
The submit button is not there. I am using a custom div instead of that.
A: To submit the form when the enter key is pressed create a javascript function along these lines.
function checkSubmit(e) {
if(e && e.keyCode == 13) {
document.forms[0].submit();
}
}
Then add the event to whatever scope you need eg on the div tag:
<div onKeyPress="return checkSubmit(event)"/>
This is also the default behaviour of Internet Explorer 7 anyway though (probably earlier versions as well).
A: Here is how I do it with jQuery (j here is an alias for jQuery):
j(".textBoxClass").keypress(function(e)
{
// if the key pressed is the enter key
if (e.which == 13)
{
// do work
}
});
Other javascript wouldn't be too different. The catch is checking for the keypress argument of "13", which is the enter key.
A: I believe this is what you want.
//<![CDATA[
//Send form if they hit enter.
document.onkeypress = enter;
function enter(e) {
if (e.which == 13) { sendform(); }
}
//Form to send
function sendform() {
document.forms[0].submit();
}
//]]>
Every time a key is pressed, function enter() will be called. If the key pressed matches the enter key (13), then sendform() will be called and the first encountered form will be sent. This is only for Firefox and other standards compliant browsers.
If you find this code useful, please be sure to vote me up!
A: IMO, this is the cleanest answer:
<form action="" method="get">
Name: <input type="text" name="name"/><br/>
Pwd: <input type="password" name="password"/><br/>
<div class="yourCustomDiv"/>
<input type="submit" style="display:none"/>
</form>
Better yet, if you are using javascript to submit the form using the custom div, you should also use javascript to create it, and to set the display:none style on the button. This way users with javascript disabled will still see the submit button and can click on it.
It has been noted that display:none will cause IE to ignore the input. I created a new JSFiddle example that starts as a standard form, and uses progressive enhancement to hide the submit and create the new div. I did use the CSS styling from StriplingWarrior.
A: Use the following script.
<SCRIPT TYPE="text/javascript">
<!--
function submitenter(myfield,e)
{
var keycode;
if (window.event) keycode = window.event.keyCode;
else if (e) keycode = e.which;
else return true;
if (keycode == 13)
{
myfield.form.submit();
return false;
}
else
return true;
}
//-->
</SCRIPT>
For each field that should submit the form when the user hits enter, call the submitenter function as follows.
<FORM ACTION="../cgi-bin/formaction.pl">
name: <INPUT NAME=realname SIZE=15><BR>
password: <INPUT NAME=password TYPE=PASSWORD SIZE=10
onKeyPress="return submitenter(this,event)"><BR>
<INPUT TYPE=SUBMIT VALUE="Submit">
</FORM>
A: I tried various javascript/jQuery-based strategies, but I kept having issues. The latest issue to arise involved accidental submission when the user uses the enter key to select from the browser's built-in auto-complete list. I finally switched to this strategy, which seems to work on all the browsers my company supports:
<div class="hidden-submit"><input type="submit" tabindex="-1"/></div>
.hidden-submit {
border: 0 none;
height: 0;
width: 0;
padding: 0;
margin: 0;
overflow: hidden;
}
This is similar to the currently-accepted answer by Chris Marasti-Georg, but by avoiding display: none, it appears to work correctly on all browsers.
Update
I edited the code above to include a negative tabindex so it doesn't capture the tab key. While this technically won't validate in HTML 4, the HTML5 spec includes language to make it work the way most browsers were already implementing it anyway.
A: I use this method:
<form name='test' method=post action='sendme.php'>
<input type=text name='test1'>
<input type=button value='send' onClick='document.test.submit()'>
<input type=image src='spacer.gif'> <!-- <<<< this is the secret! -->
</form>
Basically, I just add an invisible input of type image (where "spacer.gif" is a 1x1 transparent gif).
In this way, I can submit this form either with the 'send' button or simply by pressing enter on the keyboard.
This is the trick!
A: Use the <button> tag. From the W3C standard:
Buttons created with the BUTTON element function just like buttons created with the INPUT element, but they offer richer rendering possibilities: the BUTTON element may have content. For example, a BUTTON element that contains an image functions like and may resemble an INPUT element whose type is set to "image", but the BUTTON element type allows content.
Basically there is another tag, <button>, which requires no javascript, that also can submit a form. It can be styled much in the way of a <div> tag (including <img /> inside the button tag). The buttons from the <input /> tag are not nearly as flexible.
<button type="submit">
<img src="my-icon.png" />
Clicking will submit the form
</button>
There are three types to set on the <button>; they map to the <input> button types.
<button type="submit">Will submit the form</button>
<button type="reset">Will reset the form</button>
<button type="button">Will do nothing; add javascript onclick hooks</button>
Standards
*
*W3C wiki about <button>
*HTML5 <button>
*HTML4 <button>
I use <button> tags with css-sprites and a bit of css styling to get colorful and functional form buttons. Note that it's possible to write css for, for example, <a class="button"> links to share styling with the <button> element.
A: Why don't you just apply the div's styles to a submit button? I'm sure there's a javascript solution for this, but styling a real submit button would be easier.
A: If you are using asp.net you can use the defaultButton attribute on the form.
A: I think you should actually have a submit button or a submit image... Do you have a specific reason for using a "submit div"? If you just want custom styles I recommend <input type="image".... http://webdesign.about.com/cs/forms/a/aaformsubmit_2.htm
A: Extending on the answers, this is what worked for me, maybe someone will find it useful.
Html
<form method="post" action="/url" id="editMeta">
<textarea class="form-control" onkeypress="submitOnEnter(event)"></textarea>
</form>
Js
function submitOnEnter(e) {
if (e.which == 13) {
document.getElementById("editMeta").submit()
}
}
A: Using the "autofocus" attribute works to give input focus to the button by default. In fact clicking on any control within the form also gives focus to the form, a requirement for the form to react to the RETURN. So, the "autofocus" does that for you in case the user never clicked on any other control within the form.
So, the "autofocus" makes the crucial difference if the user never clicked on any of the form controls before hitting RETURN.
But even then, there are still 2 conditions to be met for this to work without JS:
a) you have to specify a page to go to (if left empty it won't work). In my example it is hello.php
b) the control has to be visible. You could conceivably move it off the page to hide, but you cannot use display:none or visibility:hidden.
What I did was use an inline style to move it off the page, 200px to the left, and give it a height of 0px so that it does not take up space; otherwise it can still disrupt other controls above and below. Or you could float the element.
<form action="hello.php" method="get">
Name: <input type="text" name="name"/><br/>
Pwd: <input type="password" name="password"/><br/>
<div class="yourCustomDiv"/>
<input autofocus type="submit" style="position:relative; left:-200px; height:0px;" />
</form>
A: Similar to Chris Marasti-Georg's example, instead using inline javascript.
Essentially add onkeypress to the fields you want the enter key to work with. This example acts on the password field.
<html>
<head><title>title</title></head>
<body>
<form action="" method="get">
Name: <input type="text" name="name"/><br/>
Pwd: <input type="password" name="password" onkeypress="if(event.keyCode==13) {javascript:form.submit();}" /><br/>
<input type="submit" onClick="javascript:form.submit();"/>
</form>
</body>
</html>
A: Since display: none buttons and inputs won't work in Safari and IE, I found that the easiest way, requiring no extra javascript hacks, is to simply add an absolutely positioned <button /> to the form and place it far off screen.
<form action="" method="get">
<input type="text" name="name" />
<input type="password" name="password" />
<div class="yourCustomDiv"/>
<button style="position:absolute;left:-10000px;right:9990px"/>
</form>
This works in the current version of all major browsers as of September 2016.
Obviously it's recommended (and more semantically correct) to just style the <button/> as desired.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "87"
} |
Q: What is the best way to setup an integration testing server? Setting up an integration server, I'm in doubt about the best approach to using multiple tasks to complete the build. Is it best to set everything up as one big job, or to make small dependent ones?
A: You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
A: I use TeamCity with an nant build script. TeamCity makes it easy to setup the CI server part, and nant build script makes it easy to do a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
A: The approach I favour is the following setup (Actually assuming you are in a .NET project):
*
*CruiseControl.NET.
*NANT tasks for each individual step. Nant.Contrib for alternative CC templates.
*NUnit to run unit tests.
*NCover to perform code coverage.
*FXCop for static analysis reports.
*Subversion for source control.
*CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these can grow to the point where a dev has to wait a long time after a checkin to see whether they have broken the build.
What I do in these cases is create three builds (or maybe two):
*
*A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
*A more comprehensive build which could be hourly (if changes) which does the same as the CI but runs more comprehensive and time consuming tests.
*An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps, letting you do historical analysis and so continuously tweak the builds to keep them snappy. It's something managers find hard to accept: a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
A: We use buildbot, with the build broken down into discrete steps. There is a balance to be found between having build steps be broken down with enough granularity and being a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
*
*Build Plugin Pieces
*
*Compile for Mac
*Compile for PC
*Compile for Linux
*Make final Plugins
*Run Plugin tests
*Build intermediate IDE (We have to bootstrap building)
*Build final IDE
*Run IDE tests
A: I would definitely break down the jobs. Chances are you're likely to make changes in the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyways.
A: G'day,
As you're talking about integration testing, my big (obvious) tip would be to build and configure the test server as close to the deployment environment as possible.
</thebloodyobvious> (-:
cheers,
Rob
A: Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts and then we manipulate the results (ie, archive the build output, flag failing tests etc).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Best practice for dynamically added Web.UI.ITemplate classes We have a couple of ASP.Net dataview column templates that are dynamically added to the dataview depending on columns selected by users.
These templated cells need to handle custom databindings:
public class CustomColumnTemplate:
ITemplate
{
public void InstantiateIn( Control container )
{
//create a new label
Label contentLabel = new Label();
//add a custom data binding
contentLabel.DataBinding +=
( sender, e ) =>
{
//do custom stuff at databind time
contentLabel.Text = //bound content
};
//add the label to the cell
container.Controls.Add( contentLabel );
}
}
...
myGridView.Columns.Add( new TemplateField
{
ItemTemplate = new CustomColumnTemplate(),
HeaderText = "Custom column"
} );
Firstly this seems rather messy, but there is also a resource issue. The Label is generated, and can't be disposed in the InstantiateIn because then it wouldn't be there to databind.
Is there a better pattern for these controls?
Is there a way to make sure that the label is disposed after the databind and render?
A: I have worked extensively with templated controls and I have not found a better solution.
Why are you referencing the contentLabel in the event handler?
The sender is the label; you can cast it to get a reference to the label, like below.
//add a custom data binding
contentLabel.DataBinding +=
(object sender, EventArgs e ) =>
{
//do custom stuff at databind time
((Label)sender).Text = //bound content
};
Then you should be able to dispose of the label reference in InstantiateIn.
Please note I have not tested this.
A: One solution is to make your template itself implement IDisposable, and then dispose the controls in your template's Dispose method. Of course this means you need some sort of collection to keep track of the controls you've created. Here is one way to go about it:
public class CustomColumnTemplate :
ITemplate, IDisposable
{
private readonly ICollection<Control> labels = new List<Control>();
public void Dispose()
{
foreach (Control label in this.labels)
label.Dispose();
}
public void InstantiateIn(Control container)
{
//create a new label
Label contentLabel = new Label();
this.labels.Add(contentLabel);
...
//add the label to the cell
container.Controls.Add( contentLabel );
}
}
Now you are still faced with the problem of disposing the template. But at least your template will be a responsible memory consumer because when you call Dispose on the template all of its labels will be disposed with it.
UPDATE
This link on MSDN suggests that perhaps it is not necessary for your template to implement IDisposable because the controls will be rooted in the page's control tree and automatically disposed by the framework!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How should I test a method that populates a list from a DataReader? So I'm working on some legacy code that's heavy on the manual database operations. I'm trying to maintain some semblance of quality here, so I'm going TDD as much as possible.
The code I'm working on needs to populate, let's say a List<Foo> from a DataReader that returns all the fields required for a functioning Foo. However, if I want to verify that the code in fact returns one list item per one database row, I'm writing test code that looks something like this:
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 1);
// ....
Expect.Call(reader.Read()).Return(true);
Expect.Call(reader["foo_id"]).Return((long) 2);
// ....
Expect.Call(reader.Read()).Return(false);
Which is rather tedious and rather easily broken, too.
How should I be approaching this issue so that the result won't be a huge mess of brittle tests?
Btw I'm currently using Rhino.Mocks for this, but I can change it if the result is convincing enough. Just as long as the alternative isn't TypeMock, because their EULA was a bit too scary for my tastes last I checked.
Edit: I'm also currently limited to C# 2.
A: To make this less tedious, you will need to encapsulate/refactor the mapping between the DataReader and the object you hold in the list. There are quite a few steps to encapsulating that logic. If that is the road you want to take, I can post code for you. I am just not sure how practical it would be to post the code here on StackOverflow, but I can give it a shot to keep it concise and to the point. Otherwise, you are stuck with the tedious task of repeating each expectation on the index accessor for the reader. The encapsulation process will also get rid of the strings and make them more reusable throughout your tests.
Also, I am not sure at this point how much you want to make the existing code more testable, since this is legacy code that wasn't built with testing in mind.
A: I thought about posting some code and then I remembered about JP Boodhoo's Nothin But .NET course. He has a sample project that he is sharing that was created during one of his classes. The project is hosted on Google Code and it is a nice resource. I am sure it has some nice tips for you to use and give you ideas on how to refactor the mapping. The whole project was built with TDD.
A: You can put the Foo instances in a list and compare the objects with what you read:
var arrFoos = new Foo[]{...}; // what you expect
var expectedFoos = new List<Foo>(arrFoos); // make a list from the hardcoded array of expected Foos
var readerResult = ReadEntireList(reader); // read everything from reader and put in List<Foo>
Expect.ContainSameFoos(expectedFoos, readerResult); // compare the two lists
A: Kokos,
Couple of things wrong there. First, doing it that way means I have to construct the Foos first, then feed their values to the mock reader which does nothing to reduce the amount of code I'm writing. Second, if the values pass through the reader, the Foos won't be the same Foos (reference equality). They might be equal, but even that's assuming too much of the Foo class that I don't dare touch at this point.
A: Just to clarify, you want to be able to test your call into SQL Server returned some data, or that if you had some data you could map it back into the model?
If you want to test your call into SQL returned some data checkout my answer found here
A: @Toran: What I'm testing is the programmatic mapping from data returned from the database to quote-unquote domain model. Hence I want to mock out the database connection. For the other kind of test, I'd go for all-out integration testing.
@Dale: I guess you nailed it pretty well there, and I was afraid that might be the case. If you've got pointers to any articles or suchlike where someone has done the dirty job and decomposed it into more easily digestible steps, I'd appreciate it. Code samples wouldn't hurt either. I do have a clue on how to approach that problem, but before I actually dare do that, I'm going to need to get other things done, and if testing that will require tedious mocking, then that's what I'll do.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to send email from a program _without_ using a preexisting account? I'd like my program to be able to email me error reports. How can I do this without hard-coding a username/password/SMTP server/etc. into the code? (Doing so would allow users to decompile the program and take over this email account.)
I've been told you could do some stuff with telneting to port 25, but I'm very fuzzy on the details. Most of the code snippets on Google assume you have a preexisting account, which doesn't work in this situation.
I am using .NET v3.5 (C# in particular), but I would imagine the ideas are similar enough in most languages. As long as you realize I'm doing this for an offline app, and don't supply me with PHP code or something, we should be fine.
A: As long as your account is on gmail.com, set up gmail-smtp-in.l.google.com as the outgoing SMTP-server in your program. You do not need to provide a password to send email to gmail-accounts when using that server.
A: I would create a webservice to connect to. This webservice should send the email based on the data your program provide. All sensitive access-data is kept on the webservice side, so it's safer.
A: If the program has to email you directly, it has to get that information somehow, so a determined attacker could gain that information as well.
Have you considered hosting a simple http form or web service somewhere, so that you could post the information you need there from the application (no authentication required), and either save it to manually look at later, or send the email from that server?
A: I think the best plan would be to submit the error information to some service (in the simple case, a web form) running under your control, which could then send an email (or log it in some other appropriate way).
If sending the email is assumed to be of benefit to the end user, another option would be to have the user enter their own SMTP server (and username / password if required) - On Unix systems, you can possibly just use sendmail and rely on the user to have it configured correctly. I used to work on a system which used this approach to send the user reports of the system's scheduled tasks, and that worked quite well.
A: What you need to do is know the SMTP server of the ISP you are using, because when you use your own ISP's SMTP server you aren't considered to be doing mail-relay spam, so you can go through without any authentication.
Unfortunately, it's not always possible to get that information easily, OR there's some small exception which won't let you log in anonymously, like when you're using Exchange in a company...
At first I thought about encrypting your information and decrypting it at runtime, but it would still be available by sniffing the network traffic. You could however combine that with a webservice somewhere which would do the decryption and the mailing for you (or directly send the information to the web service, whichever you prefer), as already suggested by others.
A: MailMessage msg = new MailMessage("[email protected]", "[email protected]");
msg.Subject = "Check it out!";
msg.Body = "Visit stackoverflow.com!";
SmtpClient client = new SmtpClient("some.smtp.server", 25);
client.Send(msg);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Video Codec startcodes Does anybody know (or know of a resource that contains) a list of frame start codes for common video formats (MPEG-1/2/4, .wmv, .mov etc.).
For example, an MPEG-1 video frame will (I think) always start with "00 00 01 00".
In essence I'd like to know these so that I could write a program that can automatically find the start of frames throughout a video for a number of different video formats.
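For illustration, here is a minimal byte-scanning sketch in JavaScript. It assumes MPEG-1/2 style "00 00 01" start-code prefixes and that the file bytes are already in a Uint8Array, so treat it as a starting point only; container formats like .mov or .wmv don't use start codes this way.
function findStartCodes(data) {
    // Record every offset where the 3-byte prefix 00 00 01 occurs,
    // together with the byte that follows it (the start-code value).
    var offsets = [];
    for (var i = 0; i + 3 < data.length; i++) {
        if (data[i] === 0x00 && data[i + 1] === 0x00 && data[i + 2] === 0x01) {
            offsets.push({ offset: i, code: data[i + 3] });
        }
    }
    return offsets; // in MPEG-1, a code byte of 0x00 marks a picture (frame) start
}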
A: What do you want to achieve?
Is this a question about how video container types are structured?
See for example : http://www.daubnet.com/formats/AVI.html
That is a description how avi files are structured. Google may help you in finding other container file formats.
When you record a video, it is normally composed of individual frames; think of individual bitmap files in a directory.
To have just one file for a video, this stream of frames is put in a container, which has a header describing the contents and a layout in which the frames are stored sequentially in the file.
Simple example for my own container :
struct header
{
    unsigned int frametype;
    unsigned int framesize;
};

/* Returns a pointer to the start of frame frameNum, assuming
   fixed-size frames stored back-to-back after the header. */
byte* readFrame( header* pHdr, int frameNum )
{
    byte* pFirstFrame = ((byte*) pHdr) + sizeof( header );
    return pFirstFrame + frameNum * pHdr->framesize;
}
There are several other container types; AVI is only one of them.
To get to the individual frames you must interpret the header in the file and then based on that information calculate the position of the frame you want to parse.
I posted you a link to the definition of the avi file format. There are other places where you can get information on the mpeg/mkv/ogm file formats.
You need this information to get your program to work.
On a side note, compressed formats do not save all individual frames independently. They save an individual frame and then several intermediate frames, which only contain information on how the current frame differs from the last complete frame. So you cannot extract complete frames at every frame number.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Simple programming practice (Fizz Buzz, Print Primes) I want to practice my skills away from a keyboard (i.e. pen and paper) and I'm after simple practice questions like Fizz Buzz, Print the first N primes.
What are your favourite simple programming questions?
A: Problem:
Insert + or - sign anywhere between the digits 123456789 in such a way that the expression evaluates to 100. The condition is that the order of the digits must not be changed.
e.g.: 1 + 2 + 3 - 4 + 5 + 6 + 78 + 9 = 100
Programming Problem:
Write a program in your favorite language which outputs all possible solutions of the above problem.
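For reference, a small brute-force sketch in JavaScript; it simply tries '+', '-', or nothing between every pair of adjacent digits and evaluates each resulting expression:
var digits = "123456789";

function solve(index, expr) {
    if (index === digits.length) {
        // eval is fine here because we built the expression ourselves
        if (eval(expr) === 100) console.log(expr + " = 100");
        return;
    }
    var d = digits.charAt(index);
    solve(index + 1, expr + d);        // no sign: concatenate the digits
    solve(index + 1, expr + "+" + d);  // insert a plus
    solve(index + 1, expr + "-" + d);  // insert a minus
}

solve(1, "1"); // prints every solution, including 1+2+3-4+5+6+78+9 = 100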
A: If you want pen and paper kinds of exercises, I'd recommend more designing than coding.
Actually, coding on paper sucks and teaches you almost nothing. The work environment matters: typing on a computer, compiling, seeing what errors you've made, refactoring here and there, just doesn't compare to working on a piece of paper. So what you can do on a piece of paper, while an interesting mental exercise, is not practical and will not improve your coding skills much.
On the other hand, you can design the architecture of a medium or even complex application by hand on paper. In fact, I usually do. Engineering tools (such as Enterprise Architect) are not good enough to replace the good old by-hand diagrams.
Good projects could be: How would you design a game engine? Classes, threads, storage, physics, the data structures which will hold everything, and so on. How would you start a search engine? How would you design a pattern recognition system?
I find that kind of problems much more rewarding than any paper coding you can do.
A: There are some good examples of simple-ish programming questions in Steve Yegge's article Five Essential Phone Screen Questions (under Area Number One: Coding). I find these are pretty good for doing on pen and paper. Also, the questions under OOP Design in the same article can be done on pen and paper (or even in your head) and are, I think, good exercises to do.
A: I've been working on http://projecteuler.net/
A: Towers of Hanoi is great for practicing recursion (a short recursive sketch follows below).
I'd also do a search on sample programming interview questions.
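As an aside, the classic recursive solution mentioned above is only a few lines, for example in JavaScript:
function hanoi(n, from, to, via) {
    if (n === 0) return;
    hanoi(n - 1, from, via, to); // move the n-1 smaller disks out of the way
    console.log("move disk " + n + " from " + from + " to " + to);
    hanoi(n - 1, via, to, from); // stack them back on top of disk n
}

hanoi(3, "A", "C", "B"); // 2^3 - 1 = 7 moves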
A: Quite a few online sites for competitive programming are full of sample questions/challenges, sorted by 'difficulty'. Quite often, the simpler categories in the 'algorithms' questions would suit you, I think.
For example, check out TopCoder (algorithms section)!
Apart from that, 2 samples:
*
*You are given a list of N points in the plane by their coordinates (x_i, y_i), and a number R>0. Output the maximum number out of the N given points that can be simultaneously covered by a disk of radius R (for bonus points: complexity?).
*You are given an array of N numbers a1 to aN, and you want to compute a1 * a2 * ... * aN / ai for all values of i (so the output is again an array of N elements) without using division. Provide a (non-naive) method (complexity should be in O(N) multiplications). A sketch of one solution follows below.
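For what it's worth, here is a minimal JavaScript sketch of the usual prefix/suffix-product approach to the second sample (one of several possible solutions):
function productsExceptSelf(a) {
    var n = a.length;
    var result = new Array(n);

    var prefix = 1;
    for (var i = 0; i < n; i++) {
        result[i] = prefix; // product of everything left of i
        prefix *= a[i];
    }

    var suffix = 1;
    for (var i = n - 1; i >= 0; i--) {
        result[i] *= suffix; // times product of everything right of i
        suffix *= a[i];
    }
    return result; // O(N) multiplications, no division
}

console.log(productsExceptSelf([1, 2, 3, 4])); // [24, 12, 8, 6]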
A: I also like Project Euler, but I would like to point out that the questions get really tricky really fast. After the first 20 questions or so, they start to be problems most people won't be able to figure out in half an hour. Another problem is that a lot of them deal with math on really large numbers that don't fit into standard integer or even long variable types.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to compare an html entity with jQuery I have the following html code:
<h3 id="headerid"><span onclick="expandCollapse('headerid')">⇑</span>Header title</h3>
I would like to toggle between up arrow and down arrow each time the user clicks the span tag.
function expandCollapse(id) {
var arrow = $("#"+id+" span").html(); // I have tried with .text() too
if(arrow == "⇓") {
$("#"+id+" span").html("⇑");
} else {
$("#"+id+" span").html("⇓");
}
}
My function always takes the else path. If I alert the arrow variable, I get the HTML entity rendered as an arrow. How can I tell jQuery to interpret the arrow variable as a string and not as HTML?
A: If you do an alert of arrow what does it return? Does it return the exact string that you're matching against? If you are getting the actual characters '⇓' and '⇑' you may have to match it against "\u21D1" and "\u21D3".
Also, you may want to try ⇑ and ⇓ since not all browsers support those entities.
Update: here's a fully working example:
http://jsbin.com/edogop/3/edit#html,live
window.expandCollapse = function (id) {
var $arrowSpan = $("#" + id + " span"),
arrowCharCode = $arrowSpan.text().charCodeAt(0);
// 8659 is the unicode value of the html entity
if (arrowCharCode === 8659) {
$arrowSpan.html("⇑");
} else {
$arrowSpan.html("⇓");
}
// one liner:
//$("#" + id + " span").html( ($("#" + id + " span").text().charCodeAt(0) === 8659) ? "⇑" : "⇓" );
};
A: When the HTML is parsed, what jQuery sees in the DOM is an UPWARDS DOUBLE ARROW ("⇑"), not the entity reference. Thus, in your Javascript code you should test for "⇑" or "\u21d1". Also, you need to change what you're switching to:
function expandCollapse(id) {
var arrow = $("#"+id+" span").html();
if(arrow == "\u21d1") {
$("#"+id+" span").html("\u21d3");
} else {
$("#"+id+" span").html("\u21d1");
}
}
A: Check out the .toggle() effect.
Here is something similar i was playing with earlier.
HTML:
<div id="inplace">
<div id="myStatic">Hello World!</div>
<div id="myEdit" style="display: none">
<input id="myNewTxt" type="text" />
<input id="myOk" type="button" value="OK" />
<input id="myX" type="button" value="X" />
</div></div>
SCRIPT:
$("#myStatic").bind("click", function(){
$("#myNewTxt").val($("#myStatic").text());
$("#myStatic,#myEdit").toggle();
});
$("#myOk").click(function(){
$("#myStatic").text($("#myNewTxt").val());
$("#myStatic,#myEdit").toggle();
});
$("#myX").click(function(){
$("#myStatic,#myEdit").toggle();
});
A: Use a class to signal the current state of the span.
The html could look like this
<h3 id="headerId"><span class="upArrow">⇑</span>Header title</h3>
Then in the javascript you do
$( '.upArrow, .downArrow' ).click( function() {
  var span = $( this ); // inside the handler, this is the clicked span
  if ( span.hasClass( 'upArrow' ) )
    span.text( "⇓" );
  else
    span.text( "⇑" );
  span.toggleClass( 'upArrow' );
  span.toggleClass( 'downArrow' );
} );
This may not be the best way, but it should work. I didn't test it though.
A: Maybe you're not getting an exact match because the browser is lower-casing the entity or something. Try using a caret (^) and lower-case "v" just for testing.
Edited - My first theory was plain wrong.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Can you use LINQ tools such as SQLMetal with an access database? I'm creating a small database application to teach myself the following concepts
*
*C# programming
*.Net 3.5 framework
*WPF
*LINQ ORM
I want to use Microsoft Access as the database but I can't seem to find any mention of whether its possible to use SQLMetal to generate the ORM code from a Microsoft Access database.
Does anyone know if this is possible?
If not, are there any small database or embedded databases I could use? I think SQL express would be overkill for me at this point.
A: For an embedded database, you can use SQL Server Compact Edition. Unlike SQL Server Express, it is not compatible with the LINQ to SQL designer, but it is fully compatible with the command-line SQLMetal. It has a few advantages over SQL Express, like the ability to run embedded or from a file. Microsoft has a very handy chart outlining the differences between Express and Compact.
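As a usage note, generating the data classes from a Compact Edition file looks roughly like the line below. The exact switches are from memory, so treat them as an assumption and check sqlmetal /? before relying on them:
sqlmetal /code:MyData.cs /language:csharp /namespace:MyApp MyDatabase.sdf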
A: I don't think SQL Express would be overkill if you want to learn real-world skills - quite the opposite in fact! That'd be my choice, and whatever I chose, I'd steer clear of Access.
Good luck
A: AFAIK, Linq to SQL is MSSQL server provider specific. To be honest, SQL Express is pretty lightweight on todays machines.
BTW don't confuse LINQ with LINQ to SQL. LINQ is the underlying technology that provides query-like support to .NET (amongst other things), whereas L2S is effectively a data access technology built on top of LINQ. Vanilla LINQ will work with any ADO.NET provider, which Access of course is.
Entity Framework will work with any compatible provider also but if SQLExpress is too heavy for you then I wouldn't recommend going down this path...
A: Thanks for all the responses. I never expected to get an answer this quick. For my test application I think SQL Server Compact Edition would be the way to go. I'm basically creating a money management app similar to Microsoft Money, and although it is an exercise to learn skills, I would eventually want to use it to manage my finances (provided it's not too crap!)
This is why I thought a fully blown database would be overkill.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I fire an event when a iframe has finished loading in jQuery? I have to load a PDF within a page.
Ideally I would like to have a loading animated gif which is replaced once the PDF has loaded.
A: This did it for me (not pdf, but another "onload resistant" content):
<iframe id="frameid" src="page.aspx"></iframe>
<script language="javascript">
iframe = document.getElementById("frameid");
WaitForIFrame();
function WaitForIFrame() {
if (iframe.readyState != "complete") {
setTimeout("WaitForIFrame();", 200);
} else {
done();
}
}
function done() {
//some code after iframe has been loaded
}
</script>
Hope this helps.
A: I tried this and it seems to be working for me:
http://jsfiddle.net/aamir/BXe8C/
Bigger pdf file:
http://jsfiddle.net/aamir/BXe8C/1/
A: $("#iFrameId").ready(function (){
// do something once the iframe is loaded
});
have you tried .ready instead?
A: I tried an out-of-the-box approach to this. I haven't tested it for PDF content, but it did work for normal HTML-based content. Here's how:
Step 1: Wrap your Iframe in a div wrapper
Step 2: Add a background image to your div wrapper:
.wrapperdiv{
background-image:url(img/loading.gif);
background-repeat:no-repeat;
background-position:center center; /*Can place your loader where ever you like */
}
Step 3: in your iframe tag add ALLOWTRANSPARENCY="false"
The idea is to show the loading animation in the wrapper div until the iframe loads; once it has loaded, the iframe will cover the loading animation.
Give it a try.
A: Using both jQuery load and ready, neither seemed to really match when the iframe was TRULY ready.
I ended up doing something like this
$('#iframe').ready(function () {
    $("#loader").fadeOut(2500, function () {
        $(this).remove(); // inside the complete callback, this is the loader element
    });
});
Where #loader is an absolutely positioned div on top of the iframe with a spinner gif.
A: I'm pretty certain that it cannot be done.
Pretty much anything other than PDF works, even Flash. (Tested on Safari, Firefox 3, IE 7)
Too bad.
A: Have you tried:
$("#iFrameId").on("load", function () {
// do something once the iframe is loaded
});
A: @Alex aw that's a bummer. What if in your iframe you had an html document that looked like:
<html>
<head>
<meta http-equiv="refresh" content="0;url=/pdfs/somepdf.pdf" />
</head>
<body>
</body>
</html>
Definitely a hack, but it might work for Firefox. Although I wonder if the load event would fire too soon in that case.
A: I had to show a loader while the pdf in an iFrame was loading, so here is what I came up with:
loader({href:'loader.gif', onComplete: function(){
$('#pd').html('<iframe onLoad="loader.close();" src="pdf" width="720px" height="600px" >Please wait... your report is loading..</iframe>');
}
});
I'm showing a loader. Once I'm sure the customer can see my loader, I call the loader's onComplete method, which loads an iframe. The iframe has an "onLoad" event; once the PDF is loaded it triggers the onLoad event, where I hide the loader :)
The important part:
The iFrame has an "onLoad" event where you can do what you need (hide loaders, etc.)
A:
function frameLoaded(element) {
alert('LOADED');
};
<iframe src="https://google.com" title="W3Schools Free Online Web Tutorials" onload="frameLoaded(this)"></iframe>
A: Here is what I do for any action and it works in Firefox, IE, Opera, and Safari.
<script type="text/javascript">
$(document).ready(function(){
doMethod();
});
function actionIframe(iframe)
{
... do what ever ...
}
function doMethod()
{
var iFrames = document.getElementsByTagName('iframe');
// what ever action you want.
function iAction()
{
// Iterate through all iframes in the page.
for (var i = 0, j = iFrames.length; i < j; i++)
{
actionIframe(iFrames[i]);
}
}
// Check if browser is Safari or Opera.
if ($.browser.safari || $.browser.opera)
{
// Start timer when loaded.
$('iframe').load(function()
{
setTimeout(iAction, 0);
}
);
// Safari and Opera need something to force a load.
for (var i = 0, j = iFrames.length; i < j; i++)
{
var iSource = iFrames[i].src;
iFrames[i].src = '';
iFrames[i].src = iSource;
}
}
else
{
// For other good browsers.
$('iframe').load(function()
{
actionIframe(this);
}
);
}
}
</script>
A: If you can expect the browser's open/save interface to pop up for the user once the download is complete, then you can run this when you start the download:
$( document ).blur( function () {
// Your code here...
});
When the dialogue pops up on top of the page, the blur event will trigger.
A: Since the iframe document will have a new DOM element <embed/> after the pdf file has loaded, we can do the check like this:
window.onload = function () {
//creating an iframe element
var ifr = document.createElement('iframe');
document.body.appendChild(ifr);
// making the iframe fill the viewport
ifr.width = '100%';
ifr.height = window.innerHeight;
// continuously checking to see if the pdf file has been loaded
self.interval = setInterval(function () {
if (ifr && ifr.contentDocument && ifr.contentDocument.readyState === 'complete' && ifr.contentDocument.embeds && ifr.contentDocument.embeds.length > 0) {
clearInterval(self.interval);
console.log("loaded");
//You can do print here: ifr.contentWindow.print();
}
}, 100);
ifr.src = src;
}
A: The solution I have applied to this situation is to simply place an absolutely positioned loading image in the DOM, which will be covered by the iframe layer after the iframe is loaded.
The z-index of the iframe should be (loading's z-index + 1), or just higher.
For example:
.loading-image { position: absolute; z-index: 0; }
.iframe-element { position: relative; z-index: 1; }
Hope this helps if no JavaScript solution did. I do think that CSS is best practice for these situations.
Best regards.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "99"
} |
Q: How do I select an XML-node based on its content? How can I use XPath to select an XML-node based on its content?
If I e.g. have the following xml and I want to select the <author>-node that contains Ritchie to get the author's full name:
<books>
<book isbn='0131103628'>
<title>The C Programming Language</title>
<authors>
<author>Ritchie, Dennis M.</author>
<author>Kernighan, Brian W.</author>
</authors>
</book>
<book isbn='1590593898'>
<title>Joel on Software</title>
<authors>
<author>Spolsky, Joel</author>
</authors>
</book>
</books>
A: The XPath for this is:
/books/book/authors/author[contains(., 'Ritchie')]
In C# the following code would return "Ritchie, Dennis M.":
xmlDoc.SelectSingleNode("/books/book/authors/author[contains(., 'Ritchie')]").InnerText;
A: //author[contains(text(), 'Ritchie')]
A: /books/book/authors/author[contains(., 'Ritchie')]
or
//author[contains(., 'Ritchie')]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Features common to all regex flavors? I've seen a lot of commonality in regex capabilities of different regex-enabled tools/languages (e.g. perl, sed, java, vim, etc), but I've also many differences.
Is there a standard subset of regex capabilities that all regex-enabled tools/languages will support? How do regex capabilities vary between tools/languages?
A: http://en.wikipedia.org/wiki/Comparison_of_regular_expression_engines
Even more detailed: http://www.regular-expressions.info/refflavors.html
A: Compare Regular Expression Flavors
http://www.regular-expressions.info/refflavors.html
A: If you took the grep regexp grammar (not the egrep one), or the sed regexp grammar, and used that, you should be using a safe subset across many platforms and tools.
About the only thing that may bite you then is when you shift between regexp implementations using Finite State Automata (FSA) and ones using backtracking, e.g. quantifier implementations will vary from grep to Perl.
FSA-based implementations will find the longest match starting at the first possible position. Backtracking ones will find the left-biased first match, starting at the first possible position. That is, they will try each branch in the order given in the pattern until a match is found.
Consider the string "xyxyxyzz", and the pattern "(xy)*(xyz)?". FSA based engines will match the longest possible substring, "xyxyxyz". Back-tracking based engines will match the left-biased first substring, "xyxyxy".
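Since JavaScript's engine is backtracking-based, the left-biased behaviour described above is easy to see directly (a quick sketch):
// (xy)* greedily consumes "xyxyxy"; (xyz)? is then satisfied by the
// empty match, so the engine never retries with fewer (xy) repetitions.
var m = "xyxyxyzz".match(/(xy)*(xyz)?/);
console.log(m[0]); // "xyxyxy", not the longest possible match "xyxyxyz"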
A: Most regular expression tools/languages support these basic capabilities (a combined example follows after the lists):
*
*Character Classes/Sets and their Negation - []
*Anchors - ^$
*Alternation - |
*Quantifiers - ?+*{n,m}
*Metacharacters - \w, \s, \d, ...
*Backreferences - \1, \2, ...
*Dot - .
*Simple modifiers like /g and /i for global and ignore case
*Escaping Characters
More advanced tools/languages support:
*
*Lookaheads and behinds
*POSIX character classes
*Word boundaries
*Inline Switches like allowing case insensitivity for only a small section of the regex
*Modifiers like /x to allow extra formatting and comments, /m for multiline
*Named Captures
*Unicode
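To illustrate, here is a single JavaScript pattern that leans only on the widely supported basics from the first list (anchors, alternation, a quantifier, a character class, a metacharacter, and a backreference):
// ^ and $ anchors, (cat|dog) alternation, s? quantifier, \d+ metacharacter,
// [a-z]+ character class, and \1 backreference to group 1.
var re = /^(cat|dog)s? ate (\d+) [a-z]+ and said "\1"$/;
console.log(re.test('dogs ate 3 biscuits and said "dog"')); // true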
A: There's no standard engine. However, the POSIX Extended Regular Expression format is a valid subset of most engines and is probably as close as you'll get to a standardised subset.
A: See emacs's regular expression syntax: http://www.gnu.org/software/emacs/manual/html_node/emacs/Regexps.html#Regexps.
I recall reading that emacs's syntax is set in stone (for backwards compatibility reasons), so if you want to be compatible with everything, make everything compatible with this. Some tools might support it, others might not.
While you have a worthy goal, I think it'll be exceedingly difficult to reach, and I've also found emacs's regexps a pain to work with. Maybe 99% of everything is good enough if it makes you happier and more productive?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: JavaScript and Threads Is there some way to do multi-threading in JavaScript?
A: Different way to do multi-threading and Asynchronous in JavaScript
Before HTML5 JavaScript only allowed the execution of one thread per page.
There were some hacky ways to simulate asynchronous execution with yield, setTimeout(), setInterval(), XMLHttpRequest or event handlers (see the end of this post for an example with yield and setTimeout()).
But with HTML5 we can now use Worker Threads to parallelize the execution of functions. Here is an example of use.
Real multi-threading
Multi-threading: JavaScript Worker Threads
HTML5 introduced Web Worker Threads (see: browsers compatibilities)
Note: IE9 and earlier versions do not support it.
These worker threads are JavaScript threads that run in background without affecting the performance of the page. For more information about Web Worker read the documentation or this tutorial.
Here is a simple example with 3 Web Worker threads that count to MAX_VALUE and show the current computed value in our page:
//As a worker normally take another JavaScript file to execute we convert the function in an URL: http://stackoverflow.com/a/16799132/2576706
function getScriptPath(foo){ return window.URL.createObjectURL(new Blob([foo.toString().match(/^\s*function\s*\(\s*\)\s*\{(([\s\S](?!\}$))*[\s\S])/)[1]],{type:'text/javascript'})); }
var MAX_VALUE = 10000;
/*
* Here are the workers
*/
//Worker 1
var worker1 = new Worker(getScriptPath(function(){
self.addEventListener('message', function(e) {
var value = 0;
while(value <= e.data){
self.postMessage(value);
value++;
}
}, false);
}));
//We add a listener to the worker to get the response and show it in the page
worker1.addEventListener('message', function(e) {
document.getElementById("result1").innerHTML = e.data;
}, false);
//Worker 2
var worker2 = new Worker(getScriptPath(function(){
self.addEventListener('message', function(e) {
var value = 0;
while(value <= e.data){
self.postMessage(value);
value++;
}
}, false);
}));
worker2.addEventListener('message', function(e) {
document.getElementById("result2").innerHTML = e.data;
}, false);
//Worker 3
var worker3 = new Worker(getScriptPath(function(){
self.addEventListener('message', function(e) {
var value = 0;
while(value <= e.data){
self.postMessage(value);
value++;
}
}, false);
}));
worker3.addEventListener('message', function(e) {
document.getElementById("result3").innerHTML = e.data;
}, false);
// Start and send data to our worker.
worker1.postMessage(MAX_VALUE);
worker2.postMessage(MAX_VALUE);
worker3.postMessage(MAX_VALUE);
<div id="result1"></div>
<div id="result2"></div>
<div id="result3"></div>
We can see that the three threads are executed in concurrency and print their current value in the page. They don't freeze the page because they are executed in the background with separated threads.
Multi-threading: with multiple iframes
Another way to achieve this is to use multiple iframes, each one executing a thread. We can give the iframe some parameters via the URL, and the iframe can communicate with its parent in order to get the result and print it back (the iframe must be in the same domain).
This example doesn't work in all browsers! iframes usually run in the same thread/process as the main page (but Firefox and Chromium seem to handle it differently).
Since the code snippet does not support multiple HTML files, I will just provide the different codes here:
index.html:
//The 3 iframes containing the code (take the thread id in param)
<iframe id="threadFrame1" src="thread.html?id=1"></iframe>
<iframe id="threadFrame2" src="thread.html?id=2"></iframe>
<iframe id="threadFrame3" src="thread.html?id=3"></iframe>
//Divs that shows the result
<div id="result1"></div>
<div id="result2"></div>
<div id="result3"></div>
<script>
//This function is called by each iframe
function threadResult(threadId, result) {
document.getElementById("result" + threadId).innerHTML = result;
}
</script>
thread.html:
//Get the parameters in the URL: http://stackoverflow.com/a/1099670/2576706
function getQueryParams(paramName) {
var qs = document.location.search.split('+').join(' ');
var params = {}, tokens, re = /[?&]?([^=]+)=([^&]*)/g;
while (tokens = re.exec(qs)) {
params[decodeURIComponent(tokens[1])] = decodeURIComponent(tokens[2]);
}
return params[paramName];
}
//The thread code (get the id from the URL, we can pass other parameters as needed)
var MAX_VALUE = 100000;
(function thread() {
var threadId = getQueryParams('id');
for(var i=0; i<MAX_VALUE; i++){
parent.threadResult(threadId, i);
}
})();
Simulate multi-threading
Single-thread: emulate JavaScript concurrency with setTimeout()
The 'naive' way would be to execute setTimeout() calls one after the other like this:
setTimeout(function(){ /* Some tasks */ }, 0);
setTimeout(function(){ /* Some tasks */ }, 0);
[...]
But this method does not work because each task will be executed one after the other.
We can simulate asynchronous execution by calling the function recursively like this:
var MAX_VALUE = 10000;
function thread1(value, maxValue){
var me = this;
document.getElementById("result1").innerHTML = value;
value++;
//Continue execution
if(value<=maxValue)
setTimeout(function () { me.thread1(value, maxValue); }, 0);
}
function thread2(value, maxValue){
var me = this;
document.getElementById("result2").innerHTML = value;
value++;
if(value<=maxValue)
setTimeout(function () { me.thread2(value, maxValue); }, 0);
}
function thread3(value, maxValue){
var me = this;
document.getElementById("result3").innerHTML = value;
value++;
if(value<=maxValue)
setTimeout(function () { me.thread3(value, maxValue); }, 0);
}
thread1(0, MAX_VALUE);
thread2(0, MAX_VALUE);
thread3(0, MAX_VALUE);
<div id="result1"></div>
<div id="result2"></div>
<div id="result3"></div>
As you can see this second method is very slow and freezes the browser because it uses the main thread to execute the functions.
Single-thread: emulate JavaScript concurrency with yield
Yield is a new feature in ECMAScript 6; it only works in recent versions of Firefox and Chrome (in Chrome you need to enable Experimental JavaScript, which appears in chrome://flags/#enable-javascript-harmony).
The yield keyword causes generator function execution to pause and the value of the expression following the yield keyword is returned to the generator's caller. It can be thought of as a generator-based version of the return keyword.
A generator allows you to suspend execution of a function and resume it later. A generator can be used to schedule your functions with a technique called trampolining.
Here is the example:
var MAX_VALUE = 10000;
Scheduler = {
_tasks: [],
add: function(func){
this._tasks.push(func);
},
start: function(){
var tasks = this._tasks;
var length = tasks.length;
while(length>0){
for(var i=0; i<length; i++){
var res = tasks[i].next();
if(res.done){
tasks.splice(i, 1);
length--;
i--;
}
}
}
}
}
function* updateUI(threadID, maxValue) {
var value = 0;
while(value<=maxValue){
yield document.getElementById("result" + threadID).innerHTML = value;
value++;
}
}
Scheduler.add(updateUI(1, MAX_VALUE));
Scheduler.add(updateUI(2, MAX_VALUE));
Scheduler.add(updateUI(3, MAX_VALUE));
Scheduler.start()
<div id="result1"></div>
<div id="result2"></div>
<div id="result3"></div>
A: Here is just a way to simulate multi-threading in Javascript
Now I am going to create 3 threads which will calculate the sum of all numbers, count the numbers divisible by 13, and count the numbers divisible by 3, up to 10000000000. These 3 functions are not able to truly run at the same time, which is what concurrency would mean. But I will show you a trick that will make these functions run recursively at seemingly the same time: jsFiddle
This code belongs to me.
Body Part
<div class="div1">
<input type="button" value="start/stop" onclick="_thread1.control ? _thread1.stop() : _thread1.start();" /><span>Counting summation of numbers till 10000000000</span> = <span id="1">0</span>
</div>
<div class="div2">
<input type="button" value="start/stop" onclick="_thread2.control ? _thread2.stop() : _thread2.start();" /><span>Counting numbers can be divided with 13 till 10000000000</span> = <span id="2">0</span>
</div>
<div class="div3">
<input type="button" value="start/stop" onclick="_thread3.control ? _thread3.stop() : _thread3.start();" /><span>Counting numbers can be divided with 3 till 10000000000</span> = <span id="3">0</span>
</div>
Javascript Part
var _thread1 = {//This is my thread as object
control: false,//this is my control that will be used for start stop
value: 0, //stores my result
current: 0, //stores current number
func: function () { //this is my func that will run
if (this.control) { // checking for control to run
if (this.current < 10000000000) {
this.value += this.current;
document.getElementById("1").innerHTML = this.value;
this.current++;
}
}
setTimeout(function () { // And here is the trick! setTimeout is the key that lets us simulate threading in javascript
_thread1.func(); // You cannot use this.func(); call it through the object name instead
}, 0);
},
start: function () {
this.control = true; //start function
},
stop: function () {
this.control = false; //stop function
},
init: function () {
setTimeout(function () {
_thread1.func(); // the first call of our thread
}, 0)
}
};
var _thread2 = {
control: false,
value: 0,
current: 0,
func: function () {
if (this.control) {
if (this.current % 13 == 0) {
this.value++;
}
this.current++;
document.getElementById("2").innerHTML = this.value;
}
setTimeout(function () {
_thread2.func();
}, 0);
},
start: function () {
this.control = true;
},
stop: function () {
this.control = false;
},
init: function () {
setTimeout(function () {
_thread2.func();
}, 0)
}
};
var _thread3 = {
control: false,
value: 0,
current: 0,
func: function () {
if (this.control) {
if (this.current % 3 == 0) {
this.value++;
}
this.current++;
document.getElementById("3").innerHTML = this.value;
}
setTimeout(function () {
_thread3.func();
}, 0);
},
start: function () {
this.control = true;
},
stop: function () {
this.control = false;
},
init: function () {
setTimeout(function () {
_thread3.func();
}, 0)
}
};
_thread1.init();
_thread2.init();
_thread3.init();
I hope this way will be helpful.
A: You could use Narrative JavaScript, a compiler that will transform your code into a state machine, effectively allowing you to emulate threading. It does so by adding a "yielding" operator (notated as '->') to the language that allows you to write asynchronous code in a single, linear code block.
A: The new V8 engine, which should come out today, supports it (I think).
A: In raw Javascript, the best that you can do is using the few asynchronous calls (xmlhttprequest), but that's not really threading and very limited. Google Gears adds a number of APIs to the browser, some of which can be used for threading support.
A: If you can't or don't want to use any AJAX stuff, use an iframe or ten! ;) You can have processes running in iframes in parallel with the master page without worrying about cross-browser compatibility issues or syntax issues with .NET AJAX etc., and you can call the master page's JavaScript (including the JavaScript that it has imported) from an iframe.
E.g., in a child iframe, to call egFunction() in the parent document once the iframe content has loaded (that's the asynchronous part):
parent.egFunction();
Dynamically generate the iframes too so the main html code is free from them if you want.
A: Another possible method is using an javascript interpreter in the javascript environment.
By creating multiple interpreters and controlling their execution from the main thread, you can simulate multi-threading with each thread running in its own environment.
The approach is somewhat similar to web workers, but you give the interpreter access to the browser global environment.
I made a small project to demonstrate this.
A more detailed explanation in this blog post.
A: With the HTML5 "side-specs" no need to hack javascript anymore with setTimeout(), setInterval(), etc.
HTML5 & Friends introduces the javascript Web Workers specification. It is an API for running scripts asynchronously and independently.
Links to the specification and a tutorial.
A: See http://caniuse.com/#search=worker for the most up-to-date support info.
The following was the state of support circa 2009.
The words you want to google for are JavaScript Worker Threads
Apart from from Gears there's nothing available right now, but there's plenty of talk about how to implement this so I guess watch this question as the answer will no doubt change in future.
Here's the relevant documentation for Gears: WorkerPool API
WHATWG has a Draft Recommendation for worker threads: Web Workers
And there's also Mozilla’s DOM Worker Threads
Update: June 2009, current state of browser support for JavaScript threads
Firefox 3.5 has web workers. Some demos of web workers, if you want to see them in action:
*
*Simulated Annealing ("Try it" link)
*Space Invaders (link at end of post)
*MoonBat JavaScript Benchmark (first link)
The Gears plugin can also be installed in Firefox.
Safari 4, and the WebKit nightlies have worker threads:
*
*JavaScript Ray Tracer
Chrome has Gears baked in, so it can do threads, although it requires a confirmation prompt from the user (and it uses a different API to web workers, although it will work in any browser with the Gears plugin installed):
*
*Google Gears WorkerPool Demo (not a good example as it runs too fast to test in Chrome and Firefox, although IE runs it slow enough to see it blocking interaction)
IE8 and IE9 can only do threads with the Gears plugin installed
A: There's no true threading in JavaScript. JavaScript, being the malleable language that it is, does allow you to emulate some of it. Here is an example I came across the other day.
A: There is no true multi-threading in Javascript, but you can get asynchronous behavior using setTimeout() and asynchronous AJAX requests.
What exactly are you trying to accomplish?
A: Javascript doesn't have threads, but we do have workers.
Workers may be a good choice if you don't need shared objects.
Most browser implementations will actually spread workers across all cores allowing you to utilize all cores. You can see a demo of this here.
I have developed a library called task.js that makes this very easy to do.
task.js Simplified interface for getting CPU intensive code to run on all cores (node.js, and web)
An example would be
function blocking (exampleArgument) {
// block thread
}
// turn blocking pure function into a worker task
const blockingAsync = task.wrap(blocking);
// run task on an autoscaling worker pool
blockingAsync('exampleArgumentValue').then(result => {
// do something with result
});
A: With the HTML5 specification you do not need to write too much JS for this or find hacks.
One of the features introduced in HTML5 is Web Workers: JavaScript running in the background, independently of other scripts, without affecting the performance of the page.
It is supported in almost all browsers:
Chrome - 4.0+
IE - 10.0+
Mozilla - 3.5+
Safari - 4.0+
Opera - 11.5+
A: Topaz is a lock-free multithreaded JavaScript engine for .NET: https://github.com/koculu/topaz
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "154"
} |
Q: Should DOM splitText and normalise compose to give the identity? I got embroiled in a discussion about DOM implementation quirks yesterday, with gave rise to an interesting question regarding Text.splitText and Element.normalise behaviours, and how they should behave.
In DOM Level 1 Core, Text.splitText is defined as...
Breaks this Text node into two Text nodes at the specified offset, keeping both in the tree as siblings. This node then only contains all the content up to the offset point. And a new Text node, which is inserted as the next sibling of this node, contains all the content at and after the offset point.
Normalise is...
Puts all Text nodes in the full depth of the sub-tree underneath this Element into a "normal" form where only markup (e.g., tags, comments, processing instructions, CDATA sections, and entity references) separates Text nodes, i.e., there are no adjacent Text nodes. This can be used to ensure that the DOM view of a document is the same as if it were saved and re-loaded, and is useful when operations (such as XPointer lookups) that depend on a particular document tree structure are to be used.
So, if I take a text node containing "Hello World", referenced in textNode, and do
textNode.splitText(3)
textNode now has the content "Hello", and a new sibling containing " World"
If I then
textNode.parent.normalize()
what is textNode? The specification doesn't make it clear that textNode has to still be a child of its previous parent, just updated to contain all adjacent text nodes (which are then removed). It seems to be a conformant behaviour to remove all the adjacent text nodes, and then recreate a new node with the concatenation of the values, leaving textNode pointing to something that is no longer part of the tree. Or, we can update textNode in the same fashion as in splitText, so it retains its tree position and gets a new value.
The choice of behaviour is really quite different, and I can't find a clarification on which is correct, or if this is simply an oversight in the specification (it doesn't seem to be clarified in levels 2 or 3). Can any DOM/XML gurus out there shed some light?
A: I was on the DOM Working Group in the early days; I'm sure we meant for textNode to contain the new joined value, but if we didn't say it in the spec, it's possible that some implementation might create a new node instead of reusing textNode, though that would require more work for the implementors.
When in doubt, program defensively.
A: While it would seem like a reasonable assumption, I agree that it is not explicitly made clear in the specification. All I can add is that the way I read it, one of either textNode or its new sibling (i.e. the return value from splitText) would contain the new joined value - the statement specifies that all nodes in the sub-tree are put in normal form, not that the sub-tree is normalised to a new structure. I guess the only safe thing is to keep a reference to the parent before normalising.
A: I think all bets are off here; I certainly wouldn't depend on any given behaviour. The only safe thing to do is to get the node from its parent again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How can I launch the Google Maps iPhone application from within my own native application? The Apple Developer Documentation (link is dead now) explains that if you place a link in a web page and then click it whilst using Mobile Safari on the iPhone, the Google Maps application that is provided as standard with the iPhone will launch.
How can I launch the same Google Maps application with a specific address from within my own native iPhone application (i.e. not a web page through Mobile Safari) in the same way that tapping an address in Contacts launches the map?
NOTE: THIS ONLY WORKS ON THE DEVICE ITSELF. NOT IN THE SIMULATOR.
A: If you are using iOS 10 then please don't forget to add the query schemes to your Info.plist
<key>LSApplicationQueriesSchemes</key>
<array>
<string>comgooglemaps</string>
</array>
If you are using Objective-C
if ([[UIApplication sharedApplication] canOpenURL: [NSURL URLWithString:@"comgooglemaps:"]]) {
NSString *urlString = [NSString stringWithFormat:@"comgooglemaps://?ll=%@,%@",destinationLatitude,destinationLongitude];
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:urlString]];
} else {
NSString *string = [NSString stringWithFormat:@"http://maps.google.com/maps?ll=%@,%@",destinationLatitude,destinationLongitude];
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:string]];
}
If you are using Swift 2.2
if UIApplication.sharedApplication().canOpenURL(NSURL(string: "comgooglemaps:")!) {
var urlString = "comgooglemaps://?ll=\(destinationLatitude),\(destinationLongitude)"
UIApplication.sharedApplication().openURL(NSURL(string: urlString)!)
}
else {
var string = "http://maps.google.com/maps?ll=\(destinationLatitude),\(destinationLongitude)"
UIApplication.sharedApplication().openURL(NSURL(string: string)!)
}
If you are using Swift 3.0
if UIApplication.shared.canOpenURL(URL(string: "comgooglemaps:")!) {
var urlString = "comgooglemaps://?ll=\(destinationLatitude),\(destinationLongitude)"
UIApplication.shared.openURL(URL(string: urlString)!)
}
else {
var string = "http://maps.google.com/maps?ll=\(destinationLatitude),\(destinationLongitude)"
UIApplication.shared.openURL(URL(string: string)!)
}
A: For iOS 5.1.1 and lower, use the openURL method of UIApplication. It will perform the normal iPhone magical URL reinterpretation, so
[someUIApplication openURL:[NSURL URLWithString:@"http://maps.google.com/maps?q=London"]]
should invoke the Google maps app.
From iOS 6, you'll be invoking Apple's own Maps app. For this, configure an MKMapItem object with the location you want to display, and then send it the openInMapsWithLaunchOptions message. To start at the current location, try:
[[MKMapItem mapItemForCurrentLocation] openInMapsWithLaunchOptions:nil];
You'll need to be linked against MapKit for this (and it will prompt for location access, I believe).
A: Exactly. The code that you need to achieve this is something like this:
UIApplication *app = [UIApplication sharedApplication];
[app openURL:[NSURL URLWithString: @"http://maps.google.com/maps?q=London"]];
since as per the documentation, UIApplication is only available in the Application Delegate unless you call sharedApplication.
A: To open Google Maps at specific co-ordinates, try this code:
NSString *latlong = @"-56.568545,1.256281";
NSString *url = [NSString stringWithFormat: @"http://maps.google.com/maps?ll=%@",
[latlong stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:url]];
You can replace the latlong string with the current location from CoreLocation.
You can also specify the zoom level, using the ("z") flag. Values are 1-19. Here's an example:
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://maps.google.com/maps?z=8"]];
A: For the phone question, are you testing on the simulator? This only works on the device itself.
Also, openURL returns a bool, which you can use to check if the device you're running on supports the functionality. For example, you can't make calls on an iPod Touch :-)
A: Just call this method and add Google Maps URL Scheme into your .plist file same as this Answer.
Swift-4 :-
func openMapApp(latitude:String, longitude:String, address:String) {
var myAddress:String = address
//For Apple Maps
let testURL2 = URL.init(string: "http://maps.apple.com/")
//For Google Maps
let testURL = URL.init(string: "comgooglemaps-x-callback://")
//For Google Maps
if UIApplication.shared.canOpenURL(testURL!) {
var direction:String = ""
myAddress = myAddress.replacingOccurrences(of: " ", with: "+")
direction = String(format: "comgooglemaps-x-callback://?daddr=%@,%@&x-success=sourceapp://?resume=true&x-source=AirApp", latitude, longitude)
let directionsURL = URL.init(string: direction)
if #available(iOS 10, *) {
UIApplication.shared.open(directionsURL!)
} else {
UIApplication.shared.openURL(directionsURL!)
}
}
//For Apple Maps
else if UIApplication.shared.canOpenURL(testURL2!) {
var direction:String = ""
myAddress = myAddress.replacingOccurrences(of: " ", with: "+")
var CurrentLocationLatitude:String = ""
var CurrentLocationLongitude:String = ""
if let latitude = USERDEFAULT.value(forKey: "CurrentLocationLatitude") as? Double {
CurrentLocationLatitude = "\(latitude)"
//print(myLatitude)
}
if let longitude = USERDEFAULT.value(forKey: "CurrentLocationLongitude") as? Double {
CurrentLocationLongitude = "\(longitude)"
//print(myLongitude)
}
direction = String(format: "http://maps.apple.com/?saddr=%@,%@&daddr=%@,%@", CurrentLocationLatitude, CurrentLocationLongitude, latitude, longitude)
let directionsURL = URL.init(string: direction)
if #available(iOS 10, *) {
UIApplication.shared.open(directionsURL!)
} else {
UIApplication.shared.openURL(directionsURL!)
}
}
//For SAFARI Browser
else {
var direction:String = ""
direction = String(format: "http://maps.google.com/maps?q=%@,%@", latitude, longitude)
direction = direction.replacingOccurrences(of: " ", with: "+")
let directionsURL = URL.init(string: direction)
if #available(iOS 10, *) {
UIApplication.shared.open(directionsURL!)
} else {
UIApplication.shared.openURL(directionsURL!)
}
}
}
Hope this is what you're looking for. If you have any concerns, get back to me. :)
A: There is also now the App Store Google Maps app, documented at https://developers.google.com/maps/documentation/ios/urlscheme
So you'd first check that it's installed:
[[UIApplication sharedApplication] canOpenURL:
[NSURL URLWithString:@"comgooglemaps://"]];
And then you can conditionally replace http://maps.google.com/maps?q= with comgooglemaps://?q=.
A: Here's the Apple URL Scheme Reference for Map Links: https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/MapLinks/MapLinks.html
The rules for creating a valid map link are as follows:
*
*The domain must be google.com and the subdomain must be maps or ditu.
*The path must be /, /maps, /local, or /m if the query contains site as the key and local as the value.
*The path cannot be /maps/*.
*All parameters must be supported. See Table 1 for list of supported parameters**.
*A parameter cannot be q=* if the value is a URL (so KML is not picked up).
*The parameters cannot include view=text or dirflg=r.
**See the link above for the list of supported parameters.
A: "g" change to "q"
[[UIApplication sharedApplication] openURL:[NSURL URLWithString: @"http://maps.google.com/maps?q=London"]]
A: If you're still having trouble, this video shows how to get "My Maps" from Google to show up on the iPhone -- you can then take the link and send it to anybody, and it works.
http://www.youtube.com/watch?v=Xo5tPjsFBX4
A: To get directions in Google Maps, use this API and pass the destination latitude and longitude:
NSString* addr = nil;
addr = [NSString stringWithFormat:@"http://maps.google.com/maps?daddr=%1.6f,%1.6f&saddr=Posizione attuale", destinationLat,destinationLong];
NSURL* url = [[NSURL alloc] initWithString:[addr stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
[[UIApplication sharedApplication] openURL:url];
A: If you need more flexibility than the Google URL format gives you, or you would like to embed a map in your application instead of launching the map app, there is an example found at https://sourceforge.net/projects/quickconnect. It will even supply you with the source code to do all of the embedding.
A: iPhone4 iOS 6.0.1 (10A523)
For both Safari & Chrome.
Both on the latest versions as of 2013-Jun-10.
The URL scheme below also works.
But in the case of Chrome it only works inside the page; it doesn't work from the address bar.
maps:q=GivenTitle@latitude,longitude
A: Getting Directions between 2 locations:
NSString *googleMapUrlString = [NSString stringWithFormat:@"http://maps.google.com/?saddr=%@,%@&daddr=%@,%@", @"30.7046", @"76.7179", @"30.4414", @"76.1617"];
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:googleMapUrlString]];
A: Working Code as on Swift 4:
Step 1 - Add the following to Info.plist
<key>LSApplicationQueriesSchemes</key>
<array>
<string>googlechromes</string>
<string>comgooglemaps</string>
</array>
Step 2 - Use the following code to show Google Maps
let destinationLatitude = "40.7128"
let destinationLongitude = "74.0060"
if UIApplication.shared.canOpenURL(URL(string: "comgooglemaps:")!) {
if let url = URL(string: "comgooglemaps://?ll=\(destinationLatitude),\(destinationLongitude)"), !url.absoluteString.isEmpty {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
}else{
if let url = URL(string: "http://maps.google.com/maps?ll=\(destinationLatitude),\(destinationLongitude)"), !url.absoluteString.isEmpty {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "70"
} |
Q: Boolean Field in Oracle Yesterday I wanted to add a boolean field to an Oracle table. However, there isn't actually a boolean data type in Oracle. Does anyone here know the best way to simulate a boolean? Googling the subject discovered several approaches
*
*Use an integer and just don't bother assigning anything other than 0 or 1 to it.
*Use a char field with 'Y' or 'N' as the only two values.
*Use an enum with the CHECK constraint.
Do experienced Oracle developers know which approach is preferred/canonical?
A: I found this link useful.
Here is the paragraph highlighting some of the pros/cons of each approach.
The most commonly seen design is to imitate the many Boolean-like
flags that Oracle's data dictionary views use, selecting 'Y' for true
and 'N' for false. However, to interact correctly with host
environments, such as JDBC, OCCI, and other programming environments,
it's better to select 0 for false and 1 for true so it can work
correctly with the getBoolean and setBoolean functions.
Basically they advocate method number 2, for efficiency's sake, using
*
*values of 0/1 (because of interoperability with JDBC's getBoolean() etc.) with a check constraint
*a type of CHAR (because it uses less space than NUMBER).
Their example:
create table tbool (bool char check (bool in (0,1)));
insert into tbool values(0);
insert into tbool values(1);
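On the application side, the 0/1 convention then maps straight onto a host-language boolean. A minimal, provider-agnostic ADO.NET sketch in C# (the helper class is illustrative, not from the linked article):
using System;
using System.Data;

static class FlagReader
{
    // Reads a NUMBER(1) column constrained to 0/1 as a bool.
    // Oracle NUMBER surfaces as decimal through ADO.NET, and
    // Convert.ToBoolean maps 0 to false and non-zero to true.
    public static bool GetFlag(IDataRecord record, string column)
    {
        return Convert.ToBoolean(record[column]);
    }
}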
A: Either 1/0 or Y/N with a check constraint on it. Either way is fine. I personally prefer 1/0, as I do a lot of work in Perl, and it makes it really easy to do Boolean operations on database fields.
If you want a really in-depth discussion of this question with one of Oracle's head honchos, check out what Tom Kyte has to say about this here
A: The database I did most of my work on used 'Y' / 'N' as booleans. With that implementation, you can pull off some tricks like:
*
*Count rows that are true:
SELECT SUM(CASE WHEN BOOLEAN_FLAG = 'Y' THEN 1 ELSE 0 END) FROM X
*When grouping rows, enforce "If one row is true, then all are true" logic:
SELECT MAX(BOOLEAN_FLAG) FROM Y
Conversely, use MIN to force the grouping false if one row is false.
A: A working example to implement the accepted answer by adding a "Boolean" column to an existing table in an oracle database (using number type):
ALTER TABLE my_table_name ADD (
my_new_boolean_column number(1) DEFAULT 0 NOT NULL
CONSTRAINT my_new_boolean_column CHECK (my_new_boolean_column in (1,0))
);
This creates a new column in my_table_name called my_new_boolean_column with default values of 0. The column will not accept NULL values and restricts the accepted values to either 0 or 1.
A: Oracle itself uses Y/N for Boolean values. For completeness, it should be noted that PL/SQL has a BOOLEAN type; it is only tables that do not.
If you are using the field to indicate whether the record needs to be processed or not you might consider using Y and NULL as the values. This makes for a very small (read fast) index that takes very little space.
A: To use the least amount of space you should use a CHAR field constrained to 'Y' or 'N'. Oracle doesn't support BOOLEAN, BIT, or TINYINT data types, so CHAR's one byte is as small as you can get.
A: The best option is 0 and 1 (as numbers - another answer suggests 0 and 1 as CHAR for space-efficiency but that's a bit too twisted for me), using NOT NULL and a check constraint to limit contents to those values. (If you need the column to be nullable, then it's not a boolean you're dealing with but an enumeration with three values...)
Advantages of 0/1:
*
*Language independent. 'Y' and 'N' would be fine if everyone used it. But they don't. In France they use 'O' and 'N' (I have seen this with my own eyes). I haven't programmed in Finland to see whether they use 'E' and 'K' there - no doubt they're smarter than that, but you can't be sure.
*Congruent with practice in widely-used programming languages (C, C++, Perl, Javascript)
*Plays better with the application layer e.g. Hibernate
*Leads to more succinct SQL, for example, to find out how many bananas are ready to eat: select sum(is_ripe) from bananas instead of select count(*) from bananas where is_ripe = 'Y' or even (yuk) select sum(case is_ripe when 'Y' then 1 else 0 end) from bananas
Advantages of 'Y'/'N':
*
*Takes up less space than 0/1
*It's what Oracle suggests, so might be what some people are more used to
Another poster suggested 'Y'/null for performance gains. If you've proven that you need the performance, then fair enough, but otherwise avoid since it makes querying less natural (some_column is null instead of some_column = 0) and in a left join you'll conflate falseness with nonexistent records.
A: In our databases we use an enum that ensures we pass it either TRUE or FALSE. If you do it either of the first two ways, it is too easy either to start adding new meaning to the integer without going through a proper design, or to end up with that char field having Y, y, N, n, T, t, F, f values and having to remember which section of code uses which table and which version of true it is using.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "149"
} |
Q: How to redirect all stderr in bash? I'm looking for a way to redirect all the stderr streams in interactive bash (ideally to its calling parent process).
I don't want to redirect stderr stream from each individual command, which I could do by appending 2> a_file to each command.
By default, these stderr streams are redirected to the stdout of an interactive bash. I would like to get them on the stderr of this interactive bash process in order to prevent my stdout from being polluted by error messages and be able to treat them separately.
Any ideas?
I still haven't found an answer... but maybe it's actually a tty parameter. Does anybody know something about the tty/interactive shell's responsibility for handling stderr?
A: You could launch a new bash process redirecting the stderr of that process:
$ bash -i 2> stderr.log
$
A: I find a good way is to surround the commands by parentheses, '()', (launch a sub-shell) or curly-braces, '{}' (no sub-shell; faster):
{
cmd1
cmd2
...
cmdN
} 2> error.log
Of course, this can be done on 1 line:
{ cmd1; cmd2; ... cmdN; } 2> error.log
A: Try your commands in doublequotes, like so:
ssh remotehost "command" 2>~/stderr
Tested on my local system using a nonexistent file on the remote host.
$ ssh remotehost "tail x;head x" 2>~/stderr
$ cat stderr
tail: cannot open `x' for reading: No such file or directory
head: cannot open `x' for reading: No such file or directory
A: I don't see your problem; it works as designed:
$ ssh remotehost 'ls nosuchfile; ls /etc/passwd' >/tmp/stdout 2>/tmp/stderr
$ cat /tmp/stdout
/etc/passwd
$ cat /tmp/stderr
nosuchfile not found
A: Two things:
*
*Using 2>&1 in a remote ssh command results in the error ending up inside the local tarfile, resulting in a 'broken' backup.
*If you want to apply a redirect on the other side of the ssh, remember to escape the redirect command.
My suggestion would be to redirect stderr on the remote side to a file and pick it up later, in case of an error.
example:
ssh -t remotehost tar -cf - /mnt/backup 2\>backup.err > localbackup.tar
EXITSTATUS=$?
if [ $EXITSTATUS != "0" ]; then
echo Error occurred!
ssh remotehost cat backup.err >localbackup.errors
cat localbackup.errors
ssh remotehost rm backup.err
else
echo Backup completed successfully!
ssh remotehost rm backup.err
fi
A: Use the exec builtin in bash:
exec 2> /tmp/myfile
A: Tried ssh -t to create a pseudo-TTY at the remote end?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Should I migrate to ASP.NET MVC? I just listened to the StackOverflow team's 17th podcast, and they talked so highly of ASP.NET MVC that I decided to check it out.
But first, I want to be sure it's worth it. I already created a base web application (for other developers to build on) for a project that's starting in a few days and wanted to know, based on your experience, if I should take the time to learn the basics of MVC and re-create the base web application with this model.
Are there really big pros that'd make it worthwhile?
EDIT: It's not an existing project, it's a project about to start, so if I'm going to do it it should be now...
I just found this
It does not, however, use the existing post-back model for interactions back to the server. Instead, you'll route all end-user interactions to a Controller class instead - which helps ensure clean separation of concerns and testability (it also means no viewstate or page lifecycle with MVC based views).
How would that work? No viewstate? No events?
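Roughly, each URL is routed to an action method on a controller class, which does the work and hands a model to a view for rendering. A minimal sketch of what that looks like (ProductRepository is a hypothetical data-access class, not part of the framework):
using System.Web.Mvc;

public class ProductsController : Controller
{
    // GET /products/detail/5 is routed straight to this method:
    // no postback, no viewstate, no page lifecycle events.
    public ActionResult Detail(int id)
    {
        var product = ProductRepository.Find(id); // hypothetical data access
        return View(product); // renders the Views/Products/Detail view
    }
}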
A: I would create a test site first, and see what the team thinks, but for me I wouldn't go back to WebForms after using MVC.
Some people don't like code mixed with HTML, and I can understand that, but I far prefer the flexibility: no fighting things like the page lifecycle, full control over the rendered HTML and - a biggie for me - no viewstate cruft embedded in the page source.
Some people prefer MVC for better testability, but personally most of my code is in the middle layer and easily tested anyway...
A: If you are quite happy with WebForms today, then maybe ASP.NET MVC isn't for you.
I have been frustrated with WebForms for a really long time. I'm definitely not alone here. The smart-client, stateful abstraction over the web breaks down severely in complex scenarios. I happen to love HTML, Javascript, and CSS. WebForms tries to hide that from me. It also has some really complex solutions to problems that are really not that complex. Webforms is also inherently difficult to test, and while you can use MVP, it's not a great solution for a web environment...(compared to MVC).
MVC will appeal to you if...
- you want more control over your HTML
- want a seamless ajax experience like every other platform has
- want testability through-and-through
- want meaningful URLs
- HATE dealing with postback & viewstate issues
And as for the framework being Preview 5, it is quite stable, the design is mostly there, and upgrading is not difficult. I started an app on Preview 1 and have upgraded within a few hours of the newest preview being available.
A: @Juan Manuel Did you ever work in classic ASP? When you had to program all of your own events and "viewstatish" items (like a dropdown recalling its selected value after form submission)?
If so, then ASP.NET MVC will not feel that awkward off the bat. I would check out Rob Conery's awesome series "MVC Storefront", where he has been walking through the framework and building each expected component for a storefront site. It's really impressive and easy to follow along with (catching up is tough because Rob has been really active and posted A LOT in that series).
Personally, and quite contrary to Jeff Atwood's feelings on the topic, I rather liked the webform model. It was totally different than the vbscript/classic ASP days for sure but keeping viewstate in check and writing your own CSS friendly controls was enjoyable, actually.
Then again, note that I said "liked". ASP.NET MVC is really awesome and more alike other web technologies out there. It certainly is easier to shift from ASP.NET MVC to RAILS if you like to or need to work on multiple platforms. And while, yes, it is very stable obviously (this very site), if your company disallows "beta" software of any color; implementing it into production at the this time might be an issue.
A: @Jonathan Holland I saw that you were voted down, but that is a VERY VALID point. I have been reading some posts around the intertubes where people seem to be confusing ASP.NET MVC the framework and MVC the pattern.
MVC in and of itself is a DESIGN PATTERN. If all you are looking for is a "separation of concerns", then you can certainly achieve that with webforms. Personally, I am a big fan of the MVP pattern in a standard n-tier environment.
If you really want TOTAL control of your mark-up in the ASP.NET world, then MVC the framework is for you.
A: If you are a professional ASP.NET developer, and have some time to spare on learning new stuff, I would certainly recommend that you spend some time trying out ASP.NET MVC. It may not be the solution to all your problems, and there are lots of projects that may benefit more from a traditional webform implementation, but while trying to figure out MVC you will certainly learn a lot, and it might bring up lots of ideas that you can apply on your job.
One good thing that I noticed while going through many blog posts and video tutorials while trying to develop a MVC pet-project is that most of them follow the current best practices (TDD, IoC, Dependency Injection, and to a lower extent POCO), plus a lot of JQuery to make the experience more interesting for the user, and that is stuff that I can apply on my current webform apps, and that I wasn't exposed in such depth before.
The ASP.NET MVC way of doing things is so different from webforms that it will shake up a bit your mind, and that for a developer is very good!
OTOH for a total beginner to web development I think MVC is definitely a better start because it offers a good design pattern out of the box and is closer to the way that the web really works (HTTP is stateless, after all). On MVC you decide on every byte that goes back and forth on the wire (at least while you don't go crazy on html helpers). Once the guy gets that, he or she will be better equipped to move to the "artificial" facilities provided by ASP.NET webforms and server controls.
A: If you like to use server controls which do a lot of work for you, you will NOT like MVC because you will need to do a lot of hand coding in MVC. If you like the GridView, expect to write one yourself or use someone else's.
MVC is not for everyone, especially if you're not into unit testing the GUI part. If you're comfortable with web forms, stay with it. Web Forms 4.0 will fix some of the current shortcomings, like the IDs which are automatically assigned by ASP.NET. You will have control of these in the next version.
A: Unless the developers you are working with are familiar with MVC pattern I wouldn't. At a minimum I'd talk with them first before making such a big change.
A: I'm trying to make that same decision about ASP.NET MVC, Juan Manuel. I'm now waiting for the right bite-sized project to come along with which I can experiment. If the experiment goes well--my gut says it will--then I'm going to architect my new large projects around the framework.
With ASP.NET MVC you lose the viewstate/postback model of ASP.NET Web Forms. Without that abstraction, you work much more closely with the HTML and the HTTP POST and GET commands. I believe the UI programming is somewhat in the direction of classic ASP.
With that inconvenience comes a greater degree of control. I've very often found myself fighting the pseudo-session garbage of ASP.NET, and the prospect of regaining complete control of the output HTML seems very refreshing.
It's perhaps either the best--or the worst--of both worlds.
A: 5 Reasons You Should Take a Closer Look at ASP.NET MVC
A: It's important to keep in mind that MVC and WebForms are not competing, and one is not better than the other. They are simply different tools. Most people seem to approach MVC vs WebForms as "one must be a better hammer than the other". That is wrong. One is a hammer, the other is a screwdriver. Both are used in the process of putting things together, but have different strengths and weaknesses.
If one left you with a bad taste, you were probably trying to use a screwdriver to pound a nail. Certain problems are cumbersome with WebForms that become elegant and simple with MVC, and vice-versa.
A:
"I don't know ASP.NET MVC, but I am very familiar with the MVC pattern. I don't see another way to build professional applications without MVC. And it has to be MVC model 2, like Spring or Struts. By the way, how were you people building web applications without MVC? When you have a situation where some kind of validation is necessary on every request, such as validating whether the user is authenticated, what is your solution? Some kind of include(validate.aspx) in every page?"
Have you never heard of N-Tier development?
A: Ajax, RAD (webforms with Ajax are very often anti-RAD), COMPLETE CONTROL (without developing a whole bunch of code and cycles). Webforms are good only for binding some grid and such, and not for anything else. And one more really important thing: performance. When you get stuck in the web forms hell you will switch to MVC sooner or later.
A: I have used ASP.NET MVC (I even wrote a HTTPModule that lets you define the routes in web.config), and I still get a bitter taste in my mouth about it.
It seems like a giant step backwards in organization and productivity. Maybe it's not for some, but I've got webforms figured out, and they present no challenge to me as far as making them maintainable.
That, and I don't endorse the current "TEST EVERYTHING" fad...
A: ASP.NET MVC basically allows you to separate the responsibility of different sections of the code. This enable you to test your application. You can test your Views, Routes etc. It also does speed up the application since now there is no ViewState or Postback.
BUT, there are also disadvantages. Since, you are no using WebForms you cannot use any ASP.NET control. It means if you want to create a GridView you will be running a for loop and create the table manually. If you want to use the ASP.NET Wizard in MVC then you will have to create on your own.
It is a nice framework if you are sick and tired of ASP.NET webform and want to perform everything on your own. But you need to keep in mind that would you benefit from creating all the stuff again or not?
In general I prefer Webforms framework due to the rich suite of controls and the automatic plumbing.
A: I wouldn't recommend just making the switch on an existing project. Perhaps start a small "demo" project that the team can use to experiment with the technology and (if necessary) learn what they need to and demonstrate to management that it is worthwhile to make the switch. In the end, even the dev team might realize they aren't ready or it's not worth it.
Whatever you do, be sure to document it. Perhaps if you use a demo project, write a postmortem for future reference.
A: I don't know ASP.NET MVC, but I am very familiar with the MVC pattern. I don't see another way to build professional applications without MVC. And it has to be MVC model 2, like Spring or Struts. By the way, how were you people building web applications without MVC? When you have a situation where some kind of validation is necessary on every request, such as validating whether the user is authenticated, what is your solution? Some kind of include(validate.aspx) in every page?
A: No, you shouldn't. Feel free to try it out on a new project, but a lot of people familiar with ASP.NET webforms aren't loving it yet, due to having to muck around with raw HTML + lots of different concepts + pretty slim pickings on documentation/tutorials.
A: Is the fact that ASP.NET MVC is only in 'Preview 5' a cause for concern when looking into it?
I know that StackOverflow was created using it, but is there a chance that Microsoft could implement significant changes to the framework before it is officially out of beta/alpha/preview release?
A: If you are dead set on using an MVC framework, then I would rather set out to use Castle project's one...
That said, I personally think WebControls have a lot of advantages, like for instance being able to create event-driven applications which have a stateful client and so on. Most of the arguments against WebControls are constructed because of a lack of understanding of the WebControl model etc., and not because they actually are truly bad...
MVC is not a Silver Bullet, especially not Microsoft MVC...
A: I have seen an implementation of an MVC framework where, for the sake of testability, someone rendered the whole HTML in code. In this case the view is also testable code. But I said, my friend, putting HTML in code is a maintenance nightmare, and he said, well, I like everything compiled and tested. I didn't argue, but later found that he did put this HTML into resource files and the craziness continued...
Little did he realized that the whole idea of separating View also solved the maintenance part. It outweighs the testability in some applications. We do not need to test the HTML design if we are using WYSWYG tool. WebForms are good for that reason.
I have often seen people abusing postback and viewstate and blaming it on the ASP .NET model.
Remember the best webpages are still the .HTMLs and that's where is the Power of ASP .NET MVC.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52"
} |
Q: Monitoring files - how to know when a file is complete We have several .NET applications that monitor a directory for new files, using FileSystemWatcher. The files are copied from another location, uploaded via FTP, etc. When they come in, the files are processed in one way or another. However, one problem that I have never seen a satisfactory answer for is: for large files, how does one know when the files being monitored are still being written to? Obviously, we need to wait until the files are complete and closed before we begin processing them. The event args in the FileSystemWatcher events do not seem to address this.
A: If you are in control on the program that is writing the files into the directory, you can have the program write the files to a temporary directory and then move them into the watched directory. The move should be an atomic operation, so the watcher shouldn't see the file until it is fully in the directory.
If you are not in control of what is writing to the watched directory, you can set a time in the watcher where the file is considered complete when it has remained the same size for the given time. If immediate processing isn't a concern, setting this timer to something relatively large is a fairly safe way to know that either the file is complete or it never will be.
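If the producer is .NET code as well, the write-then-move idea is only a few lines. A minimal sketch (the method name and parameters are illustrative):
using System.IO;

static void PublishFile(string stagingDir, string watchedDir, string name, string contents)
{
    // Write the file where the watcher can't see it...
    string temp = Path.Combine(stagingDir, name);
    File.WriteAllText(temp, contents);

    // ...then move it into the watched folder. On the same volume a
    // move is just a rename, so the watcher only ever sees a complete file.
    File.Move(temp, Path.Combine(watchedDir, name));
}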
A: The "Changed" event on the FileSystemWatcher should shouldn't fire until the file is closed. See my answer to a similar question. There is a possibility that the FTP download mechanism closes the file multiple times during download as new data comes in, but I would think that is a little unlikely.
A: Unless the contents of a file can be verified for completion (it has a verifiable format or includes a checksum of the contents) only the sender can verify that a whole file has arrived.
I have used a locking method for sending large files via FTP in the past.
File is sent with an alternative extension and is renamed once the sender is happy it is all there.
The above is obviously combined with a process which periodically tidies up old files with the temporary extension.
An alternative is to create a zero-length file with the same name but with an additional .lck extension. Once the real file is fully uploaded the .lck file is deleted. The receiving process obviously ignores files which have the name of a lock file.
Without a system like this the receiver can never be sure that the whole file has arrived.
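On the receiving side, honouring the lock-file convention is a one-line test before processing. A sketch, assuming the .lck naming described above:
using System.IO;

static bool IsReadyToProcess(string path)
{
    // Skip the lock files themselves, and skip any data file whose
    // companion .lck still exists: the sender isn't finished yet.
    return !path.EndsWith(".lck") && !File.Exists(path + ".lck");
}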
Checking for files that haven't been changed in x minutes is prone to all sorts of problems.
A: The following method tries to open a file with write permissions. It will block execution until a file is completely written to disk:
/// <summary>
/// Waits until a file can be opened with write permission
/// </summary>
public static void WaitReady(string fileName)
{
while (true)
{
try
{
using (System.IO.Stream stream = System.IO.File.Open(fileName, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
{
if (stream != null)
{
System.Diagnostics.Trace.WriteLine(string.Format("Output file {0} ready.", fileName));
break;
}
}
}
catch (FileNotFoundException ex)
{
System.Diagnostics.Trace.WriteLine(string.Format("Output file {0} not yet ready ({1})", fileName, ex.Message));
}
catch (IOException ex)
{
System.Diagnostics.Trace.WriteLine(string.Format("Output file {0} not yet ready ({1})", fileName, ex.Message));
}
catch (UnauthorizedAccessException ex)
{
System.Diagnostics.Trace.WriteLine(string.Format("Output file {0} not yet ready ({1})", fileName, ex.Message));
}
Thread.Sleep(500);
}
}
(from my answer to a related question)
A: You probably have to go with some out-of-band signaling: have the producer of "file.ext" write a dummy "file.ext.end".
A: Have you tried getting a write lock on the file? If it's being written to, that should fail, and you know to leave it alone for a bit...
A: +1 for using a file.ext.end signaler if possible, where the contents of file.ext.end is a checksum for the larger file. This isn't for security so much — if someone can insert their own file into the large stream they can replace the checksum as well. But it does help make sure nothing was garbled along the way.
A: The way I check in Windows if a file has been completely uploaded by ftp is to try to rename it. If renaming fails, the file isn't complete. Not very elegant, I admit, but it works.
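In C#, the same rename test can be wrapped in a helper. A sketch (the probe suffix is arbitrary):
using System.IO;

static bool IsUploadComplete(string path)
{
    string probe = path + ".probe";
    try
    {
        // File.Move throws an IOException while the FTP server
        // still holds the file open for writing.
        File.Move(path, probe);
        File.Move(probe, path);
        return true;
    }
    catch (IOException)
    {
        return false;
    }
}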
A: A write lock doesn't help if the file upload failed part way through and the sender hasn't tried resending (and relocking) the file yet.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: How to know if a line intersects a plane in C#? I have two points (a line segment) and a rectangle. I would like to know how to calculate if the line segment intersects the rectangle.
A: Apply http://mathworld.wolfram.com/Line-LineIntersection.html to the line and each side of the rectangle.
Or: http://mathworld.wolfram.com/Line-PlaneIntersection.html
A: From my "Geometry" class:
public struct Line
{
public static Line Empty;
private PointF p1;
private PointF p2;
public Line(PointF p1, PointF p2)
{
this.p1 = p1;
this.p2 = p2;
}
public PointF P1
{
get { return p1; }
set { p1 = value; }
}
public PointF P2
{
get { return p2; }
set { p2 = value; }
}
public float X1
{
get { return p1.X; }
set { p1.X = value; }
}
public float X2
{
get { return p2.X; }
set { p2.X = value; }
}
public float Y1
{
get { return p1.Y; }
set { p1.Y = value; }
}
public float Y2
{
get { return p2.Y; }
set { p2.Y = value; }
}
}
public struct Polygon: IEnumerable<PointF>
{
private PointF[] points;
public Polygon(PointF[] points)
{
this.points = points;
}
public PointF[] Points
{
get { return points; }
set { points = value; }
}
public int Length
{
get { return points.Length; }
}
public PointF this[int index]
{
get { return points[index]; }
set { points[index] = value; }
}
public static implicit operator PointF[](Polygon polygon)
{
return polygon.points;
}
public static implicit operator Polygon(PointF[] points)
{
return new Polygon(points);
}
IEnumerator<PointF> IEnumerable<PointF>.GetEnumerator()
{
return (IEnumerator<PointF>)points.GetEnumerator();
}
public IEnumerator GetEnumerator()
{
return points.GetEnumerator();
}
}
public enum Intersection
{
None,
Tangent,
Intersection,
Containment
}
public static class Geometry
{
public static Intersection IntersectionOf(Line line, Polygon polygon)
{
if (polygon.Length == 0)
{
return Intersection.None;
}
if (polygon.Length == 1)
{
return IntersectionOf(polygon[0], line);
}
bool tangent = false;
for (int index = 0; index < polygon.Length; index++)
{
int index2 = (index + 1)%polygon.Length;
Intersection intersection = IntersectionOf(line, new Line(polygon[index], polygon[index2]));
if (intersection == Intersection.Intersection)
{
return intersection;
}
if (intersection == Intersection.Tangent)
{
tangent = true;
}
}
return tangent ? Intersection.Tangent : IntersectionOf(line.P1, polygon);
}
public static Intersection IntersectionOf(PointF point, Polygon polygon)
{
switch (polygon.Length)
{
case 0:
return Intersection.None;
case 1:
if (polygon[0].X == point.X && polygon[0].Y == point.Y)
{
return Intersection.Tangent;
}
else
{
return Intersection.None;
}
case 2:
return IntersectionOf(point, new Line(polygon[0], polygon[1]));
}
int counter = 0;
int i;
PointF p1;
int n = polygon.Length;
p1 = polygon[0];
if (point == p1)
{
return Intersection.Tangent;
}
for (i = 1; i <= n; i++)
{
PointF p2 = polygon[i % n];
if (point == p2)
{
return Intersection.Tangent;
}
if (point.Y > Math.Min(p1.Y, p2.Y))
{
if (point.Y <= Math.Max(p1.Y, p2.Y))
{
if (point.X <= Math.Max(p1.X, p2.X))
{
if (p1.Y != p2.Y)
{
double xinters = (point.Y - p1.Y) * (p2.X - p1.X) / (p2.Y - p1.Y) + p1.X;
if (p1.X == p2.X || point.X <= xinters)
counter++;
}
}
}
}
p1 = p2;
}
return (counter % 2 == 1) ? Intersection.Containment : Intersection.None;
}
public static Intersection IntersectionOf(PointF point, Line line)
{
float bottomY = Math.Min(line.Y1, line.Y2);
float topY = Math.Max(line.Y1, line.Y2);
bool heightIsRight = point.Y >= bottomY &&
point.Y <= topY;
//Vertical line, slope is divideByZero error!
if (line.X1 == line.X2)
{
if (point.X == line.X1 && heightIsRight)
{
return Intersection.Tangent;
}
else
{
return Intersection.None;
}
}
float slope = (line.Y2 - line.Y1)/(line.X2 - line.X1); // rise over run; the vertical case was handled above
bool onLine = (line.Y1 - point.Y) == (slope*(line.X1 - point.X));
if (onLine && heightIsRight)
{
return Intersection.Tangent;
}
else
{
return Intersection.None;
}
}
}
A: Since it is missing, I'll just add it for completeness:
public static Intersection IntersectionOf(Line line1, Line line2)
{
// Fail if either line segment is zero-length.
if (line1.X1 == line1.X2 && line1.Y1 == line1.Y2 || line2.X1 == line2.X2 && line2.Y1 == line2.Y2)
return Intersection.None;
if (line1.X1 == line2.X1 && line1.Y1 == line2.Y1 || line1.X2 == line2.X1 && line1.Y2 == line2.Y1)
return Intersection.Intersection;
if (line1.X1 == line2.X2 && line1.Y1 == line2.Y2 || line1.X2 == line2.X2 && line1.Y2 == line2.Y2)
return Intersection.Intersection;
// (1) Translate the system so that point A is on the origin.
line1.X2 -= line1.X1; line1.Y2 -= line1.Y1;
line2.X1 -= line1.X1; line2.Y1 -= line1.Y1;
line2.X2 -= line1.X1; line2.Y2 -= line1.Y1;
// Discover the length of segment A-B.
double distAB = Math.Sqrt(line1.X2 * line1.X2 + line1.Y2 * line1.Y2);
// (2) Rotate the system so that point B is on the positive X axis.
double theCos = line1.X2 / distAB;
double theSin = line1.Y2 / distAB;
double newX = line2.X1 * theCos + line2.Y1 * theSin;
line2.Y1 = line2.Y1 * theCos - line2.X1 * theSin; line2.X1 = newX;
newX = line2.X2 * theCos + line2.Y2 * theSin;
line2.Y2 = line2.Y2 * theCos - line2.X2 * theSin; line2.X2 = newX;
// Fail if segment C-D doesn't cross line A-B.
if (line2.Y1 < 0 && line2.Y2 < 0 || line2.Y1 >= 0 && line2.Y2 >= 0)
return Intersection.None;
// (3) Discover the position of the intersection point along line A-B.
double posAB = line2.X2 + (line2.X1 - line2.X2) * line2.Y2 / (line2.Y2 - line2.Y1);
// Fail if segment C-D crosses line A-B outside of segment A-B.
if (posAB < 0 || posAB > distAB)
return Intersection.None;
// (4) Apply the discovered position to line A-B in the original coordinate system.
return Intersection.Intersection;
}
note that the method rotates the line segments so as to avoid direction-related problems
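Putting the two answers together, testing a segment against a rectangle is then just a polygon test. A small usage sketch with made-up coordinates:
using System.Drawing;

// Treat the rectangle as a four-point polygon and test the segment.
Polygon rect = new PointF[]
{
    new PointF(0, 0), new PointF(10, 0),
    new PointF(10, 5), new PointF(0, 5)
};
Line segment = new Line(new PointF(-2, 2), new PointF(4, 2));

Intersection result = Geometry.IntersectionOf(segment, rect);
// result is Intersection.Intersection: the segment crosses the left edge.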
A: If it is 2D, then all lines are in the same plane.
So, this is basic 3-D geometry. You should be able to do this with a straightforward equation.
Check out this page:
http://local.wasp.uwa.edu.au/~pbourke/geometry/planeline/.
The second solution should be easy to implement, as long as you translate the coordinates of your rectangle into the equation of a plane.
Furthermore, check that your denominator isn't zero (line doesn't intersect or is contained in the plane).
A: Use class:
System.Drawing.Rectangle
Method:
IntersectsWith();
A: I hate browsing the MSDN docs (they're awfully slow and weird :-s) but I think they should have something similar to this Java method... and if they haven't, bad for them! XD (btw, it works for segments, not lines).
In any case, you can peek at the open-source Java SDK to see how it is implemented; maybe you'll learn some new trick (I'm always surprised when I look at other people's code)
A: Isn't it possible to check the line against each side of the rectangle using the simple line segment intersection formula?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: Table Scan vs. Add Index - which is quicker? I have a table with many millions of rows. I need to find all the rows with a specific column value. That column is not in an index, so a table scan results.
But would it be quicker to add an index with the column at the head (primary key following), do the query, then drop the index?
I can't add an index permanently as the user is nominating what column they're looking for.
A: Two questions to think about:
*
*How many columns could be nominated for the query?
*Does the data change frequently? A lot of it?
If you have a small number of candidate columns, and the data doesn't change a lot, then you might want to consider adding a permanent index on any or even all candidate column.
"Blasphemy!", I hear. Most sources tell you to "never" index every column of a table, but that advised is rooted on the generic assumption that tables are modified frequently.
You will pay a price in additional storage, as well as a performance hit when the data changes.
How small is small and how much is a lot, and is the tradeoff worth it?
There is no way to tell a priori because "too slow" is usually a subjective measurement.
You will have to try it, measure the size of your indexes and then the effect they have in the searches. You will have to balance the costs against the increase in satisfaction of your customers.
[Added] Oh, one more thing: creating a temporary index is not only slower than a single table scan, but it would also destroy your concurrency. Re-indexing a table usually (always?) requires a full table lock, so in effect only one user search could be done at a time.
Good luck.
A: I'm no DBA, but I would guess that building the index would require scanning the table anyway.
Unless there are going to be multiple queries on that column, I would recommend not creating the index.
Best to check the explain plans/execution times for both ways, though!
A: As everyone else has said, it most certainly would not be faster to add an index than it would be to do a full scan of that column.
However, I would suggest tracking the query pattern, finding out which column(s) are searched for the most, and adding indexes at least for them. You may find that 3-4 indexes speed up 90% of your queries.
A: Adding an index requires a table scan, so if you can't add a permanent index it sounds like a single scan will be (slightly) faster.
A: No, that would not be quicker. What would be quicker is to just add the index and leave it there!
Of course, it may not be practical to index every column, but then again it may. How is data added to the table?
A: It wouldn't be. Creating an index is more complex than simply scanning the column, even if the computational complexity is the same.
That said - how many columns do you have? Are you sure you can't just create an index for each of them if the query time for a single find is too long?
A: It depends on the complexity of your query. If you're retrieving the data once, then doing a table scan is faster. However, if you're going back to the table more than once for related information in the same query, then the index is faster.
Another related strategy is to do the table scan, and put all the data in a temporary table. Then index THAT and then you can do all your subsequent selects, groupings, and as many other queries on the subset of indexed data. The benefit being that looking up related information in related tables using the temp table is MUCH faster.
However, space is cheap these days, so you'd probably best be served by examining how your users actually USE your system and adding indexes on those frequent columns. I have yet to see users use ALL the search parameters ALL the time.
A: Your solution will not scale unless you add a permanent index to each column, with all of the columns that are returned in the query in the list of included columns (a covering index). These indexes will be very large, and inserts and updates to that table will be a bit slower, but you don't have much of a choice if you are allowing a user to arbitrarily select a search column.
How many columns are there? How often does the data get updated? How fast do inserts and updates need to run? There are trade-offs involved, depending on the answers to those questions. Do plenty of experimentation and testing so you know for sure how things will perform.
But to your original question, adding and dropping an index for the purpose of a single query is only beneficial if you do more than one select during the query (for example, the select is in a sub-query that gets run for each row returned).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: C++ - What does "Stack automatic" mean? In my browsings amongst the Internet, I came across this post, which includes this
"(Well written) C++ goes to great
lengths to make stack automatic
objects work "just like" primitives,
as reflected in Stroustrup's advice to
"do as the ints do". This requires a
much greater adherence to the
principles of Object Oriented
development: your class isn't right
until it "works like" an int,
following the "Rule of Three" that
guarantees it can (just like an int)
be created, copied, and correctly
destroyed as a stack automatic."
I've done a little C and C++ code, but only in passing, never anything serious. I'm just curious: what does it mean exactly?
Can someone give an example?
A: In addition to the other answers:
The C++ language actually has the auto keyword to explicitly declare the storage class of an object. Of course, it's completely needless because this is the implied storage class for local variables and cannot be used anywhere. The opposite of auto is static (both locally and globall).
The following two declarations are equivalent:
int main() {
int a;
auto int b;
}
Because the keyword is utterly useless, it will actually be recycled in the next C++ standard (“C++0x”) and gets a new meaning, namely, it lets the compiler infer the variable type from its initialization (like var in C#):
auto a = std::max(1.0, 4.0); // `a` now has type double.
A: Stack objects are handled automatically by the compiler.
When the scope is left, it is deleted.
{
obj a;
} // a is destroyed here
When you do the same with a 'newed' object you get a memory leak :
{
obj* b = new obj;
}
The object b points to is not destroyed, so we lose the ability to reclaim the memory it owns. And maybe worse, the object cannot clean itself up.
In C the following is common :
{
FILE* pF = fopen( ... );
// ... do sth with pF
fclose( pF );
}
In C++ we write this :
{
std::fstream f( ... );
// do sth with f
} // here f gets auto magically destroyed and the destructor frees the file
When we forget to call fclose in the C sample the file is not closed and may not be used by other programs. (e.g. it cannot be deleted).
Another example, demonstrating the string object, which can be constructed, assigned to, and which is destroyed on exiting the scope.
{
string v( "bob" );
string k;
k = v;
// k now contains "bob"
} // v + k are destroyed here, and any memory used by v + k is freed
A: Variables in C++ can either be declared on the stack or the heap. When you declare a variable in C++, it automatically goes onto the stack, unless you explicitly use the new operator (it goes onto the heap).
MyObject x = MyObject(params); // onto the stack
MyObject * y = new MyObject(params); // onto the heap
This makes a big difference in the way the memory is managed. When a variable is declared on the stack, it will be deallocated when it goes out of scope. A variable on the heap will not be destroyed until delete is explicitly called on the object.
A: Stack automatics are variables which are allocated on the stack of the current method. The idea behind designing a class which can act as a stack automatic is that it should be possible to fully initialize it with one call and destroy it with another. It is essential that the destructor frees all resources allocated by the object and that its constructor returns an object which has been fully initialized and is ready for use. Similarly for the copy operation: the class should be easy to copy, with the copies being fully functional and independent.
The usage of such a class should be similar to how the primitives int, float, etc. are used. You define them (eventually giving them some initial value), then pass them around, and in the end leave the cleanup to the compiler.
A: Correct me if I'm wrong, but I think the copy operation is not mandatory to take full advantage of automatic stack cleanup.
For example, consider a classic MutexGuard object: it doesn't need a copy operation to be useful as a stack automatic, or does it?
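A minimal sketch of what I mean, assuming some Mutex type with lock()/unlock(); pre-C++11 style, with copying simply disabled by declaring the copy operations private:
// Hypothetical RAII guard: a useful stack automatic that is not copyable.
class MutexGuard {
    Mutex& m;
    MutexGuard(const MutexGuard&);            // not copyable...
    MutexGuard& operator=(const MutexGuard&); // ...declared private, never defined
public:
    explicit MutexGuard(Mutex& mutex) : m(mutex) { m.lock(); }
    ~MutexGuard() { m.unlock(); } // released automatically on scope exit
};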
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there an automatic code formatter for C#? In my work I deal mostly with C# code nowadays, with a sprinkle of Java from time to time. What I absolutely love about Eclipse (and I know people using it daily love it even more) is a sophisticated code formatter, able to mould code into any coding standard one might imagine. Is there such a tool for C#? Visual Studio code formatting (Ctrl+K, Ctrl+D) is subpar and StyleCop only checks the source without fixing it.
My dream tool would run from console (for easy inclusion in automated builds or pre-commit hooks and for execution on Linux + Mono), have text-file based configuration easy to store in a project repository and a graphical rule editor with preview - just like the Eclipse Code Formatter does.
A: Another option: NArrange;
*
*free
*console based (so good for commit hooks etc, but can still be used as an "External Tool" in VS)
*flexible config file
A: For me, Ctrl + Shift + F maps to Find in Files. When I need to format code, I highlight it and hit Ctrl + K, Ctrl + F.
I understand this doesn't really address automated formatting. I just wanted to clarify for those who may not know this feature even exists in Visual Studio.
A: For Visual Studio, take a look at ReSharper. It's an awesome tool and a definite must-have. Versions after 4.0 have the code formatting and clean-up feature that you are looking for. There's also plugin integration with StyleCop, including a formatting settings file.
You'll probably want Agent Smith plugin as well, for spell-checking the identifiers and comments. ReSharper supports per-solution formatting setting files, which can be checked into version control system and shared by the whole team. The keyboard shortcut for code cleanup is Ctrl + E, C.
In 'vanilla' Visual Studio, the current file can be automatically formatted with Ctrl + K, Ctrl + D, and Ctrl + K, Ctrl + F formats the selected text.
As for a runs-everywhere command line tool to be used with commit hooks, try NArrange. It's free, can process whole directories at once and runs on Mono as well as on Microsoft .NET.
Some people also use the Artistic Style command line tool, although it requires Perl and works better with C/C++ code than with C#.
A: I've heard only good things about ReSharper. It's on my to-learn list.
A: http://www.sourceformat.com/
This tool is around (~$30). I tried it and it works nicely (with multiple languages).
I like this tool the best because it doesn't check the code file for correctness. I can paste code snippets from the Internet and it will format them correctly even if parts of the code are missing. Other tools I tried complain in those cases. The tool can also be integrated easily into editors, as it allows command-line driving.
Other tools:
http://www.polystyle.com/index.jsp
http://astyle.sourceforge.net/ (open source)
A: The .NET Foundation just released their code formatting tool on GitHub
https://github.com/dotnet/codeformatter
It uses the Roslyn compiler services to parse project documents and convert them to their expected formatting conventions. They're applying this to some of the very old CLR code in order to make all the projects consistent.
A: Not directly, but I use the Agent Smith plugin for R# to do this. Unfortunately, R# isn't free.
A: Also take a look at Microsoft StyleCop
A: Further to @Chris Karcher's answer - you can also automatically format the whole document by hitting Ctrl+K, Ctrl+D.
These formatting features work on a variety of file formats - it works wonders on ugly HTML.
A: I haven't tried this (found it through Google). It might work. http://www.semdesigns.com/Products/Formatters/CSharpFormatter.html. It's fairly cheap at USD50, but a trial is not available.
A: See this previous question:
Is there any tool for reformatting C# code?
Searching for [c#] astyle shows up some more previous questions too.
A: Maybe you could be interested in this free Addin for Visual Studio 2010/2012.
A: Here is an open source code formatting tool which has amazing features
CodeMaid
A: If you want to do it online, there is FreeCodeFormat.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
} |
Q: Making a JavaScript regex equivalent to a PHP regex After my web form is submitted, a regex will be applied to user input on the server side (via PHP). I'd like to have the identical regex running in real-time on the client side to show the user what the real input will be. This will be pretty much the same as the Preview section on the Ask Question pages on Stack Overflow except with PHP on the back-end instead of .NET.
What do I need to keep in mind in order to have my PHP and JavaScript regular expressions act exactly the same as each other?
A: If the regular expressions are simple then there should be no issue, as the basics of regular expressions are common across most implementations.
For particulars then it would be best to study both implementations:
http://www.regular-expressions.info/php.html
http://www.regular-expressions.info/javascript.html
JavaScript's implementation is probably the more basic, so if you are going for a lowest-common-denominator approach then aim for that one.
A: Hehe this was sort of asked moments ago and Jeff pointed out:
http://www.regular-expressions.info/refflavors.html.
There is a comparison of regular expression capabilities across tools and languages.
A: I've found that different implementations of regular expressions often have subtle differences in what exactly they support. If you want to be entirely sure that the result will be the same in both frontend and backend, the safest choice would be to make an Ajax call to your PHP backend and use the same piece of PHP code for both regex evaluations.
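A minimal sketch of that approach with jQuery - the /validate.php endpoint and the JSON shape it returns are assumptions for illustration, not a real API:
// POST the raw input and let the PHP side run the one authoritative regex.
$.post('/validate.php', { input: $('#myField').val() }, function (response) {
    // Assumes the PHP script echoes JSON like {"ok": true, "preview": "..."}
    $('#preview').text(response.preview);
}, 'json');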
A: @LKM AJAX is the clear winner here. This will also allow you to follow the DRY principle. Why would you want to write your parsing code in both JavaScript and PHP?
A: Both JavaScript's regex and PHP's preg_match are based on Perl, so there shouldn't be any porting problems. Do note, however, that JavaScript only supports a subset of the modifiers that Perl supports.
For more info for comparing the two:
*
*Javascript Regular Expressions
*PHP Regular Expressions
As for the delivery method, I'd suggest you use JSON, the slimmest data interchange format to date (AFAIK) and directly translatable to a JavaScript object through eval(). Just put that bad boy through an AJAX session and you should be set to go.
I hope this helps :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Ethernet MAC address as activation code for an appliance? Let's suppose you deploy network-attached appliances (small form factor PCs) in the field. You want to allow these to call home after being powered on, then be identified and activated by end users.
Our current plan involves the user entering the MAC address into an activation page on our web site. Later our software (running on the box) will read the address from the interface and transmit this in a "call home" packet. If it matches, the server responds with customer information and the box is activated.
We like this approach because the address is easy to access, and is usually printed on external labels (an FCC requirement?).
Any problems to watch out for? (The hardware in use is small form factor, so all NICs, etc. are embedded and would be very hard to change. Customers don't normally have direct access to the OS in any way.)
I know Microsoft does some crazy fuzzy-hashing function for Windows activation using PCI device IDs, memory size, etc. But that seems overkill for our needs.
--
@Neall Basically, they call into our server; for purposes of this discussion you could call us the manufacturer.
Neall is correct, we're just using the address as a constant. We will read it and transmit it within another packet (let's say HTTP POST), not depending on getting it somehow from Ethernet frames.
A: I don't think that the well-known spoofability of MAC addresses is an issue in this case. I think tweakt is just wanting to use them for initial identification. The device can read its own MAC address, and the installer can (as long as it's printed on a label) read the same number and know, "OK - this is the box that I put at location A."
tweakt - would these boxes be calling into the manufacturer's server, or the server of the company/person using them (or are those the same thing in this case)?
A: I don't think there's anything magic about what you're doing here - couldn't what you're doing be described as:
"At production we burn a unique number into each of our devices which is both readable by the end user (it's on the label) and accessible to the internal processor. Our users have to enter this number into our website along with their credit-card details, and the box subsequently contacts the website for permission to operate"
"Coincidentally we also use this number as the MAC address for network packets as we have to uniquely assign that during production anyway, so it saved us duplicating this bit of work"
I would say the two obvious hazards are:
*
*People hack around with your device and change this address to one which someone else has already activated. Whether this is likely to happen depends on some relationship between how hard it is and how expensive whatever they get to steal is. You might want to think about how easily they can take a firmware upgrade file and get the code out of it.
*Someone uses a combination of firewall/router rules and a bit of custom software to generate a server which replicates the operation of your 'auth server' and grants permission to the device to proceed. You could make this harder with some combination of hashing/PKE as part of the protocol (a rough sketch follows below).
As ever, some tedious, expensive one-off hack is largely irrelevant; what you don't want is a class break which can be distributed over the internet to every thieving dweep.
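To sketch that hashing/PKE idea: rather than trusting the bare MAC, the server could issue a random challenge and the device could answer with an HMAC keyed on a per-device secret burned in at production. A rough C# illustration - the secret handling and message framing are assumptions, not a prescribed protocol:
using System;
using System.Security.Cryptography;
using System.Text;

static string AnswerChallenge(byte[] deviceSecret, string mac, string challenge)
{
    // HMAC over (MAC + challenge): a replayed answer is useless, and the
    // secret itself never crosses the wire.
    using (HMACSHA256 hmac = new HMACSHA256(deviceSecret))
    {
        byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(mac + challenge));
        return Convert.ToBase64String(digest);
    }
}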
A: The MAC address is as unique as a serial number printed on a manual/sticker.
Microsoft does hashing to prevent MAC address spoofing, and to allow a bit more privacy.
With the MAC-only approach, anyone on the same subnet can easily match a device to a customer. The hash prevents that: it is opaque about which criteria are used, and there is no way to reverse-engineer the individual parts.
(see password hashing)
A: From a security perspective, I know that it is possible to spoof a MAC, though I am not entirely sure how difficult it is or what it entails.
Otherwise, if the customers don't have easy access to the hardware or the OS, you should be fairly safe doing this... probably best to put a warning sticker on saying that messing with anything will disrupt communication to the server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Manual steps to upgrade VS.NET solution and target .NET framework? After you've let the VS.NET (2008 in this case) wizard upgrade your solution, do you perform any manual steps to upgrade specific properties of your solution and projects? For instance, you have to go to each project and target a new version of the framework (from 2.0 to 3.5 in this case). Even after targeting a new version of the framework, I find assembly references still point to the assemblies from the old version (2.0 rather than 3.5 even after changing the target). Does this mean I lose out on the performance benefits of assemblies in the new version of the framework?
A: All versions of the .Net Framework from 2.0 onwards (i.e. 3.0 and 3.5) use exactly the same core framework files (i.e. the CLR, you'll notice there are no directories relating to 3.0 or 3.5 in the C:\Windows\Microsoft.Net\Framework directory) therefore you shouldn't worry too much about any performance issues.
The Core parts are referred to in Microsoft Speak as the 'Red Bits' and the rest as the 'Green Bits'.
A: All the VS 2008 wizard does is upgrade the project & solution files to be used with VS 2008 - it still targets the framework you started with. If you want to move your projects to a newer version of the framework, you'll have to edit the project settings on each. Too much of a chance of breaking changes for MSFT to do this automatically.
A: Apart from the target framework feature, you'll need to manually add a reference to System.Core.dll to utilise some of the latest features like Linq.
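If you edit the project file by hand, the reference is a one-line addition inside an ItemGroup - a sketch of the relevant .csproj fragment:
<ItemGroup>
  <Reference Include="System.Core" />
</ItemGroup>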
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How does WinXP's "Send to Compressed (zipped) Folder" decide what to include in zip file? I'm not going to be too surprised if I get shot-down for asking a "non programming" question, but maybe somebody knows ...
I was zipping the contents of my subversion sandbox using WinXP's inbuilt "Send to Compressed (zipped) Folder" capability and was surprised to find that the .zip file created did not contain the .svn directories and their contents.
I had always assumed that all files were included and I can't locate which property/option/attribute controls inclusion or otherwise. Can anybody help?
Thanks, Tom
EDIT:
So, isn't there a smart way to handle the problem? The real problem ("show hidden files" is set to true, yet the .svn folders are still not compressed because Windows does not consider them valid folders) is still unanswered.
Thanks...
A: Send to Compressed (zipped) Folder does not traverse into folders with no name before the dot (like ".svn"). If you had other folders that begin with dots, those would not be included either. Files with such names are not excluded, only folders. The hidden attribute does not come into play.
Might be a bug, might be by design. Remember that Windows Explorer does not allow creating folders beginning with a dot, even though the underlying system can handle them.
A: It may not include files that you normally wouldn't see. Or, the files may be there, but you may be unable to see them when reopening the .zip file in explorer, because they are hidden. You may go into Tools->Folder Options, go to the View tab, and select the radio button to view hidden files and folders.
A: "Send to --> Compressed (zipped) Folder" creates a zip file. What it puts in there is based on your settings. It does not include hidden files with the default settings. If you have your explorer view settings set as Kibbee mentioned to "Show hidden files and folders", then "Send to --> Compressed (zipped) Folder" will put the hidden files into the zip file.
There is what I would call a bug in XP where hidden folders aren't included when recursing a folder tree. You can get them if they are in the folder that you are in. Recursing works in Vista.
Files starting with "." have no special meaning to Windows, except that Windows Explorer won't let you create one. It is a valid file name though.
I would recommend using something like 7-Zip if your folders contain hidden/system files/folders.
A: The Windows 7 implementation of Send to Compressed Folder behaves differently - it does include files / folders beginning with a dot (e.g. ".SVN") in the zip file.
A: It looks like the Compressed Folder shell extension ignores directories (but not files) whose names begin with a dot, unless explicitly given as a parameter (i.e. selected for the Send To command).
It's hard to find out what else it excludes, as I can't even find out what the "compressed folder" sendto item is doing in the first place, without referring to 3rd party documentation.
Edit:
OK, the "Send to compressed folder" sendto shortcut has an extension of .ZFSendToTarget, which is handled by zipfldr.dll, which is doing all the work.
@Kibbee:
Mine does include hidden folders while zipping, though I do have "show hidden files" enabled.
A: Finally, I found that there is no straightforward way to ZIP the .svn folders, and hence I moved to WinRAR instead. Alternatively you can also use WinZip.
A: A compressed folder doesn't necessarily mean a .ZIP file; the contents of the folder are merely compressed, and to you it will look like a normal folder
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is there a Java Console/Editor similar to the GroovyConsole? I'm giving a presentation to a Java User's Group on Groovy and I'm going to be doing some coding during the presentation to show some side-by-side Java/Groovy. I really like the GroovyConsole as it's simple and I can resize the text easily.
I'm wondering if there is anything similar for Java? I know I could just use Eclipse but I'd rather have a smaller app to use without having to customize a view. What's the community got?
Screen shot of GroovyConsole:
A: DrJava is your best bet. It also has an Eclipse plugin to use the interactions pane like GroovyConsole.
A: Try BeanShell. It's a scripting wrapper over Java. http://www.beanshell.org/
A: Why not use the GroovyConsole? Groovy accepts the vast majority of Java syntax.
A: One good reason for not using something like the Groovy Console (wonderful though it is) is when you want to test what something would be like in Java, without actually going to the trouble of a boilerplate class and main method to run snippets of code. There are some differences between what Groovy does and what Java does. It'd be nice to have a simple way to test something and know for sure it will work when you put it in Java.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Avoiding repeated constants in CSS Are there any useful techniques for reducing the repetition of constants in a CSS file?
(For example, a bunch of different selectors which should all apply the same colour, or the same font size)?
A: You should comma seperate each id or class for example:
h1,h2 {
color: #fff;
}
A: Elements can belong to more than one class, so you can do something like this:
.DefaultBackColor
{
background-color: #123456;
}
.SomeOtherStyle
{
//other stuff here
}
.DefaultForeColor
{
color:#654321;
}
And then in the content portion somewhere:
<div class="DefaultBackColor SomeOtherStyle DefaultForeColor">Your content</div>
The weaknesses here are that it gets pretty wordy in the body and you're unlikely to be able to get it down to listing a color only once. But you might be able to do it only two or three times and you can group those colors together, perhaps in their own sheet. Now when you want to change the color scheme they're all together and the change is pretty simple.
But, yeah, my biggest complaint with CSS is the inability to define your own constants.
A: You can use global variables (custom properties) to avoid duplication. Without them, the value has to be repeated:
p{
background-color: #ccc;
}
h1{
background-color: #ccc;
}
Here, you initialize a global variable in the :root pseudo-class selector; :root is the top level of the DOM.
:root{
    --main-color: #ccc;
}
p{
background-color: var(--main-color);
}
h1{
background-color: var(--main-color);
}
NOTE: This is an experimental technology
Because this technology's specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future versions of browsers as the spec changes. More Info here
However, you can always use Syntactically Awesome Style Sheets (Sass).
In Sass, you use $variable_name at the top to initialize the global variable:
$base : #ccc;
p{
background-color: $base;
}
h1{
background-color: $base;
}
A: You can use dynamic CSS frameworks like Less.
A: Recently, variables have been added to the official CSS specs.
Variables allow you to do something like this:
body, html {
margin: 0;
height: 100%;
}
.theme-default {
--page-background-color: #cec;
--page-color: #333;
--button-border-width: 1px;
--button-border-color: #333;
--button-background-color: #f55;
--button-color: #fff;
--gutter-width: 1em;
float: left;
height: 100%;
width: 100%;
background-color: var(--page-background-color);
color: var(--page-color);
}
button {
background-color: var(--button-background-color);
color: var(--button-color);
border-color: var(--button-border-color);
border-width: var(--button-border-width);
}
.pad-box {
padding: var(--gutter-width);
}
<div class="theme-default">
<div class="pad-box">
<p>
This is a test
</p>
<button>
Themed button
</button>
</div>
</div>
Unfortunately, browser support is still very poor. According to CanIUse, the only browsers that support this feature today (March 9th, 2016) are Firefox 43+, Chrome 49+, Safari 9.1+ and iOS Safari 9.3+:
Alternatives:
Until CSS variables are widely supported, you could consider using a CSS pre-processor language like Less or Sass.
CSS pre-processors wouldn't just allow you to use variables, but pretty much allow you to do anything you can do with a programming language.
For example, in Sass, you could create a function like this :
@function exponent($base, $exponent) {
$value: $base;
@if $exponent > 1 {
@for $i from 2 through $exponent {
$value: $value * $base;
}
}
@if $exponent < 1 {
@for $i from 0 through -$exponent {
$value: $value / $base;
}
}
@return $value;
}
A: As far as I know, without programmatically generating the CSS file, there's no way to, say, define your favorite shade of blue (#E0EAF1) in one and only one spot.
You could pretty easily write a computer program to generate the file. Execute a simple find-and-replace operation and then save as a .css file.
Go from this source.css…
h1,h2 {
color: %%YOURFAVORITECOLOR%%;
}
div.something {
border-color: %%YOURFAVORITECOLOR%%;
}
to this target.css…
h1,h2 {
color: #E0EAF1;
}
div.something {
border-color: #E0EAF1;
}
with code like this… (VB.NET)
Dim CssText As String = System.IO.File.ReadAllText("C:\source.css")
CssText = CssText.Replace("%%YOURFAVORITECOLOR%%", "#E0EAF1")
System.IO.File.WriteAllText("C:\target.css", CssText)
A: Personally, I just use comma-separated selectors, but there are some solutions for writing CSS programmatically. Maybe this is a little overkill for your simpler needs, but take a look at CleverCSS (Python).
A: Try global defaults to avoid duplicate coding
h1 {
color: red;
}
p {
font-weight: bold;
}
Or you can create different classes
.deflt-color {
color: green;
}
.dflt-nrml-font {
font-size: 12px;
}
.dflt-header-font {
font-size: 18px;
}
A: You can apply multiple classes to your HTML elements (e.g. <div class="one two">), but I'm not aware of a way of having constants in the CSS files themselves.
This link (the first found when googling your question) seems to have a fairly in-depth look at the issue:
http://icant.co.uk/articles/cssconstants/
A: CSS Variables, if they ever become implemented in all major browsers, may one day resolve this issue.
Until then, you'll either have to copy and paste, or use a preprocessor of whatever sort, like others have suggested (typically using server-side scripting).
A: :root {
--primary-color: red;
}
p {
color: var(--primary-color);
}
<p> some red text </p>
You can change the color via JavaScript:
var styles = getComputedStyle(document.documentElement);
var value = String(styles.getPropertyValue('--primary-color')).trim();
document.documentElement.style.setProperty('--primary-color', 'blue');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Why won't .NET deserialize my primitive array from a web service? Help! I have an Axis web service that is being consumed by a C# application. Everything works great, except that arrays of long values always come across as [0,0,0,0] - the right length, but the values aren't deserialized. I have tried with other primitives (ints, doubles) and the same thing happens. What do I do? I don't want to change the semantics of my service.
A: Here's what I ended up with. I have never found another solution out there for this, so if you have something better, by all means, contribute.
First, the long array definition in the wsdl:types area:
<xsd:complexType name="ArrayOf_xsd_long">
<xsd:complexContent mixed="false">
<xsd:restriction base="soapenc:Array">
<xsd:attribute wsdl:arrayType="soapenc:long[]" ref="soapenc:arrayType" />
</xsd:restriction>
</xsd:complexContent>
</xsd:complexType>
Next, we create a SoapExtensionAttribute that will perform the fix. It seems that the problem was that .NET wasn't following the multiRef id to the element containing the value. So, we process the array item, go find the value, and then insert the value into the element:
[AttributeUsage(AttributeTargets.Method)]
public class LongArrayHelperAttribute : SoapExtensionAttribute
{
private int priority = 0;
public override Type ExtensionType
{
get { return typeof (LongArrayHelper); }
}
public override int Priority
{
get { return priority; }
set { priority = value; }
}
}
public class LongArrayHelper : SoapExtension
{
private static ILog log = LogManager.GetLogger(typeof (LongArrayHelper));
public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
{
return null;
}
public override object GetInitializer(Type serviceType)
{
return null;
}
public override void Initialize(object initializer)
{
}
private Stream originalStream;
private Stream newStream;
public override void ProcessMessage(SoapMessage m)
{
switch (m.Stage)
{
case SoapMessageStage.AfterSerialize:
newStream.Position = 0; //need to reset stream
CopyStream(newStream, originalStream);
break;
case SoapMessageStage.BeforeDeserialize:
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = false;
settings.NewLineOnAttributes = false;
settings.NewLineHandling = NewLineHandling.None;
settings.NewLineChars = "";
XmlWriter writer = XmlWriter.Create(newStream, settings);
XmlDocument xmlDocument = new XmlDocument();
xmlDocument.Load(originalStream);
List<XmlElement> longArrayItems = new List<XmlElement>();
Dictionary<string, XmlElement> multiRefs = new Dictionary<string, XmlElement>();
FindImportantNodes(xmlDocument.DocumentElement, longArrayItems, multiRefs);
FixLongArrays(longArrayItems, multiRefs);
xmlDocument.Save(writer);
newStream.Position = 0;
break;
}
}
private static void FindImportantNodes(XmlElement element, List<XmlElement> longArrayItems,
Dictionary<string, XmlElement> multiRefs)
{
string val = element.GetAttribute("soapenc:arrayType");
if (val != null && val.Contains(":long["))
{
longArrayItems.Add(element);
}
if (element.Name == "multiRef")
{
multiRefs[element.GetAttribute("id")] = element;
}
foreach (XmlNode node in element.ChildNodes)
{
XmlElement child = node as XmlElement;
if (child != null)
{
FindImportantNodes(child, longArrayItems, multiRefs);
}
}
}
private static void FixLongArrays(List<XmlElement> longArrayItems, Dictionary<string, XmlElement> multiRefs)
{
foreach (XmlElement element in longArrayItems)
{
foreach (XmlNode node in element.ChildNodes)
{
XmlElement child = node as XmlElement;
if (child != null)
{
string href = child.GetAttribute("href");
if (href == null || href.Length == 0)
{
continue;
}
if (href.StartsWith("#"))
{
href = href.Remove(0, 1);
}
XmlElement multiRef = multiRefs[href];
if (multiRef == null)
{
continue;
}
child.RemoveAttribute("href");
child.InnerXml = multiRef.InnerXml;
if (log.IsDebugEnabled)
{
log.Debug("Replaced multiRef id '" + href + "' with value: " + multiRef.InnerXml);
}
}
}
}
}
public override Stream ChainStream(Stream s)
{
originalStream = s;
newStream = new MemoryStream();
return newStream;
}
private static void CopyStream(Stream from, Stream to)
{
TextReader reader = new StreamReader(from);
TextWriter writer = new StreamWriter(to);
writer.WriteLine(reader.ReadToEnd());
writer.Flush();
}
}
Finally, we tag all methods in the Reference.cs file that will be deserializing a long array with our attribute:
[SoapRpcMethod("", RequestNamespace="http://some.service.provider",
ResponseNamespace="http://some.service.provider")]
[return : SoapElement("getFooReturn")]
[LongArrayHelper]
public Foo getFoo()
{
object[] results = Invoke("getFoo", new object[0]);
return ((Foo) (results[0]));
}
This fix is long-specific, but it could probably be generalized to handle any primitive type having this problem.
A: Here's a more or less copy-pasted version of a blog post I wrote on the subject.
Executive summary: You can either change the way .NET deserializes the result set (see Chris's solution above), or you can reconfigure Axis to serialize its results in a way that's compatible with the .NET SOAP implementation.
If you go the latter route, here's how:
... the generated classes look and appear to function normally, but if you'll look at the deserialized array on the client (.NET/WCF) side you'll find that the array has been deserialized incorrectly, and all values in the array are 0. You'll have to manually look at the SOAP response returned by Axis to figure out what's wrong; here's a sample response (again, edited for clarity):
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<doSomethingResponse>
<doSomethingReturn>
<doSomethingReturn href="#id0"/>
<doSomethingReturn href="#id1"/>
<doSomethingReturn href="#id2"/>
<doSomethingReturn href="#id3"/>
<doSomethingReturn href="#id4"/>
</doSomethingReturn>
</doSomethingResponse>
<multiRef id="id4">5</multiRef>
<multiRef id="id3">4</multiRef>
<multiRef id="id2">3</multiRef>
<multiRef id="id1">2</multiRef>
<multiRef id="id0">1</multiRef>
</soapenv:Body>
</soapenv:Envelope>
You'll notice that Axis does not generate values directly in the returned element, but instead references external elements for values. This might make sense when there are many references to relatively few discrete values, but whatever the case, this is not properly handled by the WCF basicHttpBinding provider (and reportedly by gSOAP and classic .NET web references as well).
It took me a while to find a solution: edit your Axis deployment's server-config.wsdd file and find the following parameter:
<parameter name="sendMultiRefs" value="true"/>
Change it to false, then redeploy via the command line, which looks (under Windows) something like this:
java -cp %AXISCLASSPATH% org.apache.axis.client.AdminClient server-config.wsdd
The web service's response should now be deserializable by your .NET client.
A: Found this link that may offer a better alternative: http://www.tomergabel.com/GettingWCFAndApacheAxisToBeFriendly.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How do I enable Edit and Continue on a 64-bit application and VB2008 Express? When I try to do that I get the following error:
Changes to 64-bit applications are not allowed.
@Wilka: That option wasn't available until I selected Tools > Options > Projects and Solutions > General and check "Show advanced build configurations". Though I found this hint from your MSDN link. So if you edit your comment, I can make it the accepted answer...
Thanks everybody!
Please see my first comment on this question, it's not there... Somehow... I can select Target framework though (2.0, 3.0 and 3.5), not that I see any use of that for this particular problem...
It doesn't have to be a 64bit program, actually, I rather prefer it to be 32bit anyway since it is more like a utility and it should work on 32bit systems.
Also, I'm running Vista at 64bit. Maybe that has something to do with it?
@Rob Cooper: Now I think of it, I never had the chance of selecting either a 64bit or a 32bit application when creating the solution/project/application...
And according to your link "64-Bit Debugging (X64)" is possible with MS VB2008 express edition.
Oh btw, I found the following:
If you are debugging a 64-bit application and want to use Edit and Continue, you must change the target platform and compile the application as a 32-bit application. You can change this setting by opening the Project Properties and going to the Compile page. On that page, click Advanced Compile Options and change the Target CPU setting to x86 in the Advanced Compiler Settings dialog box. Link
But I dont see the Target CPU setting...
A: The dialog you're looking for is this one in the project properties:
by default, the target will be "Any CPU", which means it'll run as 64-bit on a 64-bit OS (like you're using), or 32-bit on a 32-bit OS - so this won't stop it from working on 32-bit systems. But like you said, to use Edit & Continue you will need to target x86 (so it runs as 32-bit).
Edit: fixed screenshot (I had the C# one, not the VB one)
A: The "Edit and Continue" feature for 64-bit code will be supported under Visual Studio 2013.
More information here.
A: You could try:
In Visual Basic 2008 Express Edition:
Build menu > Configuration Manager...
Change Active solution platform: to
"...", choose "x86", save the new
platform.
Now the "x86" option is available in
the Compile settings.
You may need to enable "Show advanced build configurations" first, in Tools > Options >
Projects and Solutions > General
(from this post on MSDN forums)
A: AFAIK Visual Studio Express does not come with 64bit support.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Winforms - Click/drag anywhere in the form to move it as if clicked in the form caption I am creating a small modal form that is used in Winforms application. It is basically a progress bar of sorts. But I would like the user to be able to click anywhere in the form and drag it to move it around on the desktop while it is still being displayed.
How can I implement this behavior?
A: The following code assumes that the ProgressBarForm form has a ProgressBar control with Dock property set to Fill
public partial class ProgressBarForm : Form
{
private bool mouseDown;
private Point lastPos;
public ProgressBarForm()
{
InitializeComponent();
}
private void progressBar1_MouseMove(object sender, MouseEventArgs e)
{
    if (mouseDown)
    {
        // Move the form by the distance the mouse has travelled (in screen
        // coordinates) since the last event, then remember the new position.
        int xoffset = MousePosition.X - lastPos.X;
        int yoffset = MousePosition.Y - lastPos.Y;
        Left += xoffset;
        Top += yoffset;
        lastPos = MousePosition;
    }
}
private void progressBar1_MouseDown(object sender, MouseEventArgs e)
{
mouseDown = true;
lastPos = MousePosition;
}
private void progressBar1_MouseUp(object sender, MouseEventArgs e)
{
mouseDown = false;
}
}
A: Microsoft KB Article 320687 has a detailed answer to this question.
Basically, you override the WndProc method to return HTCAPTION to the WM_NCHITTEST message when the point being tested is in the client area of the form -- which is, in effect, telling Windows to treat the click exactly the same as if it had occurred on the caption of the form.
private const int WM_NCHITTEST = 0x84;
private const int HTCLIENT = 0x1;
private const int HTCAPTION = 0x2;
protected override void WndProc(ref Message m)
{
switch(m.Msg)
{
case WM_NCHITTEST:
base.WndProc(ref m);
if ((int)m.Result == HTCLIENT)
m.Result = (IntPtr)HTCAPTION;
return;
}
base.WndProc(ref m);
}
A: The accepted answer is a cool trick, but it doesn't always work if the Form is covered by a Fill-docked child control like a Panel (or derivatives), for example, because that control will eat almost all of the Windows messages.
Here is a simple approach that also works in this case: derive from the control in question (use this class instead of the standard one) and handle the mouse messages like this:
private class MyTableLayoutPanel : Panel // or TableLayoutPanel, etc.
{
private Point _mouseDown;
private Point _formLocation;
private bool _capture;
// NOTE: we cannot use the WM_NCHITTEST / HTCAPTION trick because the table is in control, not the owning form...
protected override void OnMouseDown(MouseEventArgs e)
{
_capture = true;
_mouseDown = e.Location;
_formLocation = ((Form)TopLevelControl).Location;
}
protected override void OnMouseUp(MouseEventArgs e)
{
_capture = false;
}
protected override void OnMouseMove(MouseEventArgs e)
{
if (_capture)
{
int dx = e.Location.X - _mouseDown.X;
int dy = e.Location.Y - _mouseDown.Y;
Point newLocation = new Point(_formLocation.X + dx, _formLocation.Y + dy);
((Form)TopLevelControl).Location = newLocation;
_formLocation = newLocation;
}
}
}
A: Here is a way to do it using a P/Invoke.
public const int WM_NCLBUTTONDOWN = 0xA1;
public const int HTCAPTION = 0x2;
[DllImport("User32.dll")]
public static extern bool ReleaseCapture();
[DllImport("User32.dll")]
public static extern int SendMessage(IntPtr hWnd, int Msg, int wParam, int lParam);
void Form_Load(object sender, EventArgs e)
{
this.MouseDown += new MouseEventHandler(Form_MouseDown);
}
void Form_MouseDown(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
ReleaseCapture();
SendMessage(Handle, WM_NCLBUTTONDOWN, HTCAPTION, 0);
}
}
A: VC++ 2010 Version (of FlySwat's):
#include <Windows.h>
namespace DragWithoutTitleBar {
using namespace System;
using namespace System::Windows::Forms;
using namespace System::ComponentModel;
using namespace System::Collections;
using namespace System::Data;
using namespace System::Drawing;
public ref class Form1 : public System::Windows::Forms::Form
{
public:
Form1(void) { InitializeComponent(); }
protected:
~Form1() { if (components) { delete components; } }
private:
System::ComponentModel::Container ^components;
HWND hWnd;
#pragma region Windows Form Designer generated code
void InitializeComponent(void)
{
this->SuspendLayout();
this->AutoScaleDimensions = System::Drawing::SizeF(6, 13);
this->AutoScaleMode = System::Windows::Forms::AutoScaleMode::Font;
this->ClientSize = System::Drawing::Size(640, 480);
this->FormBorderStyle = System::Windows::Forms::FormBorderStyle::None;
this->Name = L"Form1";
this->Text = L"Form1";
this->Load += gcnew EventHandler(this, &Form1::Form1_Load);
this->MouseDown += gcnew System::Windows::Forms::MouseEventHandler(this, &Form1::Form1_MouseDown);
this->ResumeLayout(false);
}
#pragma endregion
private: System::Void Form1_Load(Object^ sender, EventArgs^ e) {
hWnd = static_cast<HWND>(Handle.ToPointer());
}
private: System::Void Form1_MouseDown(Object^ sender, System::Windows::Forms::MouseEventArgs^ e) {
if (e->Button == System::Windows::Forms::MouseButtons::Left) {
::ReleaseCapture();
::SendMessage(hWnd, /*WM_NCLBUTTONDOWN*/ 0xA1, /*HT_CAPTION*/ 0x2, 0);
}
}
};
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How can I format a javascript date to be serialized by jQuery I am trying to set a javascript date so that it can be submitted via JSON to a .NET type, but when attempting to do this, jQuery sets the date to a full string, what format does it have to be in to be converted to a .NET type?
var regDate = student.RegistrationDate.getMonth() + "/" + student.RegistrationDate.getDate() + "/" + student.RegistrationDate.getFullYear();
j("#student_registrationdate").val(regDate); // value to serialize
I am using MonoRail on the server to perform the binding to a .NET type; that aside, I need to know what to set the form hidden field value to so it gets properly sent to the .NET code.
A: This MSDN article has some example Date strings that are parse-able is that what you're looking for?
string dateString = "5/1/2008 8:30:52 AM";
DateTime date1 = DateTime.Parse(dateString, CultureInfo.InvariantCulture);
A: As travis suggests, you could simply change the parameter or class property (depending on what you are passing back) to a string, then parse it as in his example.
You may also want to take a look at this article. It suggests that direct conversion for DateTime JSON serialization uses something more like the ticks property.
A: I suggest you use the YYYY-MM-DD notation, which offers the best combo of unambiguity and readability.
so:
var regDate = student.RegistrationDate.getFullYear() + "-" + (student.RegistrationDate.getMonth() + 1) + "-" + student.RegistrationDate.getDate(); // getMonth() is zero-based, so add 1
j("#student_registrationdate").val(regDate);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: TFS Lifecycle Management for Build Environment How would you manage the lifecycle and automated build process when some of the projects (C# .csproj projects) are part of the actual build system?
Example:
A .csproj is a project that uses MSBuild tasks that are implemented in BuildEnv.csproj.
Both projects are part of the same product (meaning, BuildEnv.csproj frequently changes as the product is being developed and not a 3rd party that is rarely updated)
A: You must factor this out into two separate "projects", otherwise you'll spend ages chasing your tail trying to find out if a broken build is due to changes in the build system or changes in the code being developed.
Previously we've factored the two systems out into separate projects in CVS.
You want to be able to vary one thing while keeping the other constant to limit what you would have to look at when performing forensic analysis.
Hope that helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can Windows' built-in ZIP compression be scripted? Is the ZIP compression that is built into Windows XP/Vista/2003/2008 able to be scripted at all? What executable would I have to call from a BAT/CMD file? or is it possible to do it with VBScript?
I realize that this is possible using WinZip, 7-Zip and other external applications, but I'm looking for something that requires no external applications to be installed.
A: Just for clarity: GZip is not an MS-only algorithm as suggested by Guy Starbuck in his comment from August.
The GZipStream in System.IO.Compression uses the Deflate algorithm, just the same as the zlib library, and many other zip tools. That class is fully interoperable with unix utilities like gzip.
The GZipStream class is not scriptable from the commandline or VBScript, to produce ZIP files, so it alone would not be an answer the original poster's request.
The free DotNetZip library does read and produce zip files, and can be scripted from VBScript or Powershell. It also includes command-line tools to produce and read/extract zip files.
Here's some code for VBScript:
dim filename
filename = "C:\temp\ZipFile-created-from-VBScript.zip"
WScript.echo("Instantiating a ZipFile object...")
dim zip
set zip = CreateObject("Ionic.Zip.ZipFile")
WScript.echo("using AES256 encryption...")
zip.Encryption = 3
WScript.echo("setting the password...")
zip.Password = "Very.Secret.Password!"
WScript.echo("adding a selection of files...")
zip.AddSelectedFiles("*.js")
zip.AddSelectedFiles("*.vbs")
WScript.echo("setting the save name...")
zip.Name = filename
WScript.echo("Saving...")
zip.Save()
WScript.echo("Disposing...")
zip.Dispose()
WScript.echo("Done.")
Here's some code for Powershell:
[System.Reflection.Assembly]::LoadFrom("c:\\dinoch\\bin\\Ionic.Zip.dll");
$directoryToZip = "c:\\temp";
$zipfile = new-object Ionic.Zip.ZipFile;
$e= $zipfile.AddEntry("Readme.txt", "This is a zipfile created from within powershell.")
$e= $zipfile.AddDirectory($directoryToZip, "home")
$zipfile.Save("ZipFiles.ps1.out.zip");
In a .bat or .cmd file, you can use the zipit.exe or unzip.exe tools. Eg:
zipit NewZip.zip -s "This is string content for an entry" Readme.txt src
A: There are VBA methods to zip and unzip using the windows built in compression as well, which should give some insight as to how the system operates. You may be able to build these methods into a scripting language of your choice.
The basic principle is that within windows you can treat a zip file as a directory, and copy into and out of it. So to create a new zip file, you simply make a file with the extension .zip that has the right header for an empty zip file. Then you close it, and tell windows you want to copy files into it as though it were another directory.
Unzipping is easier - just treat it as a directory.
In case the web pages are lost again, here are a few of the relevant code snippets:
ZIP
Sub NewZip(sPath)
'Create empty Zip File
'Changed by keepITcool Dec-12-2005
If Len(Dir(sPath)) > 0 Then Kill sPath
Open sPath For Output As #1
Print #1, Chr$(80) & Chr$(75) & Chr$(5) & Chr$(6) & String(18, 0)
Close #1
End Sub
Function bIsBookOpen(ByRef szBookName As String) As Boolean
' Rob Bovey
On Error Resume Next
bIsBookOpen = Not (Application.Workbooks(szBookName) Is Nothing)
End Function
Function Split97(sStr As Variant, sdelim As String) As Variant
'Tom Ogilvy
Split97 = Evaluate("{""" & _
Application.Substitute(sStr, sdelim, """,""") & """}")
End Function
Sub Zip_File_Or_Files()
Dim strDate As String, DefPath As String, sFName As String
Dim oApp As Object, iCtr As Long, I As Integer
Dim FName, vArr, FileNameZip
DefPath = Application.DefaultFilePath
If Right(DefPath, 1) <> "\" Then
DefPath = DefPath & "\"
End If
strDate = Format(Now, " dd-mmm-yy h-mm-ss")
FileNameZip = DefPath & "MyFilesZip " & strDate & ".zip"
'Browse to the file(s), use the Ctrl key to select more files
FName = Application.GetOpenFilename(filefilter:="Excel Files (*.xl*), *.xl*", _
MultiSelect:=True, Title:="Select the files you want to zip")
If IsArray(FName) = False Then
'do nothing
Else
'Create empty Zip File
NewZip (FileNameZip)
Set oApp = CreateObject("Shell.Application")
I = 0
For iCtr = LBound(FName) To UBound(FName)
vArr = Split97(FName(iCtr), "\")
sFName = vArr(UBound(vArr))
If bIsBookOpen(sFName) Then
MsgBox "You can't zip a file that is open!" & vbLf & _
"Please close it and try again: " & FName(iCtr)
Else
'Copy the file to the compressed folder
I = I + 1
oApp.Namespace(FileNameZip).CopyHere FName(iCtr)
'Keep script waiting until Compressing is done
On Error Resume Next
Do Until oApp.Namespace(FileNameZip).items.Count = I
Application.Wait (Now + TimeValue("0:00:01"))
Loop
On Error GoTo 0
End If
Next iCtr
MsgBox "You find the zipfile here: " & FileNameZip
End If
End Sub
UNZIP
Sub Unzip1()
Dim FSO As Object
Dim oApp As Object
Dim Fname As Variant
Dim FileNameFolder As Variant
Dim DefPath As String
Dim strDate As String
Fname = Application.GetOpenFilename(filefilter:="Zip Files (*.zip), *.zip", _
MultiSelect:=False)
If Fname = False Then
'Do nothing
Else
'Root folder for the new folder.
'You can also use DefPath = "C:\Users\Ron\test\"
DefPath = Application.DefaultFilePath
If Right(DefPath, 1) <> "\" Then
DefPath = DefPath & "\"
End If
'Create the folder name
strDate = Format(Now, " dd-mm-yy h-mm-ss")
FileNameFolder = DefPath & "MyUnzipFolder " & strDate & "\"
'Make the normal folder in DefPath
MkDir FileNameFolder
'Extract the files into the newly created folder
Set oApp = CreateObject("Shell.Application")
oApp.Namespace(FileNameFolder).CopyHere oApp.Namespace(Fname).items
'If you want to extract only one file you can use this:
'oApp.Namespace(FileNameFolder).CopyHere _
'oApp.Namespace(Fname).items.Item("test.txt")
MsgBox "You find the files here: " & FileNameFolder
On Error Resume Next
Set FSO = CreateObject("scripting.filesystemobject")
FSO.deletefolder Environ("Temp") & "\Temporary Directory*", True
End If
End Sub
A: Yes, this can be scripted with VBScript. For example the following code can create a zip from a directory:
Dim fso, winShell, MyTarget, MySource, file
Set fso = CreateObject("Scripting.FileSystemObject")
Set winShell = createObject("shell.application")
MyTarget = Wscript.Arguments.Item(0)
MySource = Wscript.Arguments.Item(1)
Wscript.Echo "Adding " & MySource & " to " & MyTarget
'create a new clean zip archive
Set file = fso.CreateTextFile(MyTarget, True)
file.write("PK" & chr(5) & chr(6) & string(18,chr(0)))
file.close
winShell.NameSpace(MyTarget).CopyHere winShell.NameSpace(MySource).Items
do until winShell.namespace(MyTarget).items.count = winShell.namespace(MySource).items.count
wscript.sleep 1000
loop
Set winShell = Nothing
Set fso = Nothing
You may also find http://www.naterice.com/blog/template_permalink.asp?id=64 helpful as it includes a full Unzip/Zip implementation in VBScript.
If you do a size check every 500 ms rather than an item count it works better for large files. Win 7 writes the file instantly although it's not finished compressing:
set fso=createobject("scripting.filesystemobject")
Set h=fso.getFile(DestZip)
do
    max = h.size ' remember the current size, then wait and see if it is still growing
    wscript.sleep 500
loop while h.size > max
Works great for huge amounts of log files.
A: There are both zip and unzip executables (as well as a boat load of other useful applications) in the UnxUtils package available on SourceForge (http://sourceforge.net/projects/unxutils). Copy them to a location in your PATH, such as 'c:\windows', and you will be able to include them in your scripts.
This is not the perfect solution (or the one you asked for), but a decent workaround.
A: To create a compressed archive you can use the utility MAKECAB.EXE.
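For example (note that MAKECAB produces a .cab archive, not a .zip):
makecab myfile.txt myfile.cab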
A: Here's my attempt to summarize the built-in capabilities of Windows for compression and uncompression - How can I compress (/ zip ) and uncompress (/ unzip ) files and folders with batch file without using any external tools?
with a few given solutions that should work on almost every Windows machine.
As regards Shell.Application and WSH, I preferred JScript,
as it allows a hybrid batch/JScript file (with a .bat extension) that requires no temp files. I've put unzip and zip capabilities in one file, plus a few more features.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: SQL Server: Get data for only the past year I am writing a query in which I have to get the data for only the last year. What is the best way to do this?
SELECT ... FROM ... WHERE date > '8/27/2007 12:00:00 AM'
A: Well, I think something is missing here. The user wants to get data from the last year and not from the last 365 days. There is a huge difference. In my opinion, data from the last year is all data from 2007 (if I am in 2008 now). So the right answer would be:
SELECT ... FROM ... WHERE YEAR(DATE) = YEAR(GETDATE()) - 1
Then if you want to restrict this query, you can add some other filter, but always searching in the last year.
SELECT ... FROM ... WHERE YEAR(DATE) = YEAR(GETDATE()) - 1 AND DATE > '05/05/2007'
A: The most readable, IMO:
SELECT * FROM TABLE WHERE Date >
DATEADD(yy, -1, CONVERT(datetime, CONVERT(varchar, GETDATE(), 101)))
Which:
*
*Gets now's datetime GETDATE() = #8/27/2008 10:23am#
*Converts to a string with format 101 CONVERT(varchar, #8/27/2008 10:23am#, 101) = '8/27/2008'
*Converts to a datetime CONVERT(datetime, '8/27/2008') = #8/27/2008 12:00AM#
*Subtracts 1 year DATEADD(yy, -1, #8/27/2008 12:00AM#) = #8/27/2007 12:00AM#
There are variants with DATEDIFF and DATEADD to get you midnight of today, but they tend to be rather obtuse (though slightly better on performance - not that you'd notice compared to the reads required to fetch the data).
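For reference, one common form of that idiom - DATEDIFF/DATEADD against day zero strips the time portion, and then a year is subtracted:
SELECT * FROM TABLE WHERE Date >
       DATEADD(yy, -1, DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0))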
A: Look up dateadd in BOL
dateadd(yy,-1,getdate())
A: The following adds -1 years to the current date:
SELECT ... From ... WHERE date > DATEADD(year,-1,GETDATE())
A: GETDATE() returns current date and time.
If the last year starts at midnight of the current day last year (as in the original example), you should use something like:
DECLARE @start datetime
SET @start = dbo.getdatewithouttime(DATEADD(year, -1, GETDATE())) -- cut time (hours, minutes, etc.) -- a getdatewithouttime() function doesn't exist in MS SQL -- you have to write one
SELECT column1, column2, ..., columnN FROM table WHERE date >= @start
A: I found this page while looking for a solution that would help me select results from a prior calendar year. Most of the results shown above seem to return items from the past 365 days, which didn't work for me.
At the same time, it did give me enough direction to solve my needs in the following code - which I'm posting here for any others who have the same need as mine and who may come across this page in searching for a solution.
SELECT .... FROM .... WHERE year([your date column]) = year(DATEADD(year,-1,getdate()))
Thanks to those above whose solutions helped me arrive at what I needed.
A: I, like @D.E. White, came here for similar but different reasons than the original question. The original question asks for the last 365 days. @samjudson's answer provides that. @D.E. White's answer returns results for the prior calendar year.
My query is a bit different in that it works for the prior year up to and including the current date:
SELECT .... FROM .... WHERE year(date) > year(DATEADD(year, -2, GETDATE()))
For example, on Feb 17, 2017 this query returns results from 1/1/2016 to 2/17/2017
A: For some reason none of the results above worked for me.
This selects the last 365 days (note: CURDATE() and INTERVAL are MySQL syntax, not T-SQL).
SELECT ... From ... WHERE date BETWEEN CURDATE() - INTERVAL 1 YEAR AND CURDATE()
A: The other suggestions are good if you have "SQL only".
However, I suggest that - if possible - you calculate the date in your program and insert it as a string in the SQL query.
At least for big tables (i.e. several million rows, maybe combined with joins) that will give you a considerable speed improvement, as the optimizer can work with a constant much better.
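A sketch of that idea in C# - the query text is a placeholder; the point is that the cutoff reaches the server as a constant the optimizer can use:
string cutoff = DateTime.Today.AddYears(-1).ToString("yyyyMMdd"); // unambiguous date format
string sql = "SELECT ... FROM ... WHERE date > '" + cutoff + "'";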
A: Arguments for the DATEADD function:
DATEADD(datepart, number, date)
datepart can be: yy, qq, mm, dy, dd, wk, dw, hh, mi, ss, ms
number is an expression that can be resolved to an int that is added to a datepart of date
date is an expression that can be resolved to a time, date, smalldatetime, datetime, datetime2, or datetimeoffset value.
A: declare @iMonth int
declare @sYear varchar(4)
declare @sMonth varchar(2)
set @iMonth = 0
while @iMonth > -12
begin
set @sYear = year(DATEADD(month,@iMonth,GETDATE()))
set @sMonth = right('0'+cast(month(DATEADD(month,@iMonth,GETDATE())) as varchar(2)),2)
select @sYear + @sMonth
set @iMonth = @iMonth - 1
end
A: I had a similar problem but the previous coder only provided the date in mm-yyyy format. My solution is simple but might prove helpful to some (I also wanted to be sure beginning and ending spaces were eliminated):
SELECT ... FROM .... WHERE
CONVERT(datetime, REPLACE(LEFT(LTRIM([MoYr]), 2), '-', '') + '/01/' + RIGHT(RTRIM([MoYr]), 4)) >= DATEADD(year, -1, GETDATE())
A: Here's my version.
YEAR(NOW())- 1
Example:
YEAR(c.contractDate) = YEAR(NOW())- 1
A: For me this worked well
SELECT DATE_ADD(Now(),INTERVAL -2 YEAR);
A: If you are trying to calculate "rolling" days, you can simplify it by using:
Select ... FROM ... WHERE [DATE] > (GETDATE()-[# of Days])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "125"
} |
Q: enter key to insert newline in asp.net multiline textbox control I have some C# / asp.net code I inherited which has a textbox which I want to make multiline. I did so by adding textmode="multiline" but when I try to insert a newline, the enter key instead submits the form :P
I googled around and it seems like the default behavior should be for enter (or control-enter) to insert a newline. Like I said I inherited the code so I'm not sure if there's javascript monkeying around or if there's just a simple asp.net thing I have to do.
A: It turns out this is a bug with Firefox + ASP.NET where the generated javascript for the defaultButton stuff doesn't work in Firefox. I had to put a replacement for the WebForm_FireDefatultButton function as described here:
function WebForm_FireDefaultButton(event, target) {
var element = event.target || event.srcElement;
if (event.keyCode == 13 &&
!(element &&
element.tagName.toLowerCase() == "textarea"))
{
var defaultButton;
if (__nonMSDOMBrowser)
{
defaultButton = document.getElementById(target);
}
else
{
defaultButton = document.all[target];
}
if (defaultButton && typeof defaultButton.click != "undefined")
{
defaultButton.click();
event.cancelBubble = true;
if (event.stopPropagation)
{
event.stopPropagation();
}
return false;
}
}
return true;
}
A: I created a sample page with a TextBox and a Button and it worked fine for me:
<asp:TextBox runat="server" ID="textbox1" TextMode="MultiLine" />
<br />
<br />
<asp:Button runat="server" ID="button1" Text="Button 1" onclick="button1_Click" />
So it most likely depends on either some other property you have set, or some other control on the form.
Edit: TextChanged event is only triggered when the TextBox loses focus, so that can't be the issue.
A:
I can't find that "WebForm_FireDefaultButton" javascript anywhere, is it something asp.net is generating?
Yes.
That's generated to support the DefaultButton functionality of the form and/or Panel containing your controls. This is the source for it:
function WebForm_FireDefaultButton(event, target) {
if (event.keyCode == 13) {
var src = event.srcElement || event.target;
if (!src || (src.tagName.toLowerCase() != "textarea")) {
var defaultButton;
if (__nonMSDOMBrowser) {
defaultButton = document.getElementById(target);
}
else {
defaultButton = document.all[target];
}
if (defaultButton && typeof (defaultButton.click) != "undefined") {
defaultButton.click();
event.cancelBubble = true;
if (event.stopPropagation) event.stopPropagation();
return false;
}
}
}
return true;
}
A: I suspect it's (like you say) some custom javascript code.
The original asp.net control works fine... you are going to have to check the code
A: Are you handling the TextChanged event for the textbox? That would mean ASP.NET sets the textbox to cause a postback (submit the page) for anything that might cause the textbox to lose focus, including the Enter key.
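If that turns out to be the cause, a quick thing to check is that AutoPostBack is off on the textbox - a minimal sketch (the ID and handler name are made up):
<asp:TextBox ID="comments" runat="server" TextMode="MultiLine"
    AutoPostBack="false" OnTextChanged="comments_TextChanged" />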
A: @dave-ward, I just dug through mounds of javascript. Most was ASP.NET generated stuff for validation and AJAX; there's a bunch starting with "WebForm_" that I guess is standard stuff to do the defaultbutton, etc. The only javascript we put on the page is for toggling visibility and doing some custom validation...
edit: I did find the below, though I don't understand it :P It's the beginning of the form the textarea is in, plus a script found later. (Note: something on stackoverflow is messing with the underscores.)
<form name="Form1" method="post" action="default.aspx" onsubmit="javascript:return WebForm_OnSubmit();" id="Form1">
<script type="text/javascript">
//<![CDATA[
var theForm = document.forms['Form1'];
if (!theForm) {
theForm = document.Form1;
}
function __doPostBack(eventTarget, eventArgument) {
if (!theForm.onsubmit || (theForm.onsubmit() != false)) {
theForm.__EVENTTARGET.value = eventTarget;
theForm.__EVENTARGUMENT.value = eventArgument;
theForm.submit();
}
}
//]]>
</script>
A: http://blog.codesta.com/codesta_weblog/2007/12/net-gotchas---p.html worked for me.
A: this worked for me
<asp:TextBox ID="emailTo" TextMode="MultiLine" Rows="5" Columns="25" Wrap="true" Style="white-space:normal" runat="server"></asp:TextBox>
A: You can use \n to match the Enter key (a newline) in a validation expression,
e.g.
[a-zA-Z 0-9/.\n]{20,500}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: WPF setting a MenuItem.Icon in code I have an images folder with a png in it. I would like to set a MenuItem's icon to that png. How do I write this in procedural code?
A: menuItem.Icon = new System.Windows.Controls.Image
{
Source = new BitmapImage(new Uri("images/sample.png", UriKind.Relative))
};
A: This is a bit shorter :D
<MenuItem Header="Example">
<MenuItem.Icon>
<Image Source="pack://siteoforigin:,,,/Resources/Example.png"/>
</MenuItem.Icon>
</MenuItem>
A: <MenuItem>
<MenuItem.Icon>
<Image>
<Image.Source>
<BitmapImage UriSource="/your_assembly;component/your_path_here/Image.png" />
</Image.Source>
</Image>
</MenuItem.Icon>
</MenuItem>
Just make sure your image in also included in the project file and marked as resource, and you are good to go :)
A: Arcturus's answer is good because it means you have the image file in your project rather than an independent folder.
So, in code that becomes...
menuItem.Icon = new Image
{
Source = new BitmapImage(new Uri("pack://application:,,,/your_assembly;component/yourpath/Image.png"))
}
A: This is how I used it (this way it doesn't need to be built into the assembly):
MenuItem item = new MenuItem();
string imagePath = "D:\\Images\\Icon.png";
Image icon = new Image();
icon.Source = new BitmapImage(new Uri(imagePath, UriKind.Absolute));
item.Icon = icon;
A: This is what worked for me
<MenuItem Header="delete ctrl-d" Click="cmiDelete_Click">
<MenuItem.Icon>
<Image>
<Image.Source>
<ImageSource>Resources/Images/delete.png</ImageSource>
</Image.Source>
</Image>
</MenuItem.Icon>
</MenuItem>
A: For those of you using vb.net, to do this you need to use this:
menuItem.Icon = New Image() With {.Source = New BitmapImage(New Uri("pack://application:,,,/your_assembly;component/yourpath/Image.png"))}
A: You can also use Visual Studio to insert an icon. This is the easiest way:
*

*Right click your project in the Solution Explorer

*Choose Properties

*Make sure you're on the Application page.

*At Resources you see: Icon and Manifest

*At Icon: click Browse and pick your icon.
Problem solved.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39"
} |
Q: Tables instead of DIVs
Possible Duplicate:
Why not use tables for layout in HTML?
Under what conditions should you choose tables instead of DIVs in HTML coding?
A: Agree with Thomas -- the general rule of thumb is if it makes sense on a spreadsheet, you can use a table. Otherwise not.
Just don't use tables as your layout for the page, that's the main problem people have with them.
A: I can see the argument for tables for forms, but there is a nicer alternative... you just have to roll up your sleeves and learn CSS.
for example:
<fieldset>
<legend>New Blog Post</legend>
<label for="title">Title:</label>
<input type="text" name="title" id="title" />
<label for="body">Body:</label>
<textarea name="body" id="body" rows="6" cols="40">
</textarea>
</fieldset>
You can take that HTML and lay out the form with either side-by-side labels, or labels on top of the textboxes (which is easier). Having the flexibility really helps. It's also less HTML than the table equivalent of either.
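A rough sketch of the side-by-side variant (the measurements are arbitrary):
label {
    float: left;       /* labels sit in a left-hand column */
    width: 8em;
    text-align: right;
    margin-right: 0.5em;
}
input, textarea {
    margin-bottom: 0.5em;
}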
For some excellent examples of CSS forms, check out these excellent examples:
http://jeffhowden.com/code/css/forms/
http://www.sitepoint.com/article/fancy-form-design-css/
http://www.smashingmagazine.com/2006/11/11/css-based-forms-modern-solutions/
A: I will usually opt for tables to display form-type information (First Name, Last Name, Address, etc.) where lining labels and fields across multiple rows is important. DIVs I use for layout.
Of course the table is wrapped in a DIV :)
A: When the data I am presenting is, indeed, tabular.
I find it ridiculous that some web designers used divs on tabular data on some sites.
One other use I would have for it would be forms, particularly label : textbox pairs. This could technically be done in div boxes, but it's much, much easier to do this in tables, and one can argue that label:textbox pairs are in fact tabular in nature.
A: Tables were designed for tabular content, not for layout.
So, don't ever feel bad if you use them to display data.
A: I use tables in two cases:
1) Tabular data
2) Any time I want my layout to dynamically size itself to its contents
A: If your data can be laid out in a two-dimensional grid, use <table>. If it can't, don't. Using <table> for anything else is a hack (though frequently not one with proper alternatives, especially when it comes to compatibility with older browsers). Not using <table> for something that clearly should be one is equally bad. <div> and <span> aren't for everything; in fact, being completely meaningless on a semantic level, they are to be avoided at all costs in favor of more semantic alternatives.
A: On this subject, I thought this site was pretty funny.
A: I used to do pure CSS but I abandoned that pursuit in favor of hybrid table/css approach as the most pragmatic approach. Ironically, it's also because of accessibility. Ever try doing CSS on Sidekick? What a nightmare! Ever seen how CSS-based websites are rendered on new browsers? Elements would overlap or just don't display correctly that I had to turn off the CSS. Ever try resizing CSS-based websites? They look awful and often detrimental to the blind if they use zooming features in the browser! If you do that with tables, they scale much better. When people talk about accessibility, I find that many have no clue and it annoys me because I am disabled and they aren't. Have they really worked with the blind? The deaf? If accessibility is a main concern, why the hell are 99% of videos not closed captioned? Many CSS purists use AJAX but fail to realize that AJAX often makes content inaccessible.
Pragmatically, it's ok to use a single table as a main layout as LONG as you provide the information in a logical flow if the cells are stacked (something you'd see on mobiles). The CSS theory sounds great but partially workable in real life with too many hacks, something that is against the ideals of "purity."
Since using the CSS with tables approach, I've saved so much time designing a website and maintenance is much easier. Fewer hacks, more intuitive. I get fewer calls from people saying "I inserted a DIV and now it looks all screwed up!" And even more importantly, absolutely NO accessibility issues.
A: Usually whenever you're not using the table to provide a layout.
Tables -> data
Divs -> layout
(mainly)
A: Note: At the time the question was asked, there were practical reasons for using tables for some layout purposes. This is not necessary anymore due to browser improvements, so I have updated the answer.
HTML <table>-elements should be used when the data logically has a two dimensional structure. If the data can be structured in rows and columns and you can meaningfully apply headers to both rows and columns, then you probably have tabular data.
If you only have a single row or single column of data, then it is not tabular data - it is just linear content. You need at least two rows and two columns before it can be considered tabular data.
Some examples:
Using tables for placing sidebars and page headers/footers. This is not tabular data but page layout. Something like css grid or flexbox is more appropriate.
Using tables for newspaper-style columns. This is not tabular data - you would still read it linearly. Something like css columns is more appropriate.
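As a tiny sketch of the grid approach for the header/sidebar/footer case (the area names are illustrative):
body {
    display: grid;
    grid-template-areas:
        "header header"
        "nav    main"
        "footer footer";
    grid-template-columns: 12em 1fr;
}
header { grid-area: header; }
nav    { grid-area: nav; }
main   { grid-area: main; }
footer { grid-area: footer; }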
A: 1) For displaying tabular data. A calendar is one example of tabular data that isn't always obvious at first.
2) I work for a medical billing company, and nearly all of the layout for our internal work is done using CSS. However, from time to time we get paper forms from insurance companies that our billers have to use, and a program will convert them to an html format that they can fill out and print via the intranet. To make sure the forms are accepted they need to match the original paper version very closely. For these it's just simpler to fall back to tables.
A: Tables are used for tabular data. If it makes sense to put it in a spreadsheet then use a table. Otherwise there is a better tag for you to be using such as div, span, ul, etc.
A: The whole "Tables vs Divs" thing just barely misses the mark. It's not "table" or "div". It's about using semantic html.
Even the div tag plays only a small part in a well laid out page. Don't overuse it. You shouldn't need that many if you put your html together correctly. Things like lists, field sets, legends, labels, paragraphs, etc can replace much of what a div or span is often used to accomplish. Div should be used primarily when it makes sense to indicate a logical division, and only appropriated for extra layout when absolutely necessary. The same is true for table; use it when you have tabular data, but not otherwise.
Then you have a more semantic page and you don't need quite as many classes defined in your CSS; you can target the tags directly instead. Possibly most importantly, you have a page that will score much better with Google (anecdotally) than the equivalent table or div-heavy page. Most of all it will help you better connect with a portion of your audience.
So if we go back and look at it in terms of table vs div, it's my opinion that we've actually come to the point where div is over-used and table is under-used. Why? Because when you really think about it, there are a lot of things out there that fall into the category of "tabular data" that tend to be overlooked. Answers and comments on this very web page, for example. They consist of multiple records, each with the same set of fields. They're even stored in a sql server table, for crying out loud. This is the exact definition of tabular data. This means an html table tag would absolutely be a good semantic choice to layout something like the posts here on Stack Overflow. The same principle applies to many other things as well. It may not be a good idea to use a table tag to set up a three column layout, but it's certainly just fine to use it for grids and lists... except, of course, when you can actually use the ol or ul (list) tags.
A: I would make a distinction between HTML for public websites (tables no-no-no, divs yes-yes-yes) and HTML for semi-public or private web applications, where I tend to prefer tables even for page layout.
Most of the respectable reasons why "Tables are bad" are usually an issue only for public websites, but not so much of a problem with webapps. If I can get the same layout and have a more consistent look across browsers by using a TABLE than a complicated CSS+DIV, then I usually go ahead and aprove the TABLE.
A: As many posters have already mentioned, you should use tables to display for tabular data.
Tables were introduced in HTML 3.2 here is the relevant paragraph from the spec on their usage:
[tables] can be used to markup tabular material or for layout purposes...
A: I believe just tabular content. For example, if you printed out a database table or spreadsheet-like data to HTML.
A: If you would like to have semantically correct HTML, then you should use tables only for tabular data.
Otherwise you use tables for everything you want, but there probably is a way to do the same thing using divs and CSS.
A: @Marius:
Is the layout tabular data? No, while it was standard a few years ago it's not now :-)
One other use I would have for it would be forms, particularly label : textbox pairs. This could technically be done in div boxes, but it's much, much easier to do this in tables, and one can argue that label:textbox pairs are in fact tabular in nature.
I tend to give the label a fixed width, or display it on the line above.
A: @Jon Limjap
For label : textbox, neither divs nor tables are appropriate: <dl>s are
A:
One other use I would have for it
would be forms, particularly label :
textbox pairs. This could technically
be done in div boxes, but it's much,
much easier to do this in tables, and
one can argue that label:textbox pairs
are in fact tabular in nature.
I see that a fair amount, especially among MS developers. And I've done it a fair amount in the past. It works, but it ignores some accessibility and best-practice factors. You should use labels, inputs, fieldsets, legends, and CSS to layout your forms. Why? Because that's what they are for, it's more efficient, and I think accessibility is important. But that's just my personal preference. I think everyone should try it that way first before condemning it. It's quick, easy, and clean.
A: Whenever a page containing tables is loaded, the browser takes more time to render the tags properly, whereas if divs are used, the browser takes less time because the page is lighter. Moreover, we can apply CSS to make the divs appear as a table.
Tables are normally heavyweight and divs are lightweight.
A: Divs are simple divisions; they are meant to be used to group sections of the page that are linked in a semantic sense. They carry no implicit meaning other than that.
Tables were originally intended to display scientific data, such as lab results on screen. Dave Raggett certainly didn't intend them to become used to implement layout.
I find it keeps it fairly clear in your mind if you remember the above, if its something you would normally expect to read in a table, then that's the appropriate tag, if its pure layout, then use something else to accomplish your needs.
A: It is clear that DIVs are used for layout, but it has happened to me that I was "forced" to use tables to do a grid layout within a div structure, for this reason:
the addition of percentage values did not allow proper alignment with divs, while the same values expressed on table cells gave the expected result.
So I think that tables are still useful not only for data, but also for the situation above; on top of that, tables are still W3C compliant, and alternative browsers (for the disabled, for example) interpret them correctly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "116"
} |
Q: How Popular is the Seam Framework I'm using JBoss Seam Framework, but it's seems to me isn't very popular among java developers.
I want to know how many java programmers here are using it, and in what kind of projects.
Is as good as django, or RoR?
A: I have used JBoss Seam now for about a year and like it very much over Spring. Unfortunately, I don't use this at work, more for side projects and personal projects. For me, it saves me a lot of time developing new projects for clients. And, one big reason I use it primarily is, the tight integration with each layer and I never get any lazy load errors that I used to get with Spring (even after the filter and other hacks).
An equivalent Spring application would have much more boilerplate code within it to get stuff working. Spring does not integrate each layer very well, it more or less is a wrapper for a lot of different things, but doesn't glue itself together very well.
The other nice thing I like with Seam is they practice what they preach. Take a look at their website. Take a guess what it is running, hmm, a live example of their code. Seam Wiki, Seam Forums, etc. If you truly believe in your code, stand behind it. I would be happy to have their pager 24x7x365, I bet it rarely goes off.
While you write a lot less code, the learning curve is about twice as steep. The further I get in, the more I understand how to write good code. I would like to see more comments, but as far as coding style, it is well written.
On the negative side, just as any product you try to market, Seam was years after Spring had already become popular so Spring is by far still more popular. Search on Indeed and Seam only has a few hits. If you look on Spring, there are roughly 40k registered users, while Seam has about 7k.
Depends on what is important to you, as a Java developer/engineer/programmer, you should be able to work with both technologies and chances are, you will most likely encounter a Spring application before a Seam one. Learn both and how to leverage both. If you use both properly and know the nuances and quirks of each, development becomes much easier whether you're using Spring or Seam.
I don't agree with the statement, "Seam is the next Struts". Struts was a view technology whereas Seam integrates all layers. I will agree that it is a new concept like Struts and will bring the same impact to the Java community that Struts did. I don't think we'll see that until Java EE 6 and CDI become more popular, and of course Seam 3 is released.
Walter
A: Seam is fixed JSF based on annotations. No more crappy XML. I used it at work.
A: Hope this helps a little, but at my college our web applications course just got revamped. So now we are going the jsp, servlet, hibernate route with the second part of the course on mostly JBoss Seam. So who knows, it probably just needs time to grow in the community.
A: It really works for us: JSF + EJB 3.0 with the help of the Seam framework is really fantastic. But I have a question... why is this not becoming more popular for developing large-scale applications? I have seen that many are using other frameworks for developing large-scale Java EE applications. It seems to me that Seam really helps developers build a Java EE application... but still, why isn't it catching on?
A: I like Seam, have been using it for the past year professionally.
However, the question concerns its popularity. I can see the following indications that it is not very popular (at least in comparison to plain JSF or Spring):
*
*Its forum is very inactive (at least at this point, they are working hard on Seam 3). http://seamframework.org/Community/SeamCommunityForumSlightlyInactive
*You can also take a look at its comparison with Spring in Google insights for search: http://www.google.com/insights/search/?hl=en-US#cat=732&q=seam%2Cspring&cmpt=q
*I only know one other company here in Athens where they use it, and I know a handful of companies that use plain JSF, Struts or Spring (of course, Athens is not representative for all the world).
A: In our JBoss Seam in Action presentation at the Javapolis conference last year, my colleague and I said that 'Seam is the next Struts'. This needed some explanation, which I later wrote-up as Seam is the new Struts. Needless to say, we like Seam.
One indication of Seam's popularity is the level of traffic on the Seam Users Forum.
A: I would say that seam is a rather popular framework, it has great documentation, a great and helpful community and a forum with many many questions and problems answered.
It should be popular among developers who use JSF because it works great with JSF, but not only that... it fixes JSF in many ways (the s:convertEntity tag and the unified component model are my favourite examples).
A: We have been using Seam for a while in huge projects.
Easy to kick-off a new project, reverse engineering is very handy.
A: I have used JBoss Seam on two commercial projects for two different clients. Yet JBoss Seam is still a new approach to developing JSF Web Applications. One measure is the results from a Indeed Job Search.
Indeed Job Search
A: When Java was introduced in the 90s as Oak, the community did not embrace it because it was too powerful for its time; it was appreciated later on and is now running the show. Seam will get popular soon; if not, it can be rebranded, just as Oak was rebranded to Java.
A: I have been using Seam since Seam 1.2 in 2007, in mid-size and large projects, and sometimes in small projects with no more than 200 users. My main concern is productivity. Although my team had already gained obvious productivity from Spring since 2005, for some tricky clients developers had to hand-code JavaScript, which is time-consuming and error-prone. Seam was really helpful in this scenario because at that time most developers on my team had no experience with JSF. Happy to see Seam becoming more popular.
A: Seam has been discontinued in 2012. However, Apache DeltaSpike is the modern version of Seam, and this project is actively maintained, and it even won the 2014 Duke's Choice Award.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: NullReferenceException on User Control handle I have an Asp.NET application (VS2008, Framework 2.0). When I try to set a property on one of the user controls like
myUserControl.SomeProperty = someValue;
I get a NullReferenceException. When I debug, I found out that myUserControl is null. How is it possible that a user control handle is null? How do I fix this or how do I find what causes this?
A: Where are you trying to access the property? If you are in onInit, the control may not be loaded yet.
A: Where exactly in the code are you attempting to do this? It is possible that you are attempting to access the control too early in the page lifecycle and it has not been instantiated yet.
A: If you created the UserControl during runtime (through ControlCollection.Add), you need to create it on postback too.
Another case can be that your UserControl does not match the designer.cs page.
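For the first case, a rough sketch of recreating the control on every request, including postbacks (the control type, path, and placeholder name are hypothetical):
protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // recreate the dynamically added control with the same ID on every request
    myUserControl = (MyUserControl)LoadControl("~/Controls/MyUserControl.ascx");
    myUserControl.ID = "myUserControl";
    placeHolder1.Controls.Add(myUserControl);
}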
A: I was trying to set the property from markup on an outside user control. When I took the property to OnLoad, it worked.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can you test to see if two arrays are the same using CFML? Using CFML (ColdFusion Markup Langauge, aka ColdFusion), how can you compare if two single dimension arrays are the same?
A: To build on James' answer, I thought that JSON might be preferable over WDDX. In fact, it proves to be considerably more efficient. Comparing hashes is not that expensive, but serializing the data and then generating the hash could be (for larger and/or more complex data structures).
<cfsilent>
<!--- create some semi-complex test data --->
<cfset data = StructNew() />
<cfloop from="1" to="50" index="i">
<cfif variables.i mod 2 eq 0>
<cfset variables.data[variables.i] = StructNew()/>
<cfset tmp = variables.data[variables.i] />
<cfloop from="1" to="#variables.i#" index="j">
<cfset variables.tmp[variables.j] = 1 - variables.j />
</cfloop>
<cfelseif variables.i mod 3 eq 0>
<cfset variables.data[variables.i] = ArrayNew(1)/>
<cfset tmp = variables.data[variables.i] />
<cfloop from="1" to="#variables.i#" index="j">
<cfset variables.tmp[variables.j] = variables.j mod 6 />
</cfloop>
<cfset variables.data[variables.i] = variables.tmp />
<cfelse>
<cfset variables.data[variables.i] = variables.i />
</cfif>
</cfloop>
</cfsilent>
<cftimer label="JSON" type="inline">
<cfset jsonData = serializeJson(variables.data) />
<cfset jsonHash = hash(variables.jsonData) />
<cfoutput>
JSON: done.<br />
len=#len(variables.jsonData)#<br/>
hash=#variables.jsonHash#<br />
</cfoutput>
</cftimer>
<br/><br/>
<cftimer label="WDDX" type="inline">
<cfwddx action="cfml2wddx" input="#variables.data#" output="wddx" />
<cfset wddxHash = hash(variables.wddx) />
<cfoutput>
WDDX: done.<br />
len=#len(variables.wddx)#<br/>
hash=#variables.wddxHash#<br />
</cfoutput>
</cftimer>
Here's the output that the above code generates on my machine:
JSON: done.
len=7460
hash=5D0DC87FDF68ACA4F74F742528545B12
JSON: 0ms
WDDX: done.
len=33438
hash=94D9B792546A4B1F2FAF9C04FE6A00E1
WDDX: 47ms
While the data structure I'm serializing is fairly complex, it could easily be considered small. This should make the efficiency of JSON serialization over WDDX even more preferable.
At any rate, if I were to try to write a "compareAnything" method using hash comparison, I would use JSON serialization over WDDX.
A: The arrayCompare() user-defined function at cflib should do it
http://cflib.org/index.cfm?event=page.udfbyid&udfid=1210
A: Jason's answer is certainly the best, but something I've done before is to perform a hash comparison on objects that have been serialised to WDDX.
This method is only useful for relatively small arrays, but it's another option if you want to keep it purely CFML. It also has the benefit that you can apply the method to other data types as well...
Edit: Adam's entirely right (as you can see from his numbers) - JSON is much more economical, not only in this situation, but for serialization in general. In my defense, I'm stuck using CFMX 6.1, which has no inbuilt JSON functions, and was trying to avoid external libs.
A: I was looking into using CF's native Java objects awhile back and this question reminded me of a few blog posts that I had bookmarked as a result of my search.
ColdFusion array is actually an implementation of java list (java.util.List). So all the list methods are actually available for Array.
CF provides most of the list functionality using Array functions but there are few things possible with java list which you can not do directly with CF functions.
*
*Merge two arrays
*Append an array in the middle of another array
*Search for an element in an array
*Search array 1 to see if array 2's elements are all found
*Equality check
*Remove elements in array 1 from array 2
From: http://coldfused.blogspot.com/2007/01/extend-cf-native-objects-harnessing.html
Another resource I found shows how you can use the native Java Array class to get unique values and to create custom sorts functions in case you need to sort an array of dates, for instance.
http://www.barneyb.com/barneyblog/2008/05/08/use-coldfusion-use-java/
This second link contains links to other posts where the author shows how to use other Java classes natively to gain either functionality or speed over CF functions.
A: Assuming all of the values in the array are simple values, the easiest thing might be to convert the arrays to lists and just do string compares.
<cfif arrayToList(arrayA) IS arrayToList(arrayB)>
Arrays are equal!
</cfif>
Not as elegant as other solutions offered, but dead simple.
A: There's a very simple way of comparing two arrays using CFML's underlying java. According to a recent blog by Rupesh Kumar of Adobe (http://coldfused.blogspot.com/), ColdFusion arrays are an implementation of java lists (java.util.List). So all the Java list methods are available for CFML arrays.
So to compare 2 arrays all you need to do is use the equals method. It returns a YES if the arrays are equal and NO if they are not.
<cfset array1 = listToArray("tom,dick,harry,phred")/>
<cfset array2 = listToArray("dick,harry,phred") />
<cfset array3 = listToArray("tom,dick,harry,phred")/>
<cfoutput>
Array2 equals Array1 #array2.equals(array1)# (returns a NO) <br/>
Array3 equals Array1 #array3.equals(array1)# (returns a YES) <br/>
</cfoutput>
A: All of these solutions check that two arrays are equal. They don't check that they are the same object. The only way I know to do that in native CF is to change the object in some way and see if both references have the change in.
I also think that you should be wary of relying on CF implementing certain java classes or exposing methods. These are liable to change.
As for comparing two arrays to see if the contents are the same, why not just:
1) Check the lengths (if different, return false).
2) If the lengths are the same, loop from 1 to the array length and compare the elements; break and return false as soon as one differs.
This will work for simple values.
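A minimal CFML sketch of that approach (simple values only):
<cffunction name="arraysAreEqual" returntype="boolean" output="false">
    <cfargument name="a" type="array" required="true" />
    <cfargument name="b" type="array" required="true" />
    <!--- different lengths can never be equal --->
    <cfif arrayLen(arguments.a) neq arrayLen(arguments.b)>
        <cfreturn false />
    </cfif>
    <!--- compare element by element, bailing out on the first mismatch --->
    <cfloop from="1" to="#arrayLen(arguments.a)#" index="i">
        <cfif arguments.a[i] neq arguments.b[i]>
            <cfreturn false />
        </cfif>
    </cfloop>
    <cfreturn true />
</cffunction>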
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Should you run one or multiple applications per tomcat cluster? Currently I am setting up an application that can deploy other web apps to Tomcat 6 clusters. It is set up right now to have a one to one relationship between deployed web application and a cluster. My current reasoning for this is so that I can change the JVM args of the Tomcat server without disrupting other applications and so that the memory usage of the single application will not conflict with other applications.
The question is, what is considered best practice in terms of tomcat instance clusters? Should you only have one application running per cluster or multiple applications like in a single tomcat instance environment? Or does this depend on the size of your application?
Thank you
A: I've learned from experience that having only one app per Tomcat instance has a very significant advantage: when a Tomcat instance dies, you don't have to dig through logs (or guess) which app is to blame.
A: Divide your services by resource requirements at the very least. For example, if you are running a photo album site, separate your image download server from your image upload server. The download server will have many more requests, and because most people have a lower upload speed the upload server will have longer lasting connections. Similarly, and image manipulation server would probably have few connections, but it should fork off threads to perform the CPU intensive image manipulation tasks asynchronously from the web user interface.
If you have the hardware to do it, it's a lot easier to manage many separate tomcat instances with one application each than a few instances with many applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: MS hotfix delayed delivery I just requested a hotfix from support.microsoft.com and put in my email address, but I haven't received the email yet. The splash page I got after I requested the hotfix said:
Hotfix Confirmation
We will send these hotfixes to the following e-mail address:
(my correct email address)
Usually, our hotfix e-mail is delivered to you within five minutes. However, sometimes unforeseen issues in e-mail delivery systems may cause delays.
We will send the e-mail from the “[email protected]” e-mail account. If you use an e-mail filter or a SPAM blocker, we recommend that you add “[email protected]” or the “microsoft.com” domain to your safe senders list. (The safe senders list is also known as a whitelist or an approved senders list.) This will help prevent our e-mail from going into your junk e-mail folder or being automatically deleted.
I'm sure that the email is not getting caught in a spam catcher.
How long does it normally take to get one of these hotfixes? Am I waiting for some human to approve it, or something? Should I just give up and try to get the file I need some other way?
(Update: Replaced "[email protected]" with "(my correct email address)" to resolve Martín Marconcini's ambiguity.)
A: It usually arrives within the first hour. But the fact that it reads [email protected] could be either because you put it there to protect your privacy (in which case forget about this) or because the system didn't catch your email and they sent it to [email protected].
If the email address was ok and you didn't get it, somehow it bounced or it won't arrive. I'd suggest you contact them again providing an alternate email (gmail or such) to make sure that you don't experience any problems.
Last time I received a hotfix it took them 10 minutes.
Good luck with that!
A: Took about a day for me when I requested one so I suspect some sort of manual/semi-automated process has to complete before you get the e-mail.
Give it a day before you start bugging them ;)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Asp.Net Routing: How do I ignore multiple wildcard routes? I'd like to ignore multiple wildcard routes. With asp.net mvc preview 4, they ship with:
RouteTable.Routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
I'd also like to add something like:
RouteTable.Routes.IgnoreRoute("Content/{*pathInfo}");
but that seems to break some of the helpers that generate urls in my program. Thoughts?
A: This can be quite tricky.
When attempting to figure out how to map route data into a route, the system currently searches top-down until it finds something where all the required information is provided, and then stuffs everything else into query parameters.
Since the required information for the route "Content/{*pathInfo}" is entirely satisfied always (no required data at all in this route), and it's near the top of the route list, then all your attempts to map to unnamed routes will match this pattern, and all your URLs will be based on this ("Content?action=foo&controller=bar")
Unfortunately, there's no way around this with action routes. If you use named routes (e.g., choosing Html.RouteLink instead of Html.ActionLink), then you can specify the name of the route to match. It's less convenient, but more precise.
IMO, complex routes make the action-routing system basically fall over. In applications where I have something other than the default routes, I almost always end up reverting to named-route based URL generation to ensure I'm always getting the right route.
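For example, assuming a route was registered under the name "Default" (the route name and values here are illustrative):
<%-- link generation pinned to a specific named route --%>
<%= Html.RouteLink("Home", "Default", new { controller = "Home", action = "Index" }) %>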
A: There are two possible solutions here.
*
*Add a constraint to the ignore route to make sure that only requests that should be ignored would match that route. Kinda kludgy, but it should work.
RouteTable.Routes.IgnoreRoute("{folder}/{*pathInfo}", new {folder="content"});
*What is in your content directory? By default, Routing does not route files that exist on disk (actually checks the VirtualPathProvider). So if you are putting static content in the Content directory, you might not need the ignore route.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Why would getcwd() return a different directory than a local pwd? I'm doing some PHP stuff on an Ubuntu server.
The path I'm working in is /mnt/dev-windows-data/Staging/mbiek/test_list but the PHP call getcwd() is returning /mnt/dev-windows/Staging/mbiek/test_list (notice how it's dev-windows instead of dev-windows-data).
There aren't any symbolic links anywhere.
Are there any other causes for getcwd() returning a different path from a local pwd call?
Edit
I figured it out. The DOCUMENT_ROOT in PHP is set to /mnt/dev-windows which throws everything off.
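For anyone hitting the same thing, a quick PHP diagnostic sketch (these three values can legitimately disagree):
<?php
echo getcwd() . "\n";                   // working directory of the running script
echo $_SERVER['DOCUMENT_ROOT'] . "\n";  // what the server config claims the root is
echo dirname(__FILE__) . "\n";          // directory the current file actually lives in
?>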
A: Which file are you calling getcwd() in, and is that file included into the one you are running (e.g. running index.php, which includes startup.php, which contains getcwd())?
Is the file you are running in /dev-windows/ or /dev-windows-data/? It works on the file you are actually running.
Here's an example of my current project:
index.php
<?php
require_once('./includes/construct.php');
//snip
?>
includes/construct.php
<?php
//snip
(!defined('DIR')) ? define('DIR', getcwd()) : NULL;
require_once(DIR . '/includes/functions.php');
//snip
?>
A: @Ross
I thought that getcwd() was returning a filesystem path rather than a relative url path.
Either way, the fact remains that the path /mnt/dev-windows doesn't exist while /mnt/dev-windows-data does.
A: @Mark
Well that's just plain weird! What's your include_path - that could be messing things around. I've personally ditched it in favour of constants as it's just so temperamental (or I've never learned how to do it justice).
A: @Ross
I figured it out and updated the OP with the solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Asp.Net MVC: How do I enable dashes in my urls? I'd like to have dashes separate words in my URLs. So instead of:
/MyController/MyAction
I'd like:
/My-Controller/My-Action
Is this possible?
A: Here's what I did using areas in ASP.NET MVC 5 and it worked liked a charm. I didn't have to rename my views, either.
In RouteConfig.cs, do this:
public static void RegisterRoutes(RouteCollection routes)
{
// add these to enable attribute routing and lowercase urls, if desired
routes.MapMvcAttributeRoutes();
routes.LowercaseUrls = true;
// routes.MapRoute...
}
In your controller, add this before your class definition:
[RouteArea("SampleArea", AreaPrefix = "sample-area")]
[Route("{action}")]
public class SampleAreaController: Controller
{
// ...
[Route("my-action")]
public ViewResult MyAction()
{
// do something useful
}
}
The URL that shows up in the browser if testing on local machine is: localhost/sample-area/my-action. You don't need to rename your view files or anything. I was quite happy with the end result.
After routing attributes are enabled you can delete any area registration files you have such as SampleAreaRegistration.cs.
This article helped me come to this conclusion. I hope it is useful to you.
A: Asp.Net MVC 5 will support attribute routing, allowing more explicit control over route names. Sample usage will look like:
[RoutePrefix("dogs-and-cats")]
public class DogsAndCatsController : Controller
{
[HttpGet("living-together")]
public ViewResult LivingTogether() { ... }
[HttpPost("mass-hysteria")]
public ViewResult MassHysteria() { }
}
To get this behavior for projects using Asp.Net MVC prior to v5, similar functionality can be found with the AttributeRouting project (also available as a nuget). In fact, Microsoft reached out to the author of AttributeRouting to help them with their implementation for MVC 5.
A: You could create a custom route handler as shown in this blog:
http://blog.didsburydesign.com/2010/02/how-to-allow-hyphens-in-urls-using-asp-net-mvc-2/
public class HyphenatedRouteHandler : MvcRouteHandler
{
    protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        requestContext.RouteData.Values["controller"] = requestContext.RouteData.Values["controller"].ToString().Replace("-", "_");
        requestContext.RouteData.Values["action"] = requestContext.RouteData.Values["action"].ToString().Replace("-", "_");
        return base.GetHttpHandler(requestContext);
    }
}
...and the new route:
routes.Add(
    new Route("{controller}/{action}/{id}",
        new RouteValueDictionary(
            new { controller = "Default", action = "Index", id = "" }),
        new HyphenatedRouteHandler())
);
A very similar question was asked here: ASP.net MVC support for URL's with hyphens
A: You can use the ActionName attribute like so:
[ActionName("My-Action")]
public ActionResult MyAction() {
return View();
}
Note that you will then need to call your View file "My-Action.cshtml" (or appropriate extension). You will also need to reference "my-action" in any Html.ActionLink methods.
There isn't such a simple solution for controllers.
Edit: Update for MVC5
Enable the routes globally:
public static void RegisterRoutes(RouteCollection routes)
{
routes.MapMvcAttributeRoutes();
// routes.MapRoute...
}
Now with MVC5, Attribute Routing has been absorbed into the project. You can now use:
[Route("My-Action")]
On Action Methods.
For controllers, you can apply a RoutePrefix attribute which will be applied to all action methods in that controller:
[RoutePrefix("my-controller")]
One of the benefits of using RoutePrefix is URL parameters will also be passed down to any action methods.
[RoutePrefix("clients/{clientId:int}")]
public class ClientsController : Controller .....
Snip..
[Route("edit-client")]
public ActionResult Edit(int clientId) // will match /clients/123/edit-client
A: I've developed an open source NuGet library for this problem which implicitly converts EveryMvc/Url to every-mvc/url.
Uppercase urls are problematic because cookie paths are case-sensitive, most of the internet is actually case-sensitive while Microsoft technologies treats urls as case-insensitive. (More on my blog post)
NuGet Package: https://www.nuget.org/packages/LowercaseDashedRoute/
To install it, simply open the NuGet window in the Visual Studio by right clicking the Project and selecting NuGet Package Manager, and on the "Online" tab type "Lowercase Dashed Route", and it should pop up.
Alternatively, you can run this code in the Package Manager Console:
Install-Package LowercaseDashedRoute
After that you should open App_Start/RouteConfig.cs and comment out existing route.MapRoute(...) call and add this instead:
routes.Add(new LowercaseDashedRoute("{controller}/{action}/{id}",
new RouteValueDictionary(
new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
new DashedRouteHandler()
)
);
That's it. All the urls are lowercase, dashed, and converted implicitly without you doing anything more.
Open Source Project Url: https://github.com/AtaS/lowercase-dashed-route
A: You could write a custom route that derives from the Route class and overrides GetRouteData to strip dashes, but when you call the APIs to generate a URL, you'll have to remember to include the dashes for the action name and controller name.
That shouldn't be too hard.
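A rough, untested sketch of that idea (incoming dashes are stripped before dispatch; URL generation would still need the dashed names):
public class DashedRoute : Route
{
    public DashedRoute(string url, object defaults)
        : base(url, new RouteValueDictionary(defaults), new MvcRouteHandler()) { }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        var data = base.GetRouteData(httpContext);
        if (data != null)
        {
            // "My-Controller"/"My-Action" -> "MyController"/"MyAction"
            data.Values["controller"] = data.Values["controller"].ToString().Replace("-", "");
            data.Values["action"] = data.Values["action"].ToString().Replace("-", "");
        }
        return data;
    }
}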
A: You can define a specific route such as:
routes.MapRoute(
"TandC", // Route controllerName
"CommonPath/{controller}/Terms-and-Conditions", // URL with parameters
new {
controller = "Home",
action = "Terms_and_Conditions"
} // Parameter defaults
);
But this route has to be registered BEFORE your default route.
A: If you have access to the IIS URL Rewrite module ( http://blogs.iis.net/ruslany/archive/2009/04/08/10-url-rewriting-tips-and-tricks.aspx ), you can simply rewrite the URLs.
Requests to /my-controller/my-action can be rewritten to /mycontroller/myaction and then there is no need to write custom handlers or anything else. Visitors get pretty urls and you get ones MVC can understand.
Here's an example for one controller and action, but you could modify this to be a more generic solution:
<rewrite>
<rules>
<rule name="Dashes, damnit">
<match url="^my-controller(.*)" />
<action type="Rewrite" url="MyController/Index{R:1}" />
</rule>
</rules>
</rewrite>
The possible downside to this is you'll have to switch your project to use IIS Express or IIS for rewrites to work during development.
A: I'm still pretty new to MVC, so take it with a grain of salt. It's not an elegant, catch-all solution but did the trick for me in MVC4:
routes.MapRoute(
name: "ControllerName",
url: "Controller-Name/{action}/{id}",
defaults: new { controller = "ControllerName", action = "Index", id = UrlParameter.Optional }
);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
} |
Q: Where can I find a graphical command shell? Terminals and shells are very powerful but can be complicated to learn, especially to get the best out of them. Does anyone know of a more GUI based command shell that helps a user or displays answers in a more friendly way? I'm aware of IPython, but even the syntax of that is somewhat convoluted, although it's a step in the right direction.
Further to this, results could be presented graphically, e.g. wouldn't it be nice to be able to pipe file sizes into a pie chart?
A: Hotwire is an attempt to combine the power of the traditional command line interface with GUI elements. So it has a GUI side, and tries to be helpful in suggesting commands, and in showing you possible matches from your history. (While there are keyboard shortcuts to do this in bash and other shells, you have to know them ...)
You can use all your common system commands, but a number of key ones have new versions by default which use an object pipeline, and are displayed with a nice GUI view. In particular ls (aka dir) shows lists files and shows them in columns. You can sort by clicking on the column headers, double click on files to open, or double click on directories to move to that directory. The proc command allows you to right click on a process and one of the options is to kill it.
The object pipeline works in a similar way to Microsoft Powershell, allowing commands in the pipe to access object properties directly rather than having to do text processing to extract it.
Hotwire is cross platform (Linux, BSD, Windows, Mac), though it is at an early stage of development. To learn more, install (click on the link for your platform) and work through the simple getting started page.
If you don't like hotwire, you could also look at the list of related projects and ideas maintained on the hotwire wiki.
A: fish is a Unix shell that focuses on user-friendliness, such as by providing colored highlighting and extensive tab completion.
For a different kind of blend of textual and graphical interface, there's Quicksilver, as well as similar/inspired tools like Launchy, GNOME Do and ENSO.
A: GUI-based command shell seems like an oxymoron to me.
The key-word here is Graphical.
If I want a GUI, I want a full-featured GUI. But if I want raw performance, I want a command line.
A: Is this for Python in particular, or are you just interested in any command shell that has a GUI interface?
If the idea of piping file sizes into a pie chart interests you, you might try PowerGUI, a GUI layer on Microsoft's PowerShell command shell. PowerShell also lets you pipe data from commands into XML, CSV, and other formats that are understood by GUI programs.
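For instance, a one-liner along those lines, piping process data out as CSV for a GUI tool to pick up:
Get-Process | Select-Object Name, WorkingSet | Export-Csv processes.csv -NoTypeInformation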
A:
GUI-based command shell seems like an oxymoron to me.
Not really? A command shell is just an encapsulated environment in which to execute commands. Why can't they have GUI extensions? We are in the 21st century! :)
Check out http://hotwire-shell.org/
This is along the lines of what I was thinking. It's a shame it uses PyGTK, I'd have preferred PyQT (perhaps a licensing issue?). There look to be some interesting related links from the project as well.
If the idea of piping file sizes into a pie chart interests you, you might try PowerGUI, a GUI layer on [...]
PowerGUI looks like a hobby project I've been working on that organises regularly used tasks. It looks like it organises frequent jobs and formats the output for you. The formatting I see as the end result of the data flow. But it would be nice to be able to tinker with data and then continue to use it.
PowerShell as a command shell is very forgiving for new users and is easy to learn. There is an add-on product (it is a commercial product) called PowerGadgets that would let you pipe file sizes into a pie chart or other types of displays
PowerGadgets looks very interesting. It would be interesting to have things like system monitors so that you could, say, read the CPU usage per second and pipe it into a graph.
Is this for Python in particular, or are you just interested in any command shell that has a GUI interface?
Any really, currently, but I like the idea of cross platform, easy to edit, no compiler setup. I use Windows at work and Windows/Linux (Ubuntu)/OSX at home. Python is just an easy solution, and for writing stuff like this is has a lot of libraries already.
Thanks for all the links. Keep them coming. :)
A: I'm not sure whether you're asking for a shell as in bash/csh, or a shell as in ipython. If it's the later, then I'd recommend looking at Reinteract. While it's still very alpha, it's already a great tool for rapid prototyping in python, and allows embedding of plots, widgets, etc.
A: I'm not exactly sure what you're asking for. You can either have a GUI or a command line. What do you need from a graphical command shell that you couldn't get from a straight GUI?
Also, if you want graphical information about file sizes there are a few applications that do that. One example is WinDirStat.
A: Also not related to Python, but Ubiquity (a firefox extension) is a graphical command-line-like tool for the web, with a similar spirit to Quicksilver/Launchy/GnomeDo.
A: I know that Automator in Mac OS X is not a shell but it is the best graphical tool I have ever used to do batch tasks. I think it is worth mentioning here as even I (self-titled as a power user) use it from time to time to rename files or other routines. Although these could be done in a few lines of shell script, the Automator's graphical interface makes me feel like I am not working and it just works.
A: Check out http://hotwire-shell.org/
A: PowerShell V2 is developing a graphical command shell, but I don't think that is what you are looking for.
PowerShell as a command shell is very forgiving for new users and is easy to learn. There is an add-on product (it is a commercial product) called PowerGadgets that would let you pipe file sizes into a pie chart or other types of displays. Information about that can be found here.
As for ease of use, PowerShell command follow a Verb-Noun pattern (along with aliases for ease of use from the command line) and is very discoverable. Check out some screencasts I did on using PowerShell at PowerShell Basics.
A: I found POSH, a GUI for MS PowerShell. This is pretty much what I intended. You have a command-line backend with a WPF GUI frontend. You can pipe results from command to the next and show graphical output.
A: Maxima provides a mathematical shell (screens) . It is nice that you type in a C-like syntax and receive graphically formatted output.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Is there an HTML opposite to <noscript>? Is there a tag in HTML that will only display its content if JavaScript is enabled? I know <noscript> works the opposite way around, displaying its HTML content when JavaScript is turned off. But I would like to only display a form on a site if JavaScript is available, telling users why they can't use the form if they don't have it.
The only way I know how to do this is with the document.write(); method in a script tag, and it seems a bit messy for large amounts of HTML.
A: I don't really agree with all the answers here about embedding the HTML beforehand and hiding it with CSS until it is again shown with JS. Even w/o JavaScript enabled, that node still exists in the DOM. True, most browsers (even accessibility browsers) will ignore it, but it still exists and there may be odd times when that comes back to bite you.
My preferred method would be to use jQuery to generate the content. If it will be a lot of content, then you can save it as an HTML fragment (just the HTML you will want to show and none of the html, body, head, etc. tags) then use jQuery's ajax functions to load it into the full page.
test.html
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
$.get('_test.html', function(html) {
$('p:first').after(html);
});
});
</script>
</head>
<body>
<p>This is content at the top of the page.</p>
<p>This is content at the bottom of the page.</p>
</body>
</html>
_test.html
<p>This is from an HTML fragment document</p>
result
<p>This is content at the top of the page.</p>
<p>This is from an HTML fragment document</p>
<p>This is content at the bottom of the page.</p>
A: First of all, always separate content, markup and behaviour!
Now, if you're using the jQuery library (you really should, it makes JavaScript a lot easier), the following code should do:
$(document).ready(function() {
$("body").addClass("js");
});
This will give you an additional class on the body when JS is enabled.
Now, in CSS, you can hide the area when the JS class is not available, and show the area when JS is available.
Alternatively, you can add no-js as the the default class to your body tag, and use this code:
$(document).ready(function() {
$("body").removeClass("no-js");
$("body").addClass("js");
});
Remember that it is still displayed if CSS is disabled.
A: You could have an invisible div that gets shown via JavaScript when the page loads.
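A minimal sketch of that (the id is arbitrary):
<div id="jsContent" style="display:none">Only visible when JavaScript runs.</div>
<script type="text/javascript">
    document.getElementById('jsContent').style.display = 'block';
</script>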
A: I have a simple and flexible solution, somewhat similar to Will's (but with the added benefit of being valid html):
Give the body element a class of "jsOff". Remove (or replace) this with JavaScript. Have CSS to hide any elements with a class of "jsOnly" with a parent element with a class of "jsOff".
This means that if JavaScript is enabled, the "jsOff" class will be removed from the body. This will mean that elements with a class of "jsOnly" will not have a parent with a class of "jsOff" and so will not match the CSS selector that hides them, thus they will be shown.
If JavaScript is disabled, the "jsOff" class will not be removed from the body. Elements with "jsOnly" will have a parent with "jsOff" and so will match the CSS selector that hides them, thus they will be hidden.
Here's the code:
<html>
<head>
<!-- put this in a separate stylesheet -->
<style type="text/css">
.jsOff .jsOnly{
display:none;
}
</style>
</head>
<body class="jsOff">
<script type="text/javascript">
document.body.className = document.body.className.replace('jsOff','jsOn');
</script>
<noscript><p>Please enable JavaScript and then refresh the page.</p></noscript>
<p class="jsOnly">I am only shown if JS is enabled</p>
</body>
</html>
It's valid html. It is simple. It's flexible.
Just add the "jsOnly" class to any element that you want to only display when JS is enabled.
Please note that the JavaScript that removes the "jsOff" class should be executed as early as possible inside the body tag. It cannot be executed earlier, as the body tag will not be there yet. It should not be executed later as it will mean that elements with the "jsOnly" class may not be visible right away (as they will match the CSS selector that hides them until the "jsOff" class is removed from the body element).
This could also provide a mechanism for js-only styling (e.g. .jsOn .someClass{}) and no-js-only styling (e.g. .jsOff .someOtherClass{}). You could use it to provide an alternative to <noscript>:
.jsOn .noJsOnly{
display:none;
}
A: Easiest way I can think of:
<html>
<head>
<noscript><style> .jsonly { display: none } </style></noscript>
</head>
<body>
<p class="jsonly">You are a JavaScript User!</p>
</body>
</html>
No document.write, no scripts, pure CSS.
A: In the decade since this question was asked, the HIDDEN attribute was added to HTML. It allows one to directly hide elements without using CSS. As with CSS-based solutions, the element must be un-hidden by script:
<form hidden id=f>
Javascript is on, form is visible.<br>
<button>Click Me</button>
</form>
<script>
document.getElementById('f').hidden=false;
</script>
<noscript>
Javascript is off, but form is hidden, even when CSS is disabled.
</noscript>
A: You could also use Javascript to load content from another source file and output that. That may be a bit more black box-is than you're looking for though.
A: Here's an example for the hidden div way:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title></title>
<style>
*[data-when-js-is-on] {
display: none;
}
</style>
<script>
document.getElementsByTagName("style")[0].textContent = "";
</script>
</head>
<body>
<div data-when-js-is-on>
JS is on.
</div>
</body>
</html>
(You'd probably have to tweak it for poor IE, but you get the idea.)
A: My solution
.css:
.js {
display: none;
}
.js:
$(document).ready(function() {
$(".js").css('display', 'inline');
$(".no-js").css('display', 'none');
});
.html:
<span class="js">Javascript is enabled</span>
<span class="no-js">Javascript is disabled</span>
A: Alex's article springs to mind here, however it's only applicable if you're using ASP.NET - it could be emulated in JavaScript however but again you'd have to use document.write();
A: You could set the visibility of a paragraph|div to 'hidden'.
Then in the 'onload' function, you could set the visibility to 'visible'.
Something like:
<body onload="document.getElementById('rec').style.visibility='visible';">
<p style="visibility: hidden" id="rec">This text is hidden unless JavaScript is available.</p>
A: There isn't a tag for that. You would need to use javascript to show the text.
Some people already suggested using JS to dynamically set CSS visible. You could also dynamically generate the text with document.getElementById(id).innerHTML = "My Content" or dynamically creating the nodes, but the CSS hack is probably the most straightforward to read.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
} |
Q: How to store Application Messages for a .NET Website I am looking for a method of storing Application Messages, such as
*
*"You have logged in successfully"
*"An error has occurred, please call the helpdesk on x100"
*"You do not have the authority to reset all system passwords" etc
So that "when" the users decide they don't like the wording of messages I don't have to change the source code, recompile then redeploy - instead I just change the message store.
I really like the way that I can easily access strings in the web.config using keys and values.
ConfigurationManager.AppSettings("LOGINSUCCESS");
However as I could have a large number of application messages I didn't want to use the web.config directly. I was going to add a 2nd web config file and use that but of course you can only have one per virtual directory.
Does anyone have any suggestions on how to do this without writing much custom code?
A: In your Web.config, under appSettings, change it to:
<appSettings file="StringKeys.config">
Then, create your StringKeys.config file and have all your keys in it.
You can still use the AppSettings area in the main web.config for any real application related keys.
A: *
*Put the strings in an xml file and use a filewatcher to check for updates to the file (a sketch follows after this list)
*Put the strings in a database, cache them and set a reasonable expiration policy
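For the filewatcher option, a minimal sketch (the file name "Messages.xml" and the ReloadMessages method are hypothetical; wire them to however you cache the strings):
using System;
using System.IO;

FileSystemWatcher watcher = new FileSystemWatcher(AppDomain.CurrentDomain.BaseDirectory, "Messages.xml");
watcher.Changed += delegate { ReloadMessages(); }; // re-read the XML and refresh the cached strings
watcher.EnableRaisingEvents = true;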
A: You can use the ResourceManager class. See the "ResourceManager and ASP.NET" article at http://msdn.microsoft.com/en-us/library/aa309419(VS.71).aspx
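For example, a minimal sketch (the base name "MyWebApp.Messages" stands in for a hypothetical Messages.resx compiled into your assembly):
using System.Reflection;
using System.Resources;

// Load the message resources embedded in the current assembly.
ResourceManager rm = new ResourceManager("MyWebApp.Messages", Assembly.GetExecutingAssembly());

// Look up a message by key, much like AppSettings, but localizable per culture.
string loginSuccess = rm.GetString("LOGINSUCCESS");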
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: OCR with the Tesseract interface How do you OCR an tiff file using Tesseract's interface in c#?
Currently I only know how to do it using the executable.
A: C# program launches tesseract.exe and then reads the output file of tesseract.exe.
Process process = Process.Start("tesseract.exe", "input.tif out"); // usage: tesseract.exe <image> <output base>; "input.tif" is illustrative
process.WaitForExit();
if (process.ExitCode == 0)
{
string content = File.ReadAllText("out.txt");
}
A: I discovered today that EMGU now includes a Tesseract wrapper. While the number of unmanaged dlls of the opencv lib might seem a little daunting, it's nothing that a quick copy to your output directory won't cure. From there the actual OCR process is as simple as three lines:
Tesseract ocr = new Tesseract(Path.Combine(Environment.CurrentDirectory, "tessdata"), "eng", Tesseract.OcrEngineMode.OEM_TESSERACT_ONLY);
ocr.Recognize(clip);
optOCR.Text = ocr.GetText();
"robomatics" put together a very nice youtube video that demonstrates a simple but effective solution.
A: Take a look at tessnet
A: The source code seemed to be geared toward an executable; you might need to rewire things a bit so it builds as a DLL instead. I don't have much experience with Visual C++, but I think it shouldn't be too hard with some research. My guess is that someone might have made a library version already; you should try Google.
Once you have the tesseract-ocr code in a DLL file, you can then import the file into your C# project via Visual Studio and have it create wrapper classes and do all the marshaling for you. If you can't import it, then DllImport will let you call the functions in the DLL from C# code.
Then you can take a look at the original executable to find clues on what functions to call to properly OCR a tiff image.
A: Disclaimer: I work for Atalasoft
Our OCR module supports Tesseract and if that proves to not be good enough, you can upgrade to a better engine and just change one line of code (we provide a common interface to multiple OCR engines).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Install-base of Java JRE? Is there an online resource somewhere that maintains statistics on the install-base of Java including JRE version information? If not, is there any recent report that has some numbers?
I'm particularly interested in Windows users, but all other OS's are welcome too.
A: I'm not aware of anyone who keeps track of this publicly on a regular basis (unlike Adobe, who pushes it every chance they get). The closest that I could come was this article from last November. Based upon his site, this data could be skewed a bit, but I think we'd see fairly similar numbers as well.
A: There is a very rough percentage of browsers with some JRE available at The Counter, though I wouldn't trust it. Sun has a few useful stats from 2007, but their stats from 2008 are much less detailed. They suggest that in 2007 "92%...of JRE installs...are now Java SE 6", but who knows what highly technical site they surveyed to get that number.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Why do I receive a q[num] error when aborting a jQuery queue pipeline? When creating and executing a ajax request queue with $.manageAjax, I call ajaxManager.abort();, to abort the entire queue due to error, at which time I get an error stating: q[num] has no properties (jquery.ajaxmanager.js line 75)
Here is the calling code:
var ajaxManager = $.manageAjax({manageType:'sync', maxReq:0});
// setup code calling ajaxManager.add(...)
// in success callback of first request
ajaxManager.abort(); <-- causes error in jquery.ajaxManager.js
There are 4 requests in the queue, this is being called in the success of the first request, if certain criteria is met, the queue needs to be aborted.
Any ideas?
A: It looks like you've got fewer items in q than you were expecting when you started iterating. Your script may be trying to access q[q.length], i.e. the element after the last element.
Could it be that your successful request has been popped from the queue, and you have a race condition? Are you trying to abort a request that has already completed its life cycle? Alternatively, have you made a silly mistake as people sometimes do, and got your loop termination condition wrong?
Just a few thoughts, I hope they help.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Fixed page layout in IE6 Header, footer and sidebars have fixed position. In the center a content area with both scroll bars. No outer scroll bars on the browser. I have a layout that works in IE7 and FF. I need to add IE6 support. How can I make this work?
Here is an approximation of my current CSS.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>Layout</title>
<style>
* {
margin: 0px;
padding: 0px;
border: 0px;
}
.sample-border {
border: 1px solid black;
}
#header {
position: absolute;
top: 0px;
left: 0px;
right: 0px;
height: 60px;
}
#left-sidebar {
position: absolute;
top: 65px;
left: 0px;
width: 220px;
bottom: 110px;
}
#right-sidebar {
position: absolute;
top: 65px;
right: 0px;
width: 200px;
bottom: 110px;
}
#footer {
position: absolute;
bottom: 0px;
left: 0px;
right: 0px;
height: 105px;
}
@media screen {
#content {
position: absolute;
top: 65px;
left: 225px;
bottom: 110px;
right: 205px;
overflow: auto;
}
body #left-sidebar,
body #right-sidebar,
body #header,
body #footer,
body #content {
position: fixed;
}
}
</style>
</head>
<body>
<div id="header" class="sample-border"></div>
<div id="left-sidebar" class="sample-border"></div>
<div id="right-sidebar" class="sample-border"></div>
<div id="content" class="sample-border"><img src="/broken.gif" style="display: block; width: 3000px; height: 3000px;" /></div>
<div id="footer" class="sample-border"></div>
</body>
</html>
A: Might be overkill for your project, but Dean Edwards' IE7 javascript adds support for fixed positioning to IE6.
A: Add the following code to the <head>
<!--[if lte IE 6]>
<style type="text/css">
html, body {
height: 100%;
overflow: auto;
}
.ie6fixed {
position: absolute;
}
</style>
<![endif]-->
Add the ie6fixed CSS class to whatever you want to be position: fixed;
A: Try IE7.js. Should fix your problem without having to make any modifications.
Link: IE7.js
A: These answers were helpful and they did let me add a limited form of fixed positioning to IE6, however none of these fix the bug that breaks my layout in IE6 if I specify both a top and a bottom css property for my sidebars (which is the behavior I need).
Since top and bottom can't be specified, I used top and height. The height property turned out to be very necessary. I used javascript to recalculate the height when the page loads and for any resize.
Below is the code I added to my test case to get it to work. This could be much cleaner with jQuery.
<!--[if lt IE 7]>
<style>
body>div.ie6-autoheight {
height: 455px;
}
body>div.ie6-autowidth {
width: 530px;
}
</style>
<script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE7.js" type="text/javascript"></script>
<script type="text/javascript">
function fixLayout() {
if (document.documentElement.offsetWidth) {
var w = document.documentElement.offsetWidth - 450;
var h = document.documentElement.offsetHeight - 175;
var l = document.getElementById('left-sidebar');
var r = document.getElementById('right-sidebar');
var c = document.getElementById('content');
c.style.width = w;
c.style.height = h;
l.style.height = h;
r.style.height = h;
}
}
window.onresize = fixLayout;
fixLayout();
</script>
<![endif]-->
A: check out the pure css hacks below... some require forcing it into quirks mode (I think that's the most robust) but all work really well:
http://ryanfait.com/resources/fixed-positioning-in-internet-explorer/
http://tagsoup.com/cookbook/css/fixed/
I've used this to great effect, hope it helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: When must I set a variable to "Nothing" in VB6? In one of my VB6 forms, I create several other Form objects and store them in member variables.
Private m_frm1 as MyForm
Private m_frm2 as MyForm
' Later...
Set m_frm1 = New MyForm
Set m_frm2 = New MyForm
I notice that I'm leaking memory whenever this (parent) form is created and destroyed. Is it necessary for me to assign these member variables to Nothing in Form_Unload()?
In general, when is that required?
SOLVED: This particular memory leak was fixed when I did an Unload on the forms in question, not when I set the form to Nothing. I managed to remove a few other memory leaks by explicitly setting some instances of Class Modules to Nothing, as well.
A: Actually, VB6 implements RAII just like C++ meaning that locally declared references automatically get set to Nothing at the end of a block. Similarly, it should automatically reset member class variables after executing Class_Terminate. However, there have been several reports that this is not done reliably. I don't remember any rigorous test but it has always been best practice to reset member variables manually.
A: @Matt Dillard - Did setting these to nothing fix your memory leak?
VB6 doesn't have a formal garbage collector, more along the lines of what @Konrad Rudolph said.
Actually calling unload on your forms seems to me to be the best way to ensure that the main form is cleaned up and that each subform cleans up their actions.
I tested this with a blank project and two blank forms.
Private Sub Form_Load()
Dim frm As Form2
Set frm = New Form2
frm.Show
Set frm = Nothing
End Sub
After running both forms are left visible. setting frm to nothing did well... nothing.
After setting frm to Nothing, the only handle open to this form is via the Forms collection.
Unload Forms(1)
Am I seeing the problem correctly?
-- Josh
A: Objects in VB have reference counting. This means that an object keeps a count of how many other object variables hold a reference to it. When there are no references to the object, the object is garbage collected (eventually). This process is part of the COM specification.
Usually, when a locally instantiated object goes out of scope (i.e. exits the sub), its reference count goes down by one, in other words the variable referencing the object is destroyed. So in most instances you won't need to explicitly set an object equal to Nothing on exiting a Sub.
In all other instances you must explicitly set an object variable to Nothing, in order to decrease its reference count (by one). Setting an object variable to Nothing, will not necessarily destroy the object, you must set ALL references to Nothing. This problem can become particularly acute with recursive data structures.
Another gotcha, is when using the New keyword in an object variable declaration. An object is only created on first use, not at the point where the New keyword is used. Using the New keyword in the declaration will re-create the object on first use every time its reference count goes to zero. So setting an object to Nothing may destroy it, but the object will automatically be recreated if referenced again. Ideally you should not declare using the New keyword, but by using the New operator which doesn't have this resurrection behaviour.
A: @Martin
VB6 had a "With/End With" statement that worked "like" the Using() statement in C#.NET. And of course, the less global things you have, the better for you.
With/End With does not work like the Using statement; it doesn't "Dispose" at the end of the statement.
With/End With works in VB 6 just like it does in VB.Net, it is basically a way to shortcut object properties/methods call. e.g.
With aCustomer
.FirstName = "John"
.LastName = "Smith"
End With
A: Strictly speaking never, but it gives the garbage collector a strong hint to clean things up.
As a rule: do it every time you're done with an object that you've created.
A: Setting a VB6 reference to Nothing, decreases the refecences count that VB has for that object. If and only if the count is zero, then the object will be destroyed.
Don't think that just because you set to Nothing it will be "garbage collected" like in .NET
VB6 uses a reference counter.
You are encouraged to set to "Nothing" instantiated objects that make reference to C/C++ code and stuff like that. It's been a long time since I touched VB6, but I remember setting files and resources to nothing.
In either case it won't hurt (if it was Nothing already), but that doesn't mean that the object will be destroyed.
VB6 had a "With/End With" statement that worked "like" the Using() statement in C#.NET. And of course, the less global things you have, the better for you.
Remember that, in either case, sometimes creating a large object is more expensive than keeping a reference alive and reusing it.
A: I had a problem similar to this a while back. I seem to think it would also prevent the app from closing, but it may be applicable here.
I pulled up the old code and it looks something like:
Dim y As Long
For y = Forms.Count - 1 To 0 Step -1 ' iterate backwards: Unload shrinks the Forms collection
    Unload Forms(y)
Next
It may be safer to Unload the m_frm1. and not just set it to nothing.
A: One important point that hasn't yet been mentioned here is that setting an object reference to Nothing will cause the object's destructor to run (Class_Terminate if the class was written in VB) if there are no other references to the object (reference count is zero).
In some cases, especially when using a RAII pattern, the termination code can execute code that can raise an error. I believe this is the case with some of the ADODB classes. Another example is a class that encapsulates file i/o - the code in Class_Terminate might attempt to flush and close the file if it's still open, which can raise an error.
So it's important to be aware that setting an object reference to Nothing can raise an error, and deal with it accordingly (exactly how will depend on your application - for example you might ignore such errors by inserting "On Error Resume Next" just before "Set ... = Nothing").
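For instance, a minimal sketch of that workaround (m_frm1 is the member variable from the question):
On Error Resume Next   ' swallow any error raised by the object's Class_Terminate
Set m_frm1 = Nothing
On Error GoTo 0        ' restore normal error handling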
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: What C++ pitfalls should I avoid? I remember first learning about vectors in the STL and after some time, I wanted to use a vector of bools for one of my projects. After seeing some strange behavior and doing some research, I learned that a vector of bools is not really a vector of bools.
Are there any other common pitfalls to avoid in C++?
A: The web page C++ Pitfalls by Scott Wheeler covers some of the main C++ pitfalls.
A: Not really a specific tip, but a general guideline: check your sources. C++ is an old language, and it has changed a lot over the years. Best practices have changed with it, but unfortunately there's still a lot of old information out there. There have been some very good book recommendations on here - I can second buying every one of Scott Meyers C++ books. Become familiar with Boost and with the coding styles used in Boost - the people involved with that project are on the cutting edge of C++ design.
Do not reinvent the wheel. Become familiar with the STL and Boost, and use their facilities whenever possible rolling your own. In particular, use STL strings and collections unless you have a very, very good reason not to. Get to know auto_ptr and the Boost smart pointers library very well, understand under which circumstances each type of smart pointer is intended to be used, and then use smart pointers everywhere you might otherwise have used raw pointers. Your code will be just as efficient and a lot less prone to memory leaks.
Use static_cast, dynamic_cast, const_cast, and reinterpret_cast instead of C-style casts. Unlike C-style casts they will let you know if you are really asking for a different type of cast than you think you are asking for. And they stand out viisually, alerting the reader that a cast is taking place.
A: A short list might be:
*
*Avoid memory leaks through use shared pointers to manage memory allocation and cleanup
*Use the Resource Acquisition Is Initialization (RAII) idiom to manage resource cleanup - especially in the presence of exceptions
*Avoid calling virtual functions in constructors
*Employ minimalist coding techniques where possible - for example, declaring variables only when needed, scoping variables, and early-out design where possible.
*Truly understand the exception handling in your code - both with regard to exceptions you throw, as well as ones thrown by classes you may be using indirectly. This is especially important in the presence of templates.
RAII, shared pointers and minimalist coding are of course not specific to C++, but they help avoid problems that do frequently crop up when developing in the language.
Some excellent books on this subject are:
*
*Effective C++ - Scott Meyers
*More Effective C++ - Scott Meyers
*C++ Coding Standards - Sutter & Alexandrescu
*C++ FAQs - Cline
Reading these books has helped me more than anything else to avoid the kind of pitfalls you are asking about.
A: I've already mentioned it a few times, but Scott Meyers' books Effective C++ and Effective STL are really worth their weight in gold for helping with C++.
Come to think of it, Steven Dewhurst's C++ Gotchas is also an excellent "from the trenches" resource. His item on rolling your own exceptions and how they should be constructed really helped me in one project.
A: Two gotchas that I wish I hadn't learned the hard way:
(1) A lot of output (such as printf) is buffered by default. If you're debugging crashing code, and you're using buffered debug statements, the last output you see may not really be the last print statement encountered in the code. The solution is to flush the buffer after each debug print (or turn off the buffering altogether).
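A sketch of both fixes (plain stdio; the checkpoint text is just an example):
#include <stdio.h>

int main() {
    setvbuf(stdout, NULL, _IONBF, 0); /* one option: turn off buffering entirely */
    printf("checkpoint A\n");
    fflush(stdout);                   /* the other: flush after each debug print */
    return 0;
}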
(2) Be careful with initializations - (a) avoid class instances as globals / statics; and (b) try to initialize all your member variables to some safe value in a ctor, even if it's a trivial value such as NULL for pointers.
Reasoning: the ordering of global object initialization is not guaranteed (globals includes static variables), so you may end up with code that seems to fail nondeterministically since it depends on object X being initialized before object Y. If you don't explicitly initialize a primitive-type variable, such as a member bool or enum of a class, you'll end up with different values in surprising situations -- again, the behavior can seem very nondeterministic.
A: Using C++ like C. Having a create-and-release cycle in the code.
In C++, this is not exception safe and thus the release may not be executed. In C++, we use RAII to solve this problem.
All resources that have a manual create and release should be wrapped in an object so these actions are done in the constructor/destructor.
// C Code
void myFunc()
{
Plop* plop = createMyPlopResource();
// Use the plop
releaseMyPlopResource(plop);
}
In C++, this should be wrapped in an object:
// C++
class PlopResource
{
public:
PlopResource()
{
mPlop=createMyPlopResource();
// handle exceptions and errors.
}
~PlopResource()
{
releaseMyPlopResource(mPlop);
}
private:
Plop* mPlop;
};
void myFunc()
{
PlopResource plop;
// Use the plop
// Exception safe release on exit.
}
A: Pitfalls in decreasing order of their importance
First of all, you should visit the award winning C++ FAQ. It has many good answers to pitfalls. If you have further questions, visit ##c++ on irc.freenode.org in IRC. We are glad to help you, if we can. Note all the following pitfalls are originally written. They are not just copied from random sources.
delete[] on new, delete on new[]
Solution: Doing the above yields undefined behavior: anything could happen. Understand your code and what it does, and always delete[] what you new[] and delete what you new; then that won't happen.
Exception:
typedef T type[N]; T * pT = new type; delete[] pT;
You need to delete[] even though you new, since you new'ed an array. So if you are working with typedef, take special care.
Calling a virtual function in a constructor or destructor
Solution: Calling a virtual function won't call the overriding functions in the derived classes. Calling a pure virtual function in a constructor or destructor is undefined behavior.
Calling delete or delete[] on an already deleted pointer
Solution: Assign 0 to every pointer you delete. Calling delete or delete[] on a null-pointer does nothing.
Taking the sizeof of a pointer, when the number of elements of an 'array' is to be calculated.
Solution: Pass the number of elements alongside the pointer when you need to pass an array as a pointer into a function. Use the function proposed here if you take the sizeof of an array that is supposed to be really an array.
Using an array as if it were a pointer. Thus, using T ** for a two dimensional array.
Solution: See here for why they are different and how you handle them.
Writing to a string literal: char * c = "hello"; *c = 'B';
Solution: Allocate an array that is initialized from the data of the string literal, then you can write to it:
char c[] = "hello"; *c = 'B';
Writing to a string literal is undefined behavior. Anyway, the above conversion from a string literal to char * is deprecated. So compilers will probably warn if you increase the warning level.
Creating resources, then forgetting to free them when something throws.
Solution: Use smart pointers like std::unique_ptr or std::shared_ptr as pointed out by other answers.
Modifying an object twice like in this example: i = ++i;
Solution: The above was supposed to assign to i the value of i+1. But what it does is not defined. Instead of incrementing i and assigning the result, it changes i on the right side as well. Changing an object between two sequence points is undefined behavior. Sequence points include ||, &&, comma-operator, semicolon and entering a function (non exhaustive list!). Change the code to the following to make it behave correctly: i = i + 1;
Misc Issues
Forgetting to flush streams before calling a blocking function like sleep.
Solution: Flush the stream by streaming either std::endl instead of \n or by calling stream.flush();.
Declaring a function instead of a variable.
Solution: The issue arises because the compiler interprets for example
Type t(other_type(value));
as a function declaration of a function t returning Type and having a parameter of type other_type which is called value. You solve it by putting parentheses around the first argument. Now you get a variable t of type Type:
Type t((other_type(value)));
Calling the function of a free object that is only declared in the current translation unit (.cpp file).
Solution: The standard doesn't define the order of creation of free objects (at namespace scope) defined across different translation units. Calling a member function on an object not yet constructed is undefined behavior. You can define the following function in the object's translation unit instead and call it from other ones:
House & getTheHouse() { static House h; return h; }
That would create the object on demand and leave you with a fully constructed object at the time you call functions on it.
Defining a template in a .cpp file, while it's used in a different .cpp file.
Solution: Almost always you will get errors like undefined reference to .... Put all the template definitions in a header, so that when the compiler is using them, it can already produce the code needed.
static_cast<Derived*>(base); if base is a pointer to a virtual base class of Derived.
Solution: A virtual base class is a base which occurs only once, even if it is inherited more than once by different classes indirectly in an inheritance tree. Doing the above is not allowed by the Standard. Use dynamic_cast to do that, and make sure your base class is polymorphic.
dynamic_cast<Derived*>(ptr_to_base); if base is non-polymorphic
Solution: The standard doesn't allow a downcast of a pointer or reference when the object passed is not polymorphic. It or one of its base classes has to have a virtual function.
Making your function accept T const **
Solution: You might think that's safer than using T **, but actually it will cause headaches for people who want to pass T**: the standard doesn't allow it. It gives a neat example of why it is disallowed:
int main() {
char const c = ’c’;
char* pc;
char const** pcc = &pc; //1: not allowed
*pcc = &c;
*pc = ’C’; //2: modifies a const object
}
Always accept T const* const* instead.
Another (closed) pitfalls thread about C++, so people looking for them will find them, is Stack Overflow question C++ pitfalls.
A: The book C++ Gotchas may prove useful.
A: Here are a few pits I had the misfortune to fall into. All these have good reasons which I only understood after being bitten by behaviour that surprised me.
*
*virtual functions in constructors aren't.
*Don't violate the ODR (One Definition Rule), that's what anonymous namespaces are for (among other things).
*Order of initialization of members depends on the order in which they are declared.
class bar {
vector<int> vec_;
unsigned size_; // Note size_ declared *after* vec_
public:
bar(unsigned size)
: size_(size)
, vec_(size_) // size_ is uninitialized
{}
};
*Default values and virtual have different semantics.
class base {
public:
virtual void foo(int i = 42) { cout << "base " << i; }
};
class derived : public base {
public:
virtual void foo(int i = 12) { cout << "derived " << i; }
};
derived d;
base& b = d;
b.foo(); // Outputs `derived 42`
A: The most important pitfalls for beginning developers is to avoid confusion between C and C++. C++ should never be treated as a mere better C or C with classes because this prunes its power and can make it even dangerous (especially when using memory as in C).
A: Check out boost.org. It provides a lot of additional functionality, especially their smart pointer implementations.
A: PRQA have an excellent and free C++ coding standard based on books from Scott Meyers, Bjarne Stroustrop and Herb Sutter. It brings all this information together in one document.
A: *
*Not reading the C++ FAQ Lite. It explains many bad (and good!) practices.
*Not using Boost. You'll save yourself a lot of frustration by taking advantage of Boost where possible.
A: Be careful when using smart pointers and container classes.
A: Avoid pseudo classes and quasi classes... Overdesign basically.
A: Forgetting to define a base class destructor virtual. This means that calling delete on a Base* won't end up destructing the derived part.
A: Some must have C++ books that will help you avoid common C++ pitfalls:
Effective C++
More Effective C++
Effective STL
The Effective STL book explains the vector of bools issue :)
A: Brian has a great list: I'd add "Always mark single argument constructors explicit (except in those rare cases you want automatic casting)."
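A quick sketch of why (Foo is a made-up class):
class Foo {
public:
    explicit Foo(int size); // without 'explicit', an int silently converts to a Foo
};

void f(Foo foo);

f(42);      // compile error with 'explicit': no implicit int -> Foo conversion
f(Foo(42)); // fine: the conversion is spelled out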
A: Read the book C++ Gotchas: Avoiding Common Problems in Coding and Design.
A: Keep the name spaces straight (including struct, class, namespace, and using). That's my number-one frustration when the program just doesn't compile.
A: To mess up, use straight pointers a lot. Instead, use RAII for almost anything, making sure of course that you use the right smart pointers. If you write "delete" anywhere outside a handle or pointer-type class, you're very likely doing it wrong.
A: *
*Blizpasta. That's a huge one I see a lot...
*Uninitialized variables are a huge mistake that students of mine make. A lot of Java folks forget that just saying "int counter" doesn't set counter to 0. Since you have to define variables in the h file (and initialize them in the constructor/setup of an object), it's easy to forget.
*Off-by-one errors on for loops / array access.
*Not properly cleaning object code when voodoo starts.
A:
*
*static_cast downcast on a virtual base class
Not really... Now about my misconception: I thought that A in the following was a virtual base class when in fact it's not; it's, according to 10.3.1, a polymorphic class. Using static_cast here seems to be fine.
struct B { virtual ~B() {} };
struct D : B { };
In summary, yes, this is a dangerous pitfall.
A: Always check a pointer before you dereference it. In C, you could usually count on a crash at the point where you dereference a bad pointer; in C++, you can create an invalid reference which will crash at a spot far removed from the source of the problem.
class SomeClass
{
...
void DoSomething()
{
++counter; // crash here!
}
int counter;
};
void Foo(SomeClass & ref)
{
...
ref.DoSomething(); // if DoSomething is virtual, you might crash here
...
}
void Bar(SomeClass * ptr)
{
Foo(*ptr); // if ptr is NULL, you have created an invalid reference
// which probably WILL NOT crash here
}
A: Intention is (x == 10):
if (x = 10) {
//Do something
}
I thought I would never make this mistake myself, but I actually did it recently.
A: The essay/article Pointers, references and Values is very useful. It talks avoid avoiding pitfalls and good practices. You can browse the whole site too, which contains programming tips, mainly for C++.
A: I spent many years doing C++ development. I wrote a quick summary of problems I had with it years ago. Standards-compliant compilers are not really a problem anymore, but I suspect the other pitfalls outlined are still valid.
A: Forgetting an & and thereby creating a copy instead of a reference.
This happened to me twice in different ways:
*
*One instance was in an argument list, which caused a large object to be put on the stack with the result of a stack overflow and crash of the embedded system.
*I forgot the & on an instance variable, with the effect that the object was copied. After registering as a listener to the copy I wondered why I never got the callbacks from the original object.
Both were rather hard to spot, because the difference is small and hard to see, and otherwise objects and references are used syntactically in the same way.
A: #include <boost/shared_ptr.hpp>
class A {
public:
void nuke() {
boost::shared_ptr<A> (this);
}
};
int main(int argc, char** argv) {
A a;
a.nuke();
return(0);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
} |
Q: How do I replicate content on a web farm We have a Windows Server Web Edition 2003 Web Farm.
What can we use that handles replication across the servers for:
Content & IIS Configuration (App Pools, Virtual Directories, etc...)
We will be moving to Windows 2008 in the near future, so I guess what options are there on Windows 2008 as well.
A: I'd look into Windows Distributed File System. It should be supported by both Windows Server 2003 & 2008.
A: Distributed File System (DFS) is good for content, especially if each server (or a number of servers) host a replica synced up with File Replication Service (FRS). So if you've got two servers, each has a complete replica, so one going down doesn't mean the site goes down.
If all servers in your 'cluster' will host a replica, the home directory in IIS can be configured to go against the local drive (e.g., D:). If you have more servers than replicas, then you should use the DFS mount point (\\domainname\dfsmountpointname).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Dynamically display Edit Control Block menu item in SharePoint I am trying to set up dynamic per-item menus (Edit Control Block) in SharePoint 2007. My goal is to have certain features that are available based on the current user's group membership.
I know that the CustomAction tag that controls the creation of this menu item has a Rights attribute. The problem that I have with this is that the groups I am using have identical rights in the site (ViewListItems, ManageAlerts, etc). The groups that we have set up deal more with function, such as Manager, Employee, etc. We want to be able to assign a custom feature to a group, and have the menu items associated with that feature visible only to members of that group. Everyone has the same basic site permissions, but will have extra options availble based on their login credentials.
I have seen several articles on modifying the Core.js file to hide items in the context menu, but they are an all-or-nothing approach. There is an interesting post at http://blog.thekid.me.uk/archive/2008/04/29/sharepoint-custom-actions-in-a-list-view-webpart.aspx that shows how to dynamically modify the Actions menu. It is trivial to modify this example to check the users group and show or hide the menu based on membership. Unfortunately, this example does not seem to apply to context menu items as evidenced here http://forums.msdn.microsoft.com/en-US/sharepointdevelopment/thread/c2259839-24c4-4a7e-83e5-3925cdd17c44/.
Does anyone know of a way to do this without using javascript? If not, what is the best way to check the user's group from javascript?
A: There are two different Javascript functions that you can implement for dynamically adding menu items to list item drop downs. Core.js (C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS\1033\CORE.JS) checks for the existence of these methods when generating the menu items for a selected list item. "Custom_AddDocLibMenuItems" and "Custom_AddListMenuItems" are the names of the Javascript methods.
One article that I think you can use to solve your specific problem, dynamic menu item customization based on user role membership, can be found here:
MSDN: Customizing the Context Menu of Document Library Items (note the process is exactly the same for any list type)
This article outlines how server side code can be executed to define the menu items that will be displayed:
[...] in more complex cases, you must retrieve the list of available commands from the server, because only there you can run your business logic and perhaps get the commands from a custom database. Typically, you want to do this if you are implementing a workflow solution where each document has its own process state, with commands associated to it.
The solution for this situation is to have the Custom_AddDocLibMenuItems dynamically call a custom ASP.NET page. This page takes the ID of the document library and the specific item on the query string, and returns an XML string containing all the information for the commands available for that particular document. These commands are available according to the document's process status (or some other custom business logic). [...]
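For reference, a minimal sketch of the list-item hook mentioned above, using the CAMOpt/CAMSep helpers that CORE.JS defines (the page URL and icon path here are hypothetical):
function Custom_AddListMenuItems(m, ctx) {
    // currentItemID is set by CORE.JS for the item whose menu is being built.
    var action = "window.location = '/MyFeature/Approve.aspx?ItemId=' + currentItemID;";
    CAMOpt(m, "Approve Item", action, "/_layouts/images/myicon.gif");
    CAMSep(m);    // separator before the built-in items
    return false; // false = still render the standard menu items
}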
A: Unfortunately this is not possible to accomplish without using javascript. The ECB doesn't render server controls defined as a custom action (unlike the SiteActions etc).
To learn how to accomplish this by using Javascript check out the following article:
http://www.helloitsliam.com/archive/2007/08/10/moss2007-%E2%80%93-item-level-menus-investigation.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Experience with Java clustering? Would like to hear from people about their experience with java clustering (ie. implementing HA solutions). aka . terracotta, JGroups etc. It doesn't have to be web apps. Experience writing custom stand alone servers would be great also.
UPDATE : I will be a bit more specific -> not that interested in Web App clustering (unless it can be pulled out and run standalone). I know it works. But we need a bit more than just session clustering. Examining solutions in terms of ease of programming, supported topologies (ie. single data center versus over the WAN ), number of supported nodes. Issues faced, workarounds. At the moment I am doing some POC (Proof of concept) work on Terracotta and JGroups to see if its worth the effort for our app (which is stand alone, outside of a web container).
A: Jboss clustering was very easy to get up and running.
It seems to work well for us.
A: You might want to take a look at Hazelcast. It is super lite, easy and free clustering platform with cluster API. If you are clustering your application state/data, Hazelcast can be great help with its distributed/partitioned, queue, map, set, list and lock implementations.
Regards,
-talip
http://www.hazelcast.com
A: You may look at Oracle Coherence (formerly Tangosole Coherence).
http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html
A: I saw a demonstration of GridGain at our local JUG and I was very impressed. The documentation is very complete and it's very easy to get it going. I haven't started using it yet, so I can't quite say that it's working for us.
http://www.gridgain.com/
A: JBossCache is a standalone open source project that JbossClustering makes use of in the Application Server.
Our company made use of it in our own custom network server; it's working well so far in development, though yet to be deployed.
It's a pretty simple API, and it comes in two flavors: a flat cache, or a "POJO Cache" that uses instrumentation to keep state across servers. Basically, updates to fields are propagated across the network using JGroups.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Moving from Visual Studio 2005 to 2008 and .NET 2.0 I'm currently using VS2005 Profesional and .NET 2.0, and since our project is rather large (25 projects in the solution), I'd like to try VS 2008, since its theoretically faster with larger projects.
Before doing such thing, i'd like to know if what I've read is true: can I use VS2008 in ".net 2.0" mode? I don't want my customers to install .net 3.0 or .3.5, I just want to install VS2008, open my solution and start working from there.
Is this possible?
P.S.: the solution is a C# Windows Forms project.
A: Yes it's possible. In the project properties you can target different versions of the .Net Framework going back to .NET 2.0.
Upgrading to VS 2008 will upgrade your Solution file and you won't be able to go back to VS 2005 unless you have backed up your solution
A: yes, vs2008 can "target" a framework, but i think by default, if converting from vs2005 - vs2008 it just keeps it at framework 2.0
A: It is possible to have a 2.0 project in VS 2008. You would just target .NET Framework 2.0 under the project properties.
Your solution will have to be converted to a VS9 solution however.
A: Yes you can run 2.0 with VS2008. Be sure to select that when you convert, however. When converting a project, I mistakenly left in the ASP.NET 3.5 conversion, and then all new files contained references to Linq, so it was a little bit of a pain to switch back to 2.0 when I realized the mistake.
A: Yes, the feature that enables this is Visual Studio 2008 is called multi-targeting. See this link for more information. To use it you simply open the Properties for your Project, and select the Target Framework you want from the drop-down list on that page.
A: And bear in mind that all this stuff about the projects being compatible between 2005 and 2008 is NOT true for C/C++ projects.
A: I had absolutely no problems switching to VS2008. :) Multi-targeting worked without a hitch.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Parsing XML Elements & Attributes with Perl So I wrote some perl that would parse results returned from the Amazon Web Services. I am using the XML::Simple package. For the most part, everything worked when I pulled out an element. However, the problem I ran into was when an element had an attribute as well. Then I get an error that the item is a Hash.
Here's what I did if I wanted to get the Running Time for a DVD: I just created an item to hold the specific info for this one-off item.
// XML
<ProductGroup>DVD</ProductGroup>
<RunningTime Units="minutes">90</RunningTime>
// Perl to parse XML
my $item = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, ForceArray => ['ASIN']);
$ProductGroup = $item->{Items}->{Item}->{ItemAttributes}->{ProductGroup};
if(ref($item->{Items}->{Item}->{ItemAttributes}->{RunningTime}) eq 'HASH'){
$RunningTimeXML = $xml->XMLin($content, KeyAttr => { Item => 'ASIN'}, NoAttr => 1);
$RunningTime = $RunningTimeXML->{Items}->{Item}->{ItemAttributes}->{RunningTime};
}
Is there a way I can access both elements and attributes from one item?
A: $item is a hashref that looks like this:
$item = {
'RunningTime' => {'content' => '90', 'Units' => 'minutes'},
'ProductGroup' => 'DVD'
};
Therefore you can get the running time like this:
$RunningTime = $item->{RunningTime}->{content}
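The Units attribute lives in the same hash, so the element text and the attribute are both reachable from the one item:
$RunningTime = $item->{RunningTime}->{content}; # "90"
$Units       = $item->{RunningTime}->{Units};   # "minutes"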
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to collect customer feedback? What's the best way to close the loop and have a desktop app "call home" with customer feedback? Right now our code will login to our SMTP server and send me some email.
A: The site GetSatisfaction has been an increasingly popular way to get customer feedback.
http://getsatisfaction.com/
GetSatisfaction is a community based site that builds a community around your application. Users can post questions, comments, and feedback about and application and get answers to their questions either from other members or from members of the development team themselves.
They also have an API so you can incorporate GetSatifaction into your app, and/or your site.
I've been playing with it for a couple of weeks and it is pretty cool. Kind of like stackoverflow, but for customer feedback.
A: Feedback from users and programmers simply is one of the most important points of development in my opinion. The whole web2.0 - beta - concept more or less is build around this concept and therefore there should be absolutely no pain involved whatsoever for the user. What does it have to do with your question? I think quite a bit. If you provide a feedback option, make it visible in your application, but don't annoy the user (like MS sometimes does with there feedback thingy on there website above all elements!!). Place it somewhere directly! visible, but discreet. What about a separate menu entry? Some leftover space in the statusbar? Put it there so it is accessible all the time. Why? People really liking your product or who are REALLY annoyed about something will probably find your feedback option in any case, but you will miss the small things. Imagine a user unsure about the value of his input "should I really write him?". This one will probably will not make the afford in searching and in the end these small things make a really outstanding product, don't they? OK, the user found your feedback form, but how should it look and what's next? Keep it simple and don't ask him dozens questions and provoke him with check- and radioboxes. Give him two input fields, one for a title and one for a long description. Not more and not less. Maybe a small text shortly giving him some info what might be useful (OS, program version etc., maybe his email), but leave all this up to him. How to get the message to you and how to show the user that his input counts? In most cases this is simple. Like levand suggested use http and post the comment on a private area on your site and provide a link to his input. After revisiting his input, make it public and accessible for all (if possible). There he can see your response and that you really care etc.. Why not use the mail approach? What about a firewall preventing him to access your site? Duo to spam in quite some modern routers these ports are by default closed and you certainly will not get any response from workers in bigger companies, however port 80 or 443 is often open... (maybe you should check, if the current browser have a proxy installed and use this one..). Although I haven't used GetSatisfaction yet, I somewhat disagree with Nick Hadded, because you don't want third parties to have access to possible private and confidential data. Additionally you want "one face to the customer" and don't want to open up your customers base to someone else. There is SOO much more to tell, but I don't want to get banned for tattling .. haha! THX for caring about the user! :)
A: You might be interested in UseResponse, open-source (yet not free) hosted customer feedback / idea gathering solution that will be released in December, 2001.
It should run on majority of PHP hosting environments (including shared ones) and according to it's authors it's absorbed only the best features of it's competitors (mentioned in other answers) while will have little-to-none flaws of these.
A: You could also have the application send a POST http request directly to a URL on your server.
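A minimal sketch of that approach in C# (the URL is hypothetical, and the fields should match whatever your server-side handler expects):
using System;
using System.IO;
using System.Net;
using System.Text;

static void SendFeedback(string name, string message)
{
    // Encode the feedback as a standard form post.
    string postData = "name=" + Uri.EscapeDataString(name) +
                      "&message=" + Uri.EscapeDataString(message);
    byte[] bytes = Encoding.UTF8.GetBytes(postData);

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://example.com/feedback");
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    request.ContentLength = bytes.Length;

    using (Stream stream = request.GetRequestStream())
    {
        stream.Write(bytes, 0, bytes.Length);
    }

    // Any 2xx response means the server accepted the feedback.
    using (WebResponse response = request.GetResponse()) { }
}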
A: What we are forgetting here, my friend, is this: is having a mere form on your website enough to convince users of how much effort a company puts into acting on that precious feedback?
A user's note to a company is a true image of the product or service it offers. In Web 2.0 culture, people feel proud of being part of the continuous development strategy preached by almost all companies nowadays.
A community engagement platform is the need of the hour: an entry point on your website that gains enough traction from visitors to get them talking about what they feel, leaving no stone unturned in gathering that precious feedback. That's where products like GetSatisfaction, UserRules or Zendesk come in.
A company's active community, with its fresh ideas, unresolved issues and of course testimonials, speaks volumes about the development strategy behind the product or service on offer.
A: Personally, I would also POST the information. However, I would send it to a PHP script that would then insert it into a MySQL database. This way, your data can be pre-sorted and pre-categorized for analysis later. It also gives you the potential to track multiple entries by single users.
A: There's quite a few options. This site makes the following suggestions
*
*http://www.suggestionbox.com/
*http://www.kampyle.com/
*http://getsatisfaction.com/
*http://www.feedbackify.com/
*http://uservoice.com/
*http://userecho.com/
*http://www.opinionlab.com/content/
*http://ideascale.com/
*http://sparkbin.net/
*http://www.gri.pe/
*http://www.dialogcentral.com/
*http://websitechat.net/en/
*http://www.anymeeting.com/
*http://www.facebook.com/
A: I would recommend just using pre built systems. Saves you the hassle.
Get an Insight is good: http://getaninsight.com/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What is a reasonable length limit on person "Name" fields? I have a simple webform that will allow unauthenticated users to input their information, including name. I gave the name field a limit of 50 characters to coincide with my database table where the field is varchar(50), but then I started to wonder.
Is it more appropriate to use something like the Text column type or should I limit the length of the name to something reasonable?
I'm using SQL Server 2005, in case that matters in your response.
EDIT: I did not see this broader question regarding similar issues.
A: If it's full name in one field, I usually go with 128 - 64/64 for first and last in separate fields - you just never know.
A: In the UK, there are a few government standards which deal successfully with the bulk of the UK population -- the Passport Office, the Driver & Vehicle Licensing Agency, the Deed Poll office, and the NHS. They use different standards, obviously.
Changing your name by Deed Poll allows 300 characters;
There is no legal limit on the length of your name, but we impose a limit of 300 characters (including spaces) for your full name.
The NHS uses 70 characters for patient names
PATIENT NAME
Format/length: max an70
The Passport Office allows 30+30 first/last and Driving Licenses (DVLA) is 30 total.
Note that other organisations will have their own restrictions about what they will show on the documents they produce — for HM Passport Office the limit is 30 characters each for your forename and your surname, and for the DVLA the limit is 30 characters in total for your full name.
A: @Ian Nelson: I'm wondering if others see the problem there.
Let's say you have split fields. That's 70 characters total, 35 for first name and 35 for last name. However, if you have one field, you neglect the space that separates first and last names, short changing you by 1 character. Sure, it's "only" one character, but that could make the difference between someone entering their full name and someone not. Therefore, I would change that suggestion to "35 characters for each of Given Name and Family Name, or 71 characters for a single field to hold the Full Name".
A: I know I'm late on this one, but I'll add this comment anyway, as others may well come here in the future with similar questions.
Beware of tweaking column sizes dependent on locale. For a start, it sets you up for a maintenance nightmare, leaving aside the fact that people migrate, and take their names with them.
For example, Spanish people with those extra surnames can move to and live in an English-speaking country, and can reasonably expect their full name to be used. Russians have patronymics in addition to their surnames, some African names can be considerably longer than most European names.
Go with making each column as wide as you can reasonably do, taking into account the potential row count. I use 40 characters each for first name, other given names and surname and have never found any problems.
A: We use 50.
A: What you're really asking is a related, but substantially different question: how often do I want to truncate names in order to fit them in the database? The answer depends both on the frequency of different lengths of names as well as the maximum lengths chosen. This concern is balanced by the concerns about resources used by the database. Considering how little overhead difference there is between different max lengths for a varchar field I'd generally err on the side of never being forced to truncate a name and make the field as large as I dared.
A: The answer may differ for a database field which is used to store the name, and for a field in a HTML form.
The length of the Name filed in a HTML can be guided by UX.
There is a study which shows that in Europe, cite: "The median was 6.5 characters for the first names and 7.1 characters for the last names". If you look at the charts below you'll see that 10 characters for each, given name and family name, is enough to have optimal UX.
Also should be noted that governmental databases can't shorten names for obvious reasons. You probably can. They can afford extra storage. You probably not.
A: UK Government Data Standards Catalogue suggests 35 characters for each of Given Name and Family Name, or 70 characters for a single field to hold the Full Name.
A: I usually go with varchar(255) (255 being the maximum length of a varchar type in MySQL).
A: Note that many cultures have 'second surnames' often called family names. For example, if you are dealing with Spanish people, they will appreciate having a family name separated from their 'surname'.
Best bet is to define a data type for the name components, use those for a data type for the surname and tweak depending on locale.
A: The average first name is about 6 letters. That leaves 43 for a last name. :) Seems like you could probably shorten it if you like.
The main question is how many rows do you think you will have? I don't think varchar(50) is going to kill you until you get several million rows.
A: depending on who is going to be using your database, for example African names will do with varchar(20) for last name and first name separated. however it is different from nation to nation but for the sake saving your database resources and memory, separate last name and first name fields and use varchar(30) think that will work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "162"
} |
Q: Compare Version Identifiers Here is my code, which takes two version identifiers in the form "1, 5, 0, 4" or "1.5.0.4" and determines which is the newer version.
Suggestions or improvements, please!
/// <summary>
/// Compares two specified version strings and returns an integer that
/// indicates their relationship to one another in the sort order.
/// </summary>
/// <param name="strA">the first version</param>
/// <param name="strB">the second version</param>
/// <returns>less than zero if strA is less than strB, equal to zero if
/// strA equals strB, and greater than zero if strA is greater than strB</returns>
public static int CompareVersions(string strA, string strB)
{
char[] splitTokens = new char[] {'.', ','};
string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
int[] versionA = new int[4];
int[] versionB = new int[4];
for (int i = 0; i < 4; i++)
{
versionA[i] = Convert.ToInt32(strAsplit[i]);
versionB[i] = Convert.ToInt32(strBsplit[i]);
}
// now that we have parsed the input strings, compare them
return RecursiveCompareArrays(versionA, versionB, 0);
}
/// <summary>
/// Recursive function for comparing arrays, 0-index is highest priority
/// </summary>
private static int RecursiveCompareArrays(int[] versionA, int[] versionB, int idx)
{
if (versionA[idx] < versionB[idx])
return -1;
else if (versionA[idx] > versionB[idx])
return 1;
else
{
Debug.Assert(versionA[idx] == versionB[idx]);
if (idx == versionA.Length - 1)
return 0;
else
return RecursiveCompareArrays(versionA, versionB, idx + 1);
}
}
@ Darren Kopp:
The version class does not handle versions of the format 1.0.0.5.
A: Use the Version class.
Version a = new Version("1.0.0.0");
Version b = new Version("2.0.0.0");
Console.WriteLine(string.Format("Newer: {0}", (a > b) ? "a" : "b"));
// prints b
A: The System.Version class does not support versions with commas in it, so the solution presented by Darren Kopp is not sufficient.
Here is a version that is as simple as possible (but no simpler).
It uses System.Version but achieves compatibility with version numbers like "1, 2, 3, 4" by doing a search-replace before comparing.
/// <summary>
/// Compare versions of form "1,2,3,4" or "1.2.3.4". Throws FormatException
/// in case of invalid version.
/// </summary>
/// <param name="strA">the first version</param>
/// <param name="strB">the second version</param>
/// <returns>less than zero if strA is less than strB, equal to zero if
/// strA equals strB, and greater than zero if strA is greater than strB</returns>
public static int CompareVersions(String strA, String strB)
{
Version vA = new Version(strA.Replace(",", "."));
Version vB = new Version(strB.Replace(",", "."));
return vA.CompareTo(vB);
}
The code has been tested with:
static void Main(string[] args)
{
Test("1.0.0.0", "1.0.0.1", -1);
Test("1.0.0.1", "1.0.0.0", 1);
Test("1.0.0.0", "1.0.0.0", 0);
Test("1, 0.0.0", "1.0.0.0", 0);
Test("9, 5, 1, 44", "3.4.5.6", 1);
Test("1, 5, 1, 44", "3.4.5.6", -1);
Test("6,5,4,3", "6.5.4.3", 0);
try
{
CompareVersions("2, 3, 4 - 4", "1,2,3,4");
Console.WriteLine("Exception should have been thrown");
}
catch (FormatException e)
{
Console.WriteLine("Got exception as expected.");
}
Console.ReadLine();
}
private static void Test(string lhs, string rhs, int expected)
{
int result = CompareVersions(lhs, rhs);
Console.WriteLine("Test(\"" + lhs + "\", \"" + rhs + "\", " + expected +
(result.Equals(expected) ? " succeeded." : " failed."));
}
A: Well, since you only have a four element array you may just want to unroll the recursion to save time. Passing arrays as arguments will eat up memory and leave a mess for the GC to clean up later.
A: If you can assume that each place in the version string will only be one digit (or at least the last 3), you can just remove the commas or periods and compare, which would be a lot faster. It's not as robust, but you don't always need that.
public static int CompareVersions(string strA, string strB)
{
char[] splitTokens = new char[] {'.', ','};
string[] strAsplit = strA.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
string[] strBsplit = strB.Split(splitTokens, StringSplitOptions.RemoveEmptyEntries);
int versionA = 0;
int versionB = 0;
string vA = string.Empty;
string vB = string.Empty;
for (int i = 0; i < 4; i++)
{
vA += strAsplit[i];
vB += strBsplit[i];
}
versionA = Convert.ToInt32(vA);
versionB = Convert.ToInt32(vB);
if (versionA > versionB)
    return 1;
else if (versionA < versionB)
    return -1;
else
    return 0; //they are equal
}
And yes, I'm also assuming 4 version places here...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: Programmatically retrieve Visual Studio install directory I know there is a registry key indicating the install directory, but I don't remember what it is off-hand.
I am currently interested in Visual Studio 2008 install directory, though it wouldn't hurt to list others for future reference.
A: Environment: Thanks to Zeb and Sam for the VS*COMNTOOLS environment variable suggestion. To get to the IDE in PowerShell:
$vs = Join-Path $env:VS90COMNTOOLS '..\IDE\devenv.exe'
Registry: Looks like the registry location is HKLM\Software\Microsoft\VisualStudio, with version-specific subkeys for each install. In PowerShell:
$vsRegPath = 'HKLM:\Software\Microsoft\VisualStudio\9.0'
$vs = (Get-ItemProperty $vsRegPath).InstallDir + 'devenv.exe'
[Adapted from here]
A: For Visual Studio 2017 and Visual Studio 2019 there is the Setup API from Microsoft.
In C#, just add the NuGet package "Microsoft.VisualStudio.Setup.Configuration.Interop", and use it in this way:
const int REGDB_E_CLASSNOTREG = unchecked((int)0x80040154);
try {
var query = new SetupConfiguration();
var query2 = (ISetupConfiguration2)query;
var e = query2.EnumAllInstances();
var helper = (ISetupHelper)query;
int fetched;
var instances = new ISetupInstance[1];
do {
e.Next(1, instances, out fetched);
if (fetched > 0)
Console.WriteLine(instances[0].GetInstallationPath());
}
while (fetched > 0);
return 0;
}
catch (COMException ex) when (ex.HResult == REGDB_E_CLASSNOTREG) {
Console.WriteLine("The query API is not registered. Assuming no instances are installed.");
return 0;
}
You can find more samples for VC, C#, and VB here.
A: It is a real problem that all Visual Studio versions have their own location. So the solutions here proposed are not generic. However, Microsoft has made a utility available for free (including the source code) that solved this problem (i.e. annoyance). It is called vswhere.exe and you can download it from here. I am very happy with it, and hopefully it will also do for future releases. It makes the whole discussion on this page redundant.
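For example, assuming vswhere.exe is on your PATH, the following documented options print the installation path of the newest instance:
vswhere -latest -property installationPath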
A: I use this method to find the installation path of Visual Studio 2010:
private string GetVisualStudioInstallationPath()
{
string installationPath = null;
if (Environment.Is64BitOperatingSystem)
{
installationPath = (string)Registry.GetValue(
"HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\VisualStudio\\10.0\\",
"InstallDir",
null);
}
else
{
installationPath = (string)Registry.GetValue(
"HKEY_LOCAL_MACHINE\\SOFTWARE \\Microsoft\\VisualStudio\\10.0\\",
"InstallDir",
null);
}
return installationPath;
}
A: Ah, the 64-bit machine part was the issue. It turns out I need to make sure I'm running the PowerShell.exe under the syswow64 directory in order to get the x86 registry keys.
Now that wasn't very fun.
A: Use Environment.GetEnvironmentVariable("VS90COMNTOOLS");.
Also in a 64-bit environment, it works for me.
A: Here's a solution to always get the path for the latest version:
$vsEnvVars = (dir Env:).Name -match "VS[0-9]{1,3}COMNTOOLS"
$latestVs = $vsEnvVars | Sort-Object | Select -Last 1
$vsPath = Get-Content Env:\$latestVs
A: @Dim-Ka has a great answer. If you were curious how you'd implement this in a batch script, this is how.
@echo off
:: BATCH doesn't have logical or, otherwise I'd use it
SET platform=
IF /I [%PROCESSOR_ARCHITECTURE%]==[amd64] set platform=true
IF /I [%PROCESSOR_ARCHITEW6432%]==[amd64] set platform=true
:: default to VS2012 = 11.0
:: the Environment variable VisualStudioVersion is set by devenv.exe
:: if this batch is a child of devenv.exe external tools, we know which version to look at
if not defined VisualStudioVersion SET VisualStudioVersion=11.0
if defined platform (
set VSREGKEY=HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\%VisualStudioVersion%
) ELSE (
set VSREGKEY=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\%VisualStudioVersion%
)
for /f "skip=2 tokens=2,*" %%A in ('reg query "%VSREGKEY%" /v InstallDir') do SET VSINSTALLDIR=%%B
echo %VSINSTALLDIR%
A: I'm sure there's a registry entry as well but I couldn't easily locate it. There is the VS90COMNTOOLS environment variable that you could use as well.
A: Registry Method
I recommend querying the registry for this information. This gives the actual installation directory without the need for combining paths, and it works for express editions as well. This could be an important distinction depending on what you need to do (e.g. templates get installed to different directories depending on the edition of Visual Studio). The registry locations are as follows (note that Visual Studio is a 32-bit program and will be installed to the 32-bit section of the registry on x64 machines):
*
*Visual Studio: HKLM\SOFTWARE\Microsoft\Visual Studio\Major.Minor:InstallDir
*Visual C# Express: HKLM\SOFTWARE\Microsoft\VCSExpress\Major.Minor:InstallDir
*Visual Basic Express: HKLM\SOFTWARE\Microsoft\VBExpress\Major.Minor:InstallDir
*Visual C++ Express: HKLM\SOFTWARE\Microsoft\VCExpress\Major.Minor:InstallDir
where Major is the major version number, Minor is the minor version number, and the text after the colon is the name of the registry value. For example, the installation directory of Visual Studio 2008 Professional would be located at the HKLM\SOFTWARE\Microsoft\Visual Studio\9.0 key, in the InstallDir value.
Here's a code example that prints the installation directory of several versions of Visual Studio and Visual C# Express:
string visualStudioRegistryKeyPath = @"SOFTWARE\Microsoft\VisualStudio";
string visualCSharpExpressRegistryKeyPath = @"SOFTWARE\Microsoft\VCSExpress";
List<Version> vsVersions = new List<Version>() { new Version("10.0"), new Version("9.0"), new Version("8.0") };
foreach (var version in vsVersions)
{
foreach (var isExpress in new bool[] { false, true })
{
RegistryKey registryBase32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32);
RegistryKey vsVersionRegistryKey = registryBase32.OpenSubKey(
string.Format(@"{0}\{1}.{2}", (isExpress) ? visualCSharpExpressRegistryKeyPath : visualStudioRegistryKeyPath, version.Major, version.Minor));
if (vsVersionRegistryKey == null) { continue; }
Console.WriteLine(vsVersionRegistryKey.GetValue("InstallDir", string.Empty).ToString());
    }
}
Environment Variable Method
The non-express editions of Visual Studio also write an environment variable that you could check, but it gives the location of the common tools directory, not the installation directory, so you'll have to do some path combining. The format of the environment variable is VS*COMNTOOLS where * is the major and minor version number. For example, the environment variable for Visual Studio 2010 is VS100COMNTOOLS and contains a value like C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools.
Here's some example code to print the environment variable for several versions of Visual Studio:
List<Version> vsVersions = new List<Version>() { new Version("10.0"), new Version("9.0"), new Version("8.0") };
foreach (var version in vsVersions)
{
Console.WriteLine(Path.Combine(Environment.GetEnvironmentVariable(string.Format("VS{0}{1}COMNTOOLS", version.Major, version.Minor)), @"..\IDE"));
}
A: You can read the VSINSTALLDIR environment variable.
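For example (note that VSINSTALLDIR is typically only set inside a Visual Studio command prompt, not in an ordinary shell):
string vsDir = Environment.GetEnvironmentVariable("VSINSTALLDIR");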
A: Here is something I have been updating over the years... (for CudaPAD)
Usage examples:
var vsPath = VS_Tools.GetVSPath(avoidPrereleases:true, requiredWorkload:"NativeDesktop");
var vsPath = VS_Tools.GetVSPath();
var vsPath = VS_Tools.GetVSPath(specificVersion:"15");
The drop-in function:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.Setup.Configuration;
using System.IO;
using Microsoft.Win32;
static class VS_Tools
{
public static string GetVSPath(string specificVersion = "", bool avoidPrereleases = true, string requiredWorkload = "")
{
string vsPath = "";
// Method 1 - use "Microsoft.VisualStudio.Setup.Configuration.SetupConfiguration" method.
// Note: This code is a heavily modified version of Heath Stewart's code.
// original source: (Heath Stewart, May 2016) https://github.com/microsoft/vs-setup-samples/blob/80426ad4ba10b7901c69ac0fc914317eb65deabf/Setup.Configuration.CS/Program.cs
try
{
var e = new SetupConfiguration().EnumAllInstances();
int fetched;
var instances = new ISetupInstance[1];
do
{
e.Next(1, instances, out fetched);
if (fetched > 0)
{
var instance2 = (ISetupInstance2)instances[0];
var state = instance2.GetState();
// Let's make sure this install is complete.
if (state != InstanceState.Complete)
continue;
// If we have a version to match lets make sure to match it.
if (!string.IsNullOrWhiteSpace(specificVersion))
if (!instances[0].GetInstallationVersion().StartsWith(specificVersion))
continue;
// If the instance doesn't expose a setup catalog, skip it
var catalog = instances[0] as ISetupInstanceCatalog;
if (catalog == null)
    continue;
// If there is no installation path, skip it
if ((state & InstanceState.Local) != InstanceState.Local)
continue;
// Let's make sure it has the required workload - if one was given.
if (!string.IsNullOrWhiteSpace(requiredWorkload))
{
if ((state & InstanceState.Registered) == InstanceState.Registered)
{
if (!(from package in instance2.GetPackages()
where string.Equals(package.GetType(), "Workload", StringComparison.OrdinalIgnoreCase)
where package.GetId().Contains(requiredWorkload)
orderby package.GetId()
select package).Any())
{
continue;
}
}
else
{
continue;
}
}
// Let's save the installation path and make sure it has a value.
vsPath = instance2.GetInstallationPath();
if (string.IsNullOrWhiteSpace(vsPath))
continue;
// If specified, avoid Pre-release if possible
if (avoidPrereleases && catalog.IsPrerelease())
continue;
// We found the one we need - lets get out of here
return vsPath;
}
}
while (fetched > 0);
}
catch (Exception){ }
if (!string.IsNullOrWhiteSpace(vsPath))
    return vsPath;
// Fall-back Method: Find the location of visual studio (%VS90COMNTOOLS%\..\..\vc\vcvarsall.bat)
// Note: This code is a heavily modified version of Kevin Kibler's code.
// source: (Kevin Kibler, 2014) http://stackoverflow.com/questions/30504/programmatically-retrieve-visual-studio-install-directory
List<Version> vsVersions = new List<Version>() { new Version("15.0"), new Version("14.0"),
new Version("13.0"), new Version("12.0"), new Version("11.0") };
foreach (var version in vsVersions)
{
foreach (var isExpress in new bool[] { false, true })
{
RegistryKey registryBase32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32);
RegistryKey vsVersionRegistryKey = registryBase32.OpenSubKey(
string.Format(@"{0}\{1}.{2}",
(isExpress) ? @"SOFTWARE\Microsoft\VCSExpress" : @"SOFTWARE\Microsoft\VisualStudio",
version.Major, version.Minor));
if (vsVersionRegistryKey == null) { continue; }
string path = vsVersionRegistryKey.GetValue("InstallDir", string.Empty).ToString();
if (!string.IsNullOrEmpty(path))
{
path = Directory.GetParent(path).Parent.Parent.FullName;
if (File.Exists(path + @"\VC\bin\cl.exe") && File.Exists(path + @"\VC\vcvarsall.bat"))
{
vsPath = path;
break;
}
}
}
if (!string.IsNullOrWhiteSpace(vsPath))
break;
}
return vsPath;
}
}
A: Nowadays, I use the following PowerShell command to get the Visual Studio 2017/2019 path (here with the Common7\IDE suffix, so it mimics the DevEnvDir property):
Get-ChildItem HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall | foreach { Get-ItemProperty $_.PsPath } | where { $_.DisplayName -like '*Visual Studio*' -and $_.InstallLocation.Length -gt 0 } | sort InstallDate -Descending | foreach { (Join-Path $_.InstallLocation 'Common7\IDE') } | where { Test-Path $_ } | select -First 1
If you want to execute it from cmd.exe, the command would look like this:
powershell.exe -ExecutionPolicy Bypass -Command "Get-ChildItem HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall | foreach { Get-ItemProperty $_.PsPath } | where { $_.DisplayName -like '*Visual Studio*' -and $_.InstallLocation.Length -gt 0 } | sort InstallDate -Descending | foreach { (Join-Path $_.InstallLocation 'Common7\IDE') } | where { Test-Path $_ } | select -First 1"
I am using it in a C# project, where I use Rider instead of Visual Studio as my IDE (of course I could have also just manually setup the DevEnvDir property in Rider's settings):
<Target Name="MyTarget" BeforeTargets="Build">
<Exec Condition="'$(DevEnvDir)' == '' Or '$(DevEnvDir)' == '*Undefined*' Or !Exists('$(DevEnvDir)')"
Command="powershell.exe -ExecutionPolicy Bypass -Command "Get-ChildItem HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall | foreach { Get-ItemProperty $_.PsPath } | where { $_.DisplayName -like '*Visual Studio*' -and $_.InstallLocation.Length -gt 0 } | sort InstallDate -Descending | foreach { (Join-Path $_.InstallLocation 'Common7\IDE') } | where { Test-Path $_ } | select -First 1""
ConsoleToMSBuild="true">
<Output TaskParameter="ConsoleOutput" PropertyName="DevEnvDir" />
</Exec>
</Target>
I use it to get the path to the VS command prompt batch files (like vcvars64.bat or vcvarsall.bat), so I can invoke them before calling MIDL.exe to generate a type library for my IDL file, so my .NET 5 COM classes can register a type library for themselves when the comhost.dll is being registered via regsvr32.exe.
A: Aren't there environment settings?
I have VCToolkitInstallDir and VS71COMNTOOLS although I'm using Visual Studio 2003, I don't know if that changed for later versions. Type "set V" at the command line and see if you have them.
A: Note that if you're using Visual Studio Express or Visual C++ Express the keynames contain WDExpress or VCExpress, respectively, instead of VisualStudio.
A: This is the easiest solution I came up with. It works for x86 and x64, regardless of VS version:
Use
Environment.GetEnvironmentVariable("VSAPPIDDIR")
To get the IDE folder, such as:
"C:\Program Files\Microsoft Visual Studio\2019\Community\Common7\IDE\" On x86 machine.
You can use that to go to any other directory you want, such as:
Dim x = Environment.GetEnvironmentVariable("VSAPPIDDIR").Trim("\"c, "/"c)
x = System.IO.Path.GetDirectoryName(x)
Dim XsdFile = IO.Path.Combine(x, "Packages\Schemas\html\html_5.xsd")
In x64 machine XsdFile will refer to:
"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\Packages\Schemas\html\html_5.xsd"
Caution: This seems to work with Community edition only!
A: For newer versions of VS it is better to use the APIs provided by Microsoft, because install information is no longer maintained correctly in the registry.
*
*install Nuget package Microsoft.VisualStudio.Setup.Configuration.Native
*do the trick (returned is tuple with version and path of all VS instances):
private const int REGDB_E_CLASSNOTREG = unchecked((int)0x80040154);
public static IEnumerable<(string, string)> GetVisualStudioInstallPaths()
{
var result = new List<(string, string)>();
try
{
var query = new SetupConfiguration() as ISetupConfiguration2;
var e = query.EnumAllInstances();
int fetched;
var instances = new ISetupInstance[1];
do
{
e.Next(1, instances, out fetched);
if (fetched > 0)
{
var instance2 = (ISetupInstance2)instances[0];
result.Add((instance2.GetInstallationVersion(), instance2.GetInstallationPath()));
}
}
while (fetched > 0);
}
catch (COMException ex) when (ex.HResult == REGDB_E_CLASSNOTREG)
{
}
catch (Exception)
{
}
return result;
}
Regards
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Display solution/file path in the Visual Studio IDE I frequently work with multiple instances of Visual Studio, often working on different branches of the same solution.
Visual C++ 6.0 used to display the full path of the current source file in its title bar, but Visual Studio 2005 doesn't appear to do this. This makes it slightly more awkward than it should be to work out which branch of the solution I'm currently looking at (the quickest way I know of is to hover over a tab so you get the source file's path as a tooltip).
Is there a way to get the full solution or file path into the title bar, or at least somewhere that's always visible, so I can quickly tell which branch is loaded into each instance?
A: For Visual Studio 2008, a slightly better way to write the macro from the accepted answer is to use the Solution events instead of the document ones - this lets you always edit the title bar, even if you don't have a document selected.
Here's the macro my coworker and I put together based on the other one - you'll want to change lines 15-18 to pull your branch name from the source directory for however you're set up.
Private timer As System.Threading.Timer
Declare Auto Function SetWindowText Lib "user32" (ByVal hWnd As System.IntPtr, ByVal lpstring As String) As Boolean
Private _branchName As String = String.Empty
Private Sub SolutionEvents_Opened() Handles SolutionEvents.Opened
Try
If timer Is Nothing Then
' Create timer which refreshes the caption because
' IDE resets the caption very often
Dim autoEvent As New System.Threading.AutoResetEvent(False)
Dim timerDelegate As System.Threading.TimerCallback = _
AddressOf tick
timer = New System.Threading.Timer(timerDelegate, autoEvent, 0, 25)
End If
Dim sourceIndex As Integer = DTE.Solution.FullName.IndexOf("\Source")
Dim shortTitle As String = DTE.Solution.FullName.Substring(0, sourceIndex)
Dim lastIndex As Integer = shortTitle.LastIndexOf("\")
_branchName = shortTitle.Substring(lastIndex + 1)
showTitle(_branchName)
Catch ex As Exception
End Try
End Sub
Private Sub SolutionEvents_BeforeClosing() Handles SolutionEvents.BeforeClosing
If Not timer Is Nothing Then
timer.Dispose()
End If
End Sub
''' <summary>Dispose the timer on IDE shutdown.</summary>
Public Sub DTEEvents_OnBeginShutdown() Handles DTEEvents.OnBeginShutdown
If Not timer Is Nothing Then
timer.Dispose()
End If
End Sub
'''<summary>Called by timer.</summary>
Public Sub tick(ByVal state As Object)
Try
showTitle(_branchName)
Catch ex As System.Exception
End Try
End Sub
'''<summary>Shows the title in main window.</summary>
Private Sub showTitle(ByVal title As String)
SetWindowText(New System.IntPtr(DTE.MainWindow.HWnd), title & " - " & DTE.Name)
End Sub
A: This is an extension available in the online gallery specifically tailored for this job. Check out Labs > Visual Studio Extension: Customize Visual Studio Window Title.
A: It's awkward indeed. Hovering over the tab is one of the few things that works.
Alternative: right click on the file tab: Find your File Path in Visual Studio. It seems we have to make do with that.
A: How to customise the Visual Studio window title
Install the Customize Visual Studio Window Title plugin.
After installing the extension, the settings can be found in the menu.
Menu Tools ► Options ► Customize VS Window Title.
More information
Customize Visual Studio Window Title is a lightweight extension to Visual Studio, which allows you to change the window title to include a folder tree:
Features
*
*A configurable minimum and maximum depth distance from the solution/project file
*Allows the use of special tags to help with many other possible scenarios, which include Git, Mercurial, and TFS.
A: There is not a native way to do it, but you can achieve it with a macro. The details are described here in full: How To Show Full File Path (or Anything Else) in VS 2005 Title Bar
You just have to add a little Visual Basic macro to the EnvironmentEvents macro section and restart Visual Studio.
Note: The path will not show up when you first load Visual Studio, but it will whenever you change which file you are viewing. There is probably a way to fix this, but it doesn't seem like a big deal.
A: I am using VSCommands 10 to show the full path of the solution file open.
Friendly Name: {repo}
Solution Path Regex: (?<repo>.*)
Now my main title window looks like this:
c:\repositories\acme.marketplace.trunk\Acme.Marketplace.web\Acme.Marketplace.Web.sln
I can quickly glance and see that I am working in the trunk folder or a rc folder because we use Mercurial (Hg) and keep separate folders for trunk, rc, preprod, prod like this:
c:\repositories\acme.marketplace.rc1
c:\repositories\acme.marketplace.rc2
c:\repositories\acme.marketplace.trunk
c:\repositories\acme.marketplace.preprod
c:\repositories\acme.marketplace.prod
A: As Dan also mentioned it in a comment, the File Path On Footer extension serves the same purpose.
A: Check out the latest release of VSCommands 2010 Lite. It introduced a feature called Friendly Solution Name where you can set it to display the solution file path (or any part of it) in Visual Studio's main window title.
More details: http://vscommands.com/releasenotes/3.6.8.0 and http://vscommands.com/releasenotes/3.6.9.0
A: Related note: As an alternative, for Visual Studio 2005 you can use the command menu File → Advanced Save Options. The dialog displays the full path of the current file, and you are able to copy the text.
A: Use the MKLINK command to create a link to your existing solution. As far as Visual Studio is concerned, it's working with the link file, but any changes go to the underlying .sln file.
I wrote a blog entry here about it...
http://willissoftware.com/?p=72
A: For the people that didn't get the VB method to work (like me) you can use a plugin:
Customize Visual Studio Window Title
It was tested it in Visual Studio 2008 Ultimate. You can configure it in the Options menu of Visual Studio.
A: TabsStudio | US$49
It is a pretty good (although paid) Visual Studio extension that provides:
*
*Tab grouping
*Tab coloring
*Title transformation
*Lots of customization and extensions
File Path On Footer | Free
It displays the full file path on the bottom of the editor window:
Honorable Mention: Visual Studio Code
Visual Studio Code version 1.26 implemented breadcrumbs which displays the file path in a separate row at the top of the editor window when using tabs or inline the file name when in its own window.
A: If you are using Visual Studio 2010 or above you can use the extension "Visual Studio Window Title Changer".
Install this and use the following 'Window Title Setup' expression to display the solution path:
'sln_dir + "/" + orig_title'
Use the extension manager to download and install the extension. Details of the extension and how to use it can be found here:
https://visualstudiogallery.msdn.microsoft.com/2e8ebfe4-023f-4c4d-9b7a-d05bbc5cb239?SRC=VSIDE
A: File > Preferences > Settings >> Window:Title
I just changed ${activeEditorShort} => ${activeEditorLong}
within the setting:
${dirty}${activeEditorLong}${separator}${rootName}${separator}${appName}
Worked immediately when I clicked a file.
Great help right in the setting ...
Window: Title --
Controls the window title based on the active editor. Variables are substituted based on the context:
${activeEditorShort}: the file name (e.g. myFile.txt).
${activeEditorMedium}: the path of the file relative to the workspace folder (e.g. myFolder/myFileFolder/myFile.txt).
...
Visual Studio Code
Version: 1.56.2
Date: 2021-05-12
I found one reference saying this existed since 2017.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "84"
} |
Q: Qt Child Window Placement Is there a way to specify a child's initial window position in Qt?
I have an application that runs on Linux and Windows and it looks like the default behavior of Qt lets the Window Manager determine the placement of the child windows.
On Windows, this is in the center of the screen the parent is on which seems reasonable.
On Linux, in GNOME (metacity) it is always in the upper left-hand corner which is annoying. I can't find any window manager preferences for metacity that allow me to control window placement so I would like to override that behavior.
A: Qt Widget Geometry
Call the move(x, y) method on the child window before show(). The default values for x and y are 0 so that's why it appears in the upper left-hand corner.
You can also use the position of the parent window to compute a relative position for the child.
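For example, a minimal sketch that centers a child window over its parent before showing it (plain QWidget calls; the variable names are illustrative):
// Position the child so its center matches the parent's center.
QPoint center = parent->geometry().center();
child->move(center.x() - child->width() / 2,
            center.y() - child->height() / 2);
child->show();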
A: Generally, I'd recommend not forcing window positions unless your application has some very special windowing requirements. It's the window manager's job to determine where new windows are put and most of them do a good job. If MetaCity isn't picking a good position, then that's its problem.
If you do your own window placement you may get a better result then what a poor window manager would give, but you'll also miss out on the intelligent window placement algorithms available in more advanced window managers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to implement Type-safe COM enumerations? How could I implement type-safe enumerations in Delphi in a COM scenario? Basically, I'd like to replace a set of primitive constants of an enumeration with a set of static final object references encapsulated in a class.
In Java, we can do something like:
public final class Enum
{
public static final Enum ENUMITEM1 = new Enum ();
public static final Enum ENUMITEM2 = new Enum ();
//...
private Enum () {}
}
and make comparisons using the customized enumeration type:
if (anObject != Enum.ENUMITEM1) ...
I am currently using the old Delphi 5 and I would like to declare some enum parameters on the interfaces, not allowing client objects to pass integer (or long) types in place of the required enumeration type.
Do you have a better way of implementing enums other than using the native Delphi enums?
A: Native Delphi enumerations are already type-safe. Java enumerations were an innovation for that language, because before that it didn't have enumerations at all. However, perhaps you mean a different feature: enumeration values prefixed by their type name.
Upcoming Delphi 2009, and the last version of the Delphi for .NET product, support a new directive called scoped enums. It looks like this:
{$APPTYPE CONSOLE}
{$SCOPEDENUMS ON}
type
TFoo = (One, Two, Three);
{$SCOPEDENUMS OFF}
var
x: TFoo;
begin
x := TFoo.One;
if not (x in [TFoo.Two, TFoo.Three]) then
Writeln('OK');
end.
A: What is wrong with native Delphi enums? They are type safe.
type
TMyEnum = (Item1, Item2, Item3);
if MyEnum <> Item1 then...
Since Delphi 2005 you can have consts in a class, but Delphi 5 can not.
type
TMyEnum = sealed class
public
const Item1 = 0;
const Item2 = 1;
const Item3 = 2;
end;
A: Now you have provided us with some more clues about the nature of your question, namely mentioning COM, I think I understand what you mean. COM can marshal only a subset of the types Delphi knows between a COM server and client. You can define enums in the TLB editor, but these are all of the type TOleEnum which basically is an integer type (LongWord). You can have a variable of the type TOleEnum any integer value you want and assign values of different enum types to each other. Not really type safe.
I can not think of a reason why Delphi's COM can't use the type safe enums instead, but it doesn't. I am afraid nothing much can be done about that. Maybe the changes in the TLB editor in the upcoming Delphi 2009 version might change that.
For the record: when the TLB editor is not used, Delphi is perfectly able to have interfaces with methods that take type safe enums as parameters.
A: I think I know why Borland chose not to use type safe enums in the TLB editor. Enums in COM can have arbitrary values, which Delphi has only supported since Delphi 6 (I think).
type
TSomeEnum = (Enum1 = 1, Enum2 = 6, Enum3 = 80); // Only since Delphi 6
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Best way to enumerate all available video codecs on Windows? I'm looking for a good way to enumerate all the Video codecs on a Windows XP/Vista machine.
I need to present the user with a set of video codecs, including the compressors and decompressors. The output would look something like
Available Decoders
DiVX Version 6.0
XVID
Motion JPEG
CompanyX's MPEG-2 Decoder
Windows Media Video
Available Encoders
DiVX Version 6.0
Windows Media Video
The problem that I am running into is that there is no reliable way to capture all of the decoders available to the system. For instance:
*
*You can enumerate all the decompressors using DirectShow, but this tells you nothing about the compressors (encoders).
*You can enumerate all the Video For Windows components, but you get no indication if these are encoders or decoders.
*There are DirectShow filters that may do the job for you perfectly well (Motion JPEG filter for example), but there is no indication that a particular DirectShow filter is a "video decoder".
Has anyone found a generalizes solution for this problem using any of the Windows APIs? Does the Windows Vista Media Foundation API solve any of these issues?
A: This is best handled by DirectShow.
DirectShow is currently a part of the platform SDK.
HRESULT extractFriendlyName( IMoniker* pMk, std::wstring& str )
{
assert( pMk != 0 );
IPropertyBag* pBag = 0;
HRESULT hr = pMk->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pBag );
if( FAILED( hr ) || pBag == 0 )
{
return hr;
}
VARIANT var;
var.vt = VT_BSTR;
hr = pBag->Read(L"FriendlyName", &var, NULL);
if( SUCCEEDED( hr ) && var.bstrVal != 0 )
{
str = reinterpret_cast<wchar_t*>( var.bstrVal );
SysFreeString(var.bstrVal);
}
pBag->Release();
return hr;
}
HRESULT enumerateDShowFilterList( const CLSID& category )
{
HRESULT rval = S_OK;
HRESULT hr;
ICreateDevEnum* pCreateDevEnum = 0; // volatile, will be destroyed at the end
hr = ::CoCreateInstance( CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, reinterpret_cast<void**>( &pCreateDevEnum ) );
assert( SUCCEEDED( hr ) && pCreateDevEnum != 0 );
if( FAILED( hr ) || pCreateDevEnum == 0 )
{
return hr;
}
IEnumMoniker* pEm = 0;
hr = pCreateDevEnum->CreateClassEnumerator( category, &pEm, 0 );
// If hr == S_FALSE, no error has occurred. In this case pEm is NULL, because
// no filter of this category exists, e.g. no video capture devices are connected to
// the computer or no codecs are installed.
assert( SUCCEEDED( hr ) && ((hr == S_OK && pEm != 0 ) || hr == S_FALSE) );
if( FAILED( hr ) )
{
pCreateDevEnum->Release();
return hr;
}
if( hr == S_OK && pEm != 0 ) // In this case pEm is != NULL
{
pEm->Reset();
ULONG cFetched;
IMoniker* pM = 0;
while( pEm->Next(1, &pM, &cFetched) == S_OK && pM != 0 )
{
std::wstring str;
if( SUCCEEDED( extractFriendlyName( pM, str ) ) )
{
// str contains the friendly name of the filter
// pM->BindToObject creates the filter
std::wcout << str << std::endl;
}
pM->Release();
}
pEm->Release();
}
pCreateDevEnum->Release();
return rval;
}
The following call enumerates all video compressors to the console :
enumerateDShowFilterList( CLSID_VideoCompressorCategory );
The MSDN page Filter Categories lists all other 'official' categories.
I hope that is a good starting point for you.
A:
The answer above doesn't account for decompressors. There is no CLSID_VideoDecompressorCategory. Is there a way to ask a filter if it is a video decompressor?
Not that I know of.
Most filters in this list are codecs, so contain both a encoder and decoder.
The filters in the CLSID_ActiveMovieCategories category are wrappers around the installed VfW filters.
(Some software companies create their own categories, so there may be 'non official' categories on some machines)
If you want to see all installed categories, use GraphEdit which is supplied with the DirectShow SDK.
GraphEdit itself is a great tool to see what DirectShow does under the hood. So maybe that may be a source of more information about the filters (and their interactions) on your system.
A: Another point I forgot.
The Windows Media Foundation is a toolkit for using WMV/WMA. It does not provide everything that DirectShow supports; it is really only an SDK for Windows Media.
There are bindings in WMV/WMA to DirectShow, so that you can use WM* files/streams in DirectShow applications.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: What does this javascript error mean? Permission denied to call method to Location.toString This error just started popping up all over our site.
Permission denied to call method to Location.toString
I'm seeing google posts that suggest that this is related to flash and our crossdomain.xml. What caused this to occur and how do you fix?
A: Are you using javascript to communicate between frames/iframes which point to different domains? This is not permitted by the JS "same origin/domain" security policy. Ie, if you have
<iframe name="foo" src="foo.com/script.js">
<iframe name="bar" src="bar.com/script.js">
And the script on bar.com tries to access window["foo"].Location.toString, you will get this (or similar) exceptions. Please also note that the same origin policy can also kick in if you have content from different subdomains. Here you can find a short and to the point explanation of it with examples.
A: You may have come across this posting, but it appears that a flash security update changed the behaviour of the crossdomain.xml, requiring you to specify a security policy to allow arbitrary headers to be sent from a remote domain. The Adobe knowledge base article (also referenced in the original post) is here.
A: This post suggests that there is one line that needs to be added to the crossdomain.xml file.
<allow-http-request-headers-from domain="*" headers="*"/>
A: This was likely caused by a change made in the Flash Player version released in early April. I'm not too sure about the specifics, but I assume there were security concerns with this functionality.
What you need to do is indeed add that to your crossdomain.xml (which should be in your server's webroot).
You can read more here: http://www.adobe.com/devnet/flashplayer/articles/flash_player9_security_update.html
A typical example of a crossdomain.xml is twitters, more info about how the file works can be found here.
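For reference, a complete permissive policy file would look roughly like this sketch (both tags are documented Flash policy elements; tighten the domain attributes for production rather than allowing everything):
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <allow-access-from domain="*" />
    <allow-http-request-headers-from domain="*" headers="*" />
</cross-domain-policy>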
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Using MVP - How to use Events Properly for Testing I've just started using the MVP pattern in the large ASP.NET application that I'm building (re-building actually) and I am having a hard time figuring out how I should be using Events applied to the view.
Say I have 2 drop down lists in a User Control, where one is dependent on the other's value:
<%@ Control Language="vb" AutoEventWireup="false" CodeBehind="ucTestMVP.ascx.vb" Inherits=".ucTestMVP" %>
<asp:DropDownList ID="ddlCountry" runat="server" AutoPostBack="True" />
<asp:DropDownList ID="ddlCity" runat="server" />
How should the AutoPostBack Event be defined in the interface? Should it be an event that is handled by the User Control like this:
Partial Public Class ucTestMVP
Inherits System.Web.UI.UserControl
Implements ITestMVPView
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
If Not Page.IsPostBack Then
Dim presenter As New TestMVPPresenter(Me)
presenter.InitView()
End If
End Sub
Private Sub ddlCountry_SelectedIndexChanged(ByVal sender As Object, ByVal e As System.EventArgs) Handles ddlCountry.SelectedIndexChanged
Dim presenter as New TestMVPPresenter(Me)
presenter.CountryDDLIndexChanged()
End Sub
End Class
Or should there be an event defined at the Interface? If this is the preferred pattern, how do I add events to be handled and used?
A: I don't know if there's a universally preferred pattern. I tend to prefer adding the event to the view interface and having the presenter respond to the view. I described this pattern in more detail here.
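As a rough sketch of what that looks like in VB.NET (the interface and member names below are illustrative, not from the original post):
Public Interface ITestMVPView
    ' Raised by the view whenever the country drop-down changes.
    Event CountryChanged As EventHandler
    ' Properties for reading the selected country and populating
    ' the city list would also live here.
End Interface
The user control raises the event from its SelectedIndexChanged handler, and the presenter subscribes once via AddHandler, so the postback plumbing never leaks into the presenter.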
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is code written in Vista 64 compatible on 32 bit os? We are getting new dev machines and moving up to Vista 64 Ultimate to take advantage of our 8gb ram. Our manager wants us to do all dev in 32bit virtual machines to make sure there will be no problems with our code moving into production.
Is there any way to guarantee the resultant programs will work on 32bit os's?
I don't mind using virtual machines, but I don't like how they force you back into a "Single" monitor type view. I like moving my VS toolbars off to my other monitor.
EDIT: We are using Visual Studio 2005 and 2008, VB.NET and/or C#
EDIT: Using Harpreet's answer, these are the steps I used to set my Visual Studio IDE to compile x86 / 32bit:
*
*Click Build and open Configuration Manager
*Select Active Solution Platform drop down list
*Select x86 if it is in the list and skip to step 5, if not Select <New...>
*In the New Solution Platform dialog, select x86 and press OK
*Verify the selected platform for all of your projects is x86
*Click Close.
Enjoy.
Thank you,
Keith
A: I do development on 64 bit machines for 32 bit Windows. It's not a problem. You should make sure that your projects are set to compile in x86 mode in order to be conservative. You'll want to go through each project in the solution and double check this. You could also use the AnyCPU setting but that's a little riskier since it will run differently on your dev machine than a 32 bit machine. You want to avoid the 64bit mode, of course.
The problems I've run into are drivers that don't work when the app is compiled for 64 bit (explicitly 64 bit or AnyCPU compiled and running on 64 bit Windows). Those problems are completely avoidable by sticking with x86 compilation. That should reveal all flaws on your dev machines.
Ideally, you could set up a build and test environment that could be executed against frequently on a 32 bit machine. That should reassure your management and let you avoid the VM as your desktop.
A: As long as you compile your executables as 32 bit, they will run on both 32 bit and 64 bit Windows machines (guaranteed). Using 64 bit dev machines has the advantage that you can start testing your code with 64 bit compilation (to check for things like pointers cast to 32 bit integers), this way making the transition to 64 bit easier in the future (should your company choose to do a 64 bit version).
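In the project file itself this corresponds to the PlatformTarget property; a sketch of the relevant fragment in a VS 2005/2008 .csproj or .vbproj:
<PropertyGroup>
    <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>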
A: Compiling for a 64bit OS is an option in the compiler. You can absolutely compile to a 32bit exe from within Vista 64 bit. When you run the app, you can then see in the TaskManager that there is a "*32" next to the process...this means it's 32bit ;)
I believe your managers need some more education on what 64bit OS really means :)
A: Not an answer to your question, but possibly a solution to your problem: VirtualBox (and probably others) supports "seamless integration" mode, which just gives you a second start bar and lets you drag windows around freely.
Also, and this is an answer to your question, it depends on your compile settings. You can compile for different environments, and you can perfectly compile 32-bit programs on a 64-bit system with Visual Studio. Can't tell you how, but I'm sure some Visual Studio guru could help you out.
A: We develop a 32-bit application using VS 2005 (2008 soon) and have just purchased some new machines with XP Pro x64 or Vista Business 64-bit on them so that we can take advantage of the extra RAM whilst holding a watching brief on the possibility of doing a 64-bit port if it becomes commercially necessary to do so. We haven't had any problems with doing this other than tweaking some scripts in our development environment etc.
Those developers who weren't included in this upgrade cycle still use 32-bit machines, so these should pick up problems when the unit tests and the application test suite are run as a matter of course before a check-in.
What we also do is to make sure that we have a set of "test build" machines made up of "typical" configurations (XP/Vista, 2/4/8 cores, etc.) that build and test sets of check-ins - we have various different test suites for stability, performance, etc. - before they are added to the integration area proper. Again, these haven't picked up any problems with running a 32-bit application built on a 64-bit OS.
Anyway, as others have already said, I wouldn't expect it to be a problem because it's the compiler that generates the appropriate code for the target OS regardless of the OS that the compiler is actually running on.
A: yeah, like adam was saying. There's 3 options: MSIL (default), x64, and x86. You can target x64 and it will generate dll's specifically for 64-bit systems, or you can do x86 which will run on 32-bit and 64-bit, but will have the same restrictions as 32-bit on a 64-bit system.
MSIL will basically let the JITer issue the platform specific instruction (at a slight performance penalty compared to a native image)
EDIT: no language, so i'm talking about .net framework languages like vb.net and c#, c++ is a completely different animal.
A: Found this today:
http://www.brianpeek.com/blog/archive/2007/11/13/x64-development-with-net.aspx
x64 Development with .NET
Earlier this year I made the switch to a 64-bit operating system - Vista Ultimate x64 to be exact. For the most part, this process has been relatively painless, but there have been a few hiccups along the way (x64 compatible drivers, mainly, but that's not the point of this discussion).
In the world of x64 development, there have been a few struggling points that I thought I'd outline here. This list will likely grow, so expect future posts on the matter.
In the wonderful world of .NET development, applications and assemblies can be compiled to target various platforms. By default, applications and assemblies are compiled as Any CPU in Visual Studio. In this scenario, the CLR will load the assembly as whatever the default target is for the machine it is being executed on. For example, when running an executable on an x64 machine, it will be run as a 64-bit process.
Visual Studio also provides for 3 specific platform targets: x86, x64 and Itanium (IA-64). When building an executable as a specific target, it will be loaded as a process of that type. For example, an x86-targeted executable run on an x64 machine will run as a 32-bit process using the 32-bit CLR and WOW64 layer. When assemblies are loaded at runtime, they can only be loaded by a process if their target matches that of the hosting process, or it is compiled as Any CPU. For example, if x64 were set as the target for an assembly, it can only be loaded by an x64 process.
This has come into play in a few scenarios for me:
*
*XNA - XNA is available as a set of 32-bit assemblies only. Therefore, when referencing the XNA assemblies, the executable/assembly using them must be targeted to the x86 platform. If it is targeted as x64 (or as Any CPU and run on a 64-bit machine), an error will be thrown when trying to load the XNA assemblies.
*Microsoft Robotics Studio - The XInputGamepadService uses XNA internally to talk to the Xbox 360 controller. See above.
*Managed DirectX - While this is already deprecated and being replaced with XNA, it still has its uses. The assemblies are not marked for a specific target, however I had difficulty with memory exceptions, especially with the Microsoft.DirectX.AudioVideoPlayback assembly.
*Phidgets - Depending on what library you download and when, it may or may not be marked as 32-bit only. The current version (11/8/07) is marked as such, and so requires a 32-bit process to host it.
The easiest way to determine if an executable or assembly is targeted to a specific platform is to use the corflags application. To use this, open a Visual Studio Command Prompt from your Start menu and run it against the assembly you wish to check.
The easiest way to determine if an executable or assembly is targeted to a specific platform is to use the corflags application. To use this, open a Visual Studio Command Prompt from your Start menu and run it against the assembly you wish to check.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Returning the sum of items depending on which type it is I have one field that I need to sum, let's say named items.
However, that field can be part of group a or b.
In the end I need to have all of the items summed for group a and group b.
When I say grouped, I mean there is a LEFT OUTER JOIN to another table: the previous table has a type for the items, and the one being joined has a group assigned for that item type.
Sorry guys, I'm a little new to SQL. I am going to try out what you have given me and get back to you.
OK, I feel like we are getting close, just not yet. Alain's: I can get them to separate, but the issue is I need to have both groups summed on the same row, which is difficult because I also have several LEFT OUTER JOINs involved.
Tyler's looks like it might work too, so I am trying to hash that out real fast.
Alain's seems to be the way to go, but I have to tweak it a little more.
A: Maybe I'm not understanding the complexity of what you're asking but... shouldn't this do?
SELECT groupname, SUM(value)
FROM items
WHERE groupname IN ('a', 'b')
GROUP BY groupname
And if you don't care which of a or b the item belongs to then this will do:
SELECT SUM(value)
FROM items
WHERE groupname IN ('a', 'b')
A: You want something like
SELECT column,SUM( column ) FROM table GROUP BY column
A: Is that (Tyler's answer) what the question meant, or is it simply this:
SELECT sum(item), groupingField FROM someTable GROUP BY groupingField
or even:
SELECT count(*), item FROM someTable GROUP BY item
which will produce results like this:
sum(item) | groupingField
-------------+-----------------------
71 | A
82 | B
Questioner, perhaps you could clarify which you meant or if I'm oversimplifying?
A: Try this:
SELECT B.[Group], COUNT(*) AS GroupCount
FROM Table1 A
LEFT JOIN Table2 B ON B.ItemType=A.ItemType
GROUP BY B.[Group]
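If, as the question's edits mention, both group totals must appear on the same row, a conditional sum over the same join is a common sketch (the items column name is assumed from the question):
SELECT SUM(CASE WHEN B.[Group] = 'a' THEN A.items ELSE 0 END) AS GroupATotal,
       SUM(CASE WHEN B.[Group] = 'b' THEN A.items ELSE 0 END) AS GroupBTotal
FROM Table1 A
LEFT JOIN Table2 B ON B.ItemType = A.ItemType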
A: Please refer to this image of an expense table:
SELECT category_id,SUM(amount) AS count FROM expense GROUP BY category_id;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Delayed Write Failed on Windows 2003 Clustered Fileshare I am trying to solve a persistent IO problem when we try to read or write to a Windows 2003 Clustered Fileshare. It is happening regularly and seems to be triggered by traffic. We are writing via .NET's FileStream object.
Basically we are writing from a Windows 2003 Server running IIS to a Windows 2003 file share cluster. When writing to the file share, the IIS server often gets two errors. One is an Application Popup from Windows, the other is a warning from MRxSmb. Both say the same thing:
[Delayed Write Failed] Windows was unable to save all the data for the file \Device\LanmanRedirector. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elswhere.
On reads, we are also getting errors, which are System.IO.IOException errors: "The specified network name is no longer available."
We have other servers writing more and larger files to this File Share Cluster without an issue. It's only coming from the one group of servers that the issue comes up. So it doesn't seem related to writing large files. We've applied all the hotfixes referenced in articles online dealing with this issue, and yet it continues.
Our network team ran Network Monitor and didn't see any packet loss, from what I understand, but as I wasn't present for that test I can't say that for certain.
Any ideas of where to check? I'm out of avenues to explore or tests to run. I'm guessing the issue is some kind of network problem, but as it's only happening when these servers connect to that File Share cluster, I'm not sure what kind of problem it might be.
This issue is awfully specific, and potentially hardware related, but any help you can give would be of assistance.
Eric Sipple
A: I've heard of AutoDisconnect causing similar issues (even if the device isn't idle). You may want to try disabling that on the server.
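If you want to try that, the documented knob is the server service's autodisconnect setting; for example, running
net config server /autodisconnect:-1
on the file server disables the idle-session timeout entirely.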
A: I am having similar problems:
*
*writing to a machine that is also part of a Windows 2003 R2 NLB cluster sometimes results in "Delayed Write Failed" or "the semaphore has timed out" or "the specified network name is no longer available"
*this is reproducible for the same files, even after rebooting all machines involved
*if I rename the problem-files (some of which are quite small), the problem remains
*if I write the files to another location (physical disk) on the same machine, the problem remains
*I uninstalled all anti-virus software, problem remains
*I have reset the tcp-ip stack, problem temporarily disappears, but after some time the problem returns for the same files
PARTLY SOLVED the problem:
I deleted (not stopped) the host from the NLB cluster. Problem solved.
Seems to have something to do with writing to a share on a server that is also part of a network load balancing cluster
I have not yet found other people posting NLB cluster related file write problems. However, I did find many posts complaining about similar problems, none of which seem to have been solved.
Anne
A: I've seen other people reporting the "delayed write failed" error. One recommendation was to adjust the size of the cache, there's a utility from sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb897561.aspx) that will allow you to do that.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Resize transparent images using C# Does anyone have the secret formula to resizing transparent images (mainly GIFs) without ANY quality loss whatsoever?
I've tried a bunch of stuff, the closest I get is not good enough.
Take a look at my main image:
http://www.thewallcompany.dk/test/main.gif
And then the scaled image:
http://www.thewallcompany.dk/test/ScaledImage.gif
//Internal resize for indexed colored images
void IndexedRezise(int xSize, int ySize)
{
BitmapData sourceData;
BitmapData targetData;
AdjustSizes(ref xSize, ref ySize);
scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat);
scaledBitmap.Palette = bitmap.Palette;
sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
ImageLockMode.ReadOnly, bitmap.PixelFormat);
try
{
targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize),
ImageLockMode.WriteOnly, scaledBitmap.PixelFormat);
try
{
xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width;
yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height;
sourceStride = sourceData.Stride;
sourceScan0 = sourceData.Scan0;
int targetStride = targetData.Stride;
System.IntPtr targetScan0 = targetData.Scan0;
unsafe
{
byte* p = (byte*)(void*)targetScan0;
int nOffset = targetStride - scaledBitmap.Width;
int nWidth = scaledBitmap.Width;
for (int y = 0; y < scaledBitmap.Height; ++y)
{
for (int x = 0; x < nWidth; ++x)
{
p[0] = GetSourceByteAt(x, y);
++p;
}
p += nOffset;
}
}
}
finally
{
scaledBitmap.UnlockBits(targetData);
}
}
finally
{
bitmap.UnlockBits(sourceData);
}
}
I'm using the above code, to do the indexed resizing.
Does anyone have improvement ideas?
A: If there's no requirement on preserving file type after scaling I'd recommend the following approach.
using (Image src = Image.FromFile("main.gif"))
using (Bitmap dst = new Bitmap(100, 129))
using (Graphics g = Graphics.FromImage(dst))
{
g.SmoothingMode = SmoothingMode.AntiAlias;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.DrawImage(src, 0, 0, dst.Width, dst.Height);
dst.Save("scale.png", ImageFormat.Png);
}
The result will have really nice anti aliased edges
If you must export the image in gif you're in for a ride; GDI+ doesn't play well with gif. See this blog post about it for more information
Edit: I forgot to dispose of the bitmaps in the example; it's been corrected
A: This is a basic resize function I've used for a few of my applications that leverages GDI+
/// <summary>
/// Resize image with GDI+ so that image is nice and clear with required size.
/// </summary>
/// <param name="SourceImage">Image to resize</param>
/// <param name="NewHeight">New height to resize to.</param>
/// <param name="NewWidth">New width to resize to.</param>
/// <returns>Image object resized to new dimensions.</returns>
/// <remarks></remarks>
public static Image ImageResize(Image SourceImage, Int32 NewHeight, Int32 NewWidth)
{
System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(NewWidth, NewHeight, SourceImage.PixelFormat);
if (bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format1bppIndexed ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format4bppIndexed ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format8bppIndexed ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Undefined ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.DontCare ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format16bppArgb1555 ||
    bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format16bppGrayScale)
{
throw new NotSupportedException("Pixel format of the image is not supported.");
}
System.Drawing.Graphics graphicsImage = System.Drawing.Graphics.FromImage(bitmap);
graphicsImage.SmoothingMode = Drawing.Drawing2D.SmoothingMode.HighQuality;
graphicsImage.InterpolationMode = Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
graphicsImage.DrawImage(SourceImage, 0, 0, bitmap.Width, bitmap.Height);
graphicsImage.Dispose();
return bitmap;
}
I don't remember off the top of my head if it will work with GIFs, but you can give it a try.
Note: I can't take full credit for this function. I pieced a few things together from some other samples online and made it work to my needs 8^D
A: I think the problem is that you're doing a scan line-based resize, which is going to lead to jaggies no matter how hard you tweak it. Good image resize quality requires you to do some more work to figure out the average color of the pre-resized pixels that your resized pixel covers.
The guy who runs this website has a blog post that discusses a few image resizing algorithms. You probably want a bicubic image scaling algorithm.
Better Image Resizing
A: For anyone that may be trying to use Markus Olsson's solution to dynamically resize images and write them out to the Response Stream.
This will not work:
Response.ContentType = "image/png";
dst.Save( Response.OutputStream, ImageFormat.Png );
But this will:
Response.ContentType = "image/png";
using (MemoryStream stream = new MemoryStream())
{
dst.Save( stream, ImageFormat.Png );
stream.WriteTo( Response.OutputStream );
}
A: While PNG is definitely better than GIF, occasionally there is a use case for needing to stay in GIF format.
With GIF or 8-bit PNG, you have to address the problem of quantization.
Quantization is where you choose which 256 (or fewer) colors will best preserve and represent the image, and then turn the RGB values back into indexes. When you perform a resize operation, the ideal color palette changes, as you are mixing colors and changing balances.
For slight resizes, like 10-30%, you may be OK preserving the original color palette.
However, in most instances you'll need to re-quantize.
The primary two algorithms to pick from are Octree and nQuant. Octree is very fast and does a very good job, especially if you can overlay a smart dithering algorithm. nQuant requires at least 80MB of RAM to perform an encode (it builds a complete histogram), and is typically 20-30X slower (1-5 seconds per encode on an average image). However, it sometimes produces higher image quality than Octree, since it doesn't 'round' values to maintain consistent performance.
When implementing transparent GIF and animated GIF support in the imageresizing.net project, I chose Octree. Transparency support isn't hard once you have control of the image palette.
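To make that concrete, here is a minimal sketch of the resize-then-requantize flow. Note that .NET has no built-in octree quantizer; OctreeQuantizer below is a placeholder for the widely circulated MSDN sample quantizer (or your own implementation), so treat the whole thing as illustrative:
using (Bitmap rgb = new Bitmap(newWidth, newHeight, PixelFormat.Format32bppArgb))
{
    using (Graphics g = Graphics.FromImage(rgb))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, newWidth, newHeight); // resize in full 32-bit color first
    }
    // Re-quantize down to 255 colors, reserving one palette slot for transparency.
    OctreeQuantizer quantizer = new OctreeQuantizer(255, 8); // placeholder type, see note above
    using (Bitmap indexed = quantizer.Quantize(rgb))
    {
        indexed.Save("scaled.gif", ImageFormat.Gif);
    }
}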
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: How do I tell Maven to use the latest version of a dependency? In Maven, dependencies are usually set up like this:
<dependency>
<groupId>wonderful-inc</groupId>
<artifactId>dream-library</artifactId>
<version>1.2.3</version>
</dependency>
Now, if you are working with libraries that have frequent releases, constantly updating the <version> tag can be somewhat annoying. Is there any way to tell Maven to always use the latest available version (from the repository)?
A: Unlike others I think there are many reasons why you might always want the latest version. Particularly if you are doing continuous deployment (we sometimes have like 5 releases in a day) and don't want to do a multi-module project.
What I do is make Hudson/Jenkins do the following for every build:
mvn clean versions:use-latest-versions scm:checkin deploy -Dmessage="update versions" -DperformRelease=true
That is I use the versions plugin and scm plugin to update the dependencies and then check it in to source control. Yes I let my CI do SCM checkins (which you have to do anyway for the maven release plugin).
You'll want to setup the versions plugin to only update what you want:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>versions-maven-plugin</artifactId>
<version>1.2</version>
<configuration>
<includesList>com.snaphop</includesList>
<generateBackupPoms>false</generateBackupPoms>
<allowSnapshots>true</allowSnapshots>
</configuration>
</plugin>
I use the release plugin to do the release which takes care of -SNAPSHOT and validates that there is a release version of -SNAPSHOT (which is important).
If you do what I do you will get the latest version for all snapshot builds and the latest release version for release builds. Your builds will also be reproducible.
Update
I noticed some comments asking about specifics of this workflow. I will say we don't use this method anymore, and the big reason why is that the maven versions plugin is buggy and, in general, inherently flawed.
It is flawed because, for the versions plugin to adjust versions, all the versions currently referenced in the pom need to exist in the repository. That is, the versions plugin cannot update to the latest version of anything if it can't find the version referenced in the pom. This is actually rather annoying, as we often clean up old versions for disk space reasons.
Really you need a separate tool from maven to adjust the versions (so you don't depend on the pom file to run correctly). I have written such a tool in the lowly language that is Bash. The script will update the versions like the versions plugin and check the pom back into source control. It also runs like 100x faster than the mvn versions plugin. Unfortunately it isn't written in a manner for public usage, but if people are interested I could make it so and put it in a gist or on github.
Going back to workflow as some comments asked about that this is what we do:
*
*We have 20 or so projects in their own repositories with their own jenkins jobs
*When we release the maven release plugin is used. The workflow of that is covered in the plugin's documentation. The maven release plugin sort of sucks (and I'm being kind) but it does work. One day we plan on replacing this method with something more optimal.
*When one of the projects gets released, jenkins then runs a special job we will call the update all versions job (how jenkins knows it's a release is a complicated matter, in part because the maven jenkins release plugin is pretty crappy as well).
*The update all versions job knows about all the 20 projects. It is actually an aggregator pom to be specific with all the projects in the modules section in dependency order. Jenkins runs our magic groovy/bash foo that will pull all the projects update the versions to the latest and then checkin the poms (again done in dependency order based on the modules section).
*For each project if the pom has changed (because of a version change in some dependency) it is checked in and then we immediately ping jenkins to run the corresponding job for that project (this is to preserve build dependency order otherwise you are at the mercy of the SCM Poll scheduler).
At this point I'm of the opinion it is a good thing to have the release and auto version a separate tool from your general build anyway.
Now you might think maven sort of sucks because of the problems listed above but this actually would be fairly difficult with a build tool that does not have a declarative easy to parse extendable syntax (aka XML).
In fact we add custom XML attributes through namespaces to help hint bash/groovy scripts (e.g. don't update this version).
A: NOTE:
The mentioned LATEST and RELEASE metaversions have been dropped for plugin dependencies in Maven 3 "for the sake of reproducible builds", over 6 years ago.
(They still work perfectly fine for regular dependencies.)
For plugin dependencies please refer to this Maven 3 compliant solution.
If you always want to use the newest version, Maven has two keywords you can use as an alternative to version ranges. You should use these options with care as you are no longer in control of the plugins/dependencies you are using.
When you depend on a plugin or a dependency, you can use a version value of LATEST or RELEASE. LATEST refers to the latest released or snapshot version of a particular artifact, the most recently deployed artifact in a particular repository. RELEASE refers to the last non-snapshot release in the repository. In general, it is not a best practice to design software which depends on a non-specific version of an artifact. If you are developing software, you might want to use RELEASE or LATEST as a convenience so that you don't have to update version numbers when a new release of a third-party library is released. When you release software, you should always make sure that your project depends on specific versions to reduce the chances of your build or your project being affected by a software release not under your control. Use LATEST and RELEASE with caution, if at all.
See the POM Syntax section of the Maven book for more details. Or see this doc on Dependency Version Ranges, where:
*
*A square bracket ( [ & ] ) means "closed" (inclusive).
*A parenthesis ( ( & ) ) means "open" (exclusive).
Here's an example illustrating the various options. In the Maven repository, com.foo:my-foo has the following metadata:
<?xml version="1.0" encoding="UTF-8"?>
<metadata>
<groupId>com.foo</groupId>
<artifactId>my-foo</artifactId>
<version>2.0.0</version>
<versioning>
<release>1.1.1</release>
<versions>
<version>1.0</version>
<version>1.0.1</version>
<version>1.1</version>
<version>1.1.1</version>
<version>2.0.0</version>
</versions>
<lastUpdated>20090722140000</lastUpdated>
</versioning>
</metadata>
If a dependency on that artifact is required, you have the following options (other version ranges can be specified of course, just showing the relevant ones here):
Declare an exact version (will always resolve to 1.0.1):
<version>[1.0.1]</version>
Declare an explicit version (will always resolve to 1.0.1 unless a collision occurs, when Maven will select a matching version):
<version>1.0.1</version>
Declare a version range for all 1.x (will currently resolve to 1.1.1):
<version>[1.0.0,2.0.0)</version>
Declare an open-ended version range (will resolve to 2.0.0):
<version>[1.0.0,)</version>
Declare the version as LATEST (will resolve to 2.0.0) (removed from maven 3.x)
<version>LATEST</version>
Declare the version as RELEASE (will resolve to 1.1.1) (removed from maven 3.x):
<version>RELEASE</version>
Note that by default your own deployments will update the "latest" entry in the Maven metadata, but to update the "release" entry, you need to activate the "release-profile" from the Maven super POM. You can do this with either "-Prelease-profile" or "-DperformRelease=true"
It's worth emphasising that any approach that allows Maven to pick the dependency versions (LATEST, RELEASE, and version ranges) can leave you open to build time issues, as later versions can have different behaviour (for example the dependency plugin has previously switched a default value from true to false, with confusing results).
It is therefore generally a good idea to define exact versions in releases. As Tim's answer points out, the maven-versions-plugin is a handy tool for updating dependency versions, particularly the versions:use-latest-versions and versions:use-latest-releases goals.
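For example, a one-off update of every dependency in a pom to its latest release is just:
mvn versions:use-latest-releases
(Review the modified pom afterwards; by default the plugin also leaves pom.xml.versionsBackup files behind so you can revert.)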
A: By the time this question was posed there were some kinks with version ranges in maven, but these have been resolved in newer versions of maven.
This article captures very well how version ranges work and best practices to better understand how maven understands versions: https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm#MAVEN8855
A: The truth is that even in 3.x it still works; surprisingly, the project builds and deploys. But the LATEST/RELEASE keyword causes problems in m2e and Eclipse all over the place, and projects that depend on a dependency deployed through LATEST/RELEASE fail to recognize its version.
It will also cause problems if you try to define the version as a property and reference it elsewhere.
So the conclusion is use the versions-maven-plugin if you can.
A: Sometimes you don't want to use version ranges, because it seems that they are "slow" to resolve your dependencies, especially when there is continuous delivery in place and there are tons of versions - mainly during heavy development.
One workaround would be to use the versions-maven-plugin. For example, you can declare a property:
<properties>
<myname.version>1.1.1</myname.version>
</properties>
and add the versions-maven-plugin to your pom file:
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>versions-maven-plugin</artifactId>
<version>2.3</version>
<configuration>
<properties>
<property>
<name>myname.version</name>
<dependencies>
<dependency>
<groupId>group-id</groupId>
<artifactId>artifact-id</artifactId>
<version>latest</version>
</dependency>
</dependencies>
</property>
</properties>
</configuration>
</plugin>
</plugins>
</build>
Then, in order to update the dependency, you have to execute the goals:
mvn versions:update-properties validate
If there is a version newer than 1.1.1, it will tell you:
[INFO] Updated ${myname.version} from 1.1.1 to 1.3.2
A: If you want Maven to use the latest version of a dependency, you can use the Versions Maven Plugin; Tim has already given a good answer on how to use it, so follow his answer.
But as a developer, I will not recommend this type of practices. WHY?
answer to why is already given by Pascal Thivent in the comment of the question
I really don't recommend this practice (nor using version ranges) for
the sake of build reproducibility. A build that starts to suddenly
fail for an unknown reason is way more annoying than updating manually
a version number.
I will recommend this type of practice:
<properties>
<spring.version>3.1.2.RELEASE</spring.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
</dependencies>
it is easy to maintain and easy to debug. You can update your POM in no time.
A: The dependencies syntax is located at the Dependency Version Requirement Specification documentation. Here it is is for completeness:
Dependencies' version element define version requirements, used to compute effective dependency version. Version requirements have the following syntax:
*
*1.0: "Soft" requirement on 1.0 (just a recommendation, if it matches all other ranges for the dependency)
*[1.0]: "Hard" requirement on 1.0
*(,1.0]: x <= 1.0
*[1.2,1.3]: 1.2 <= x <= 1.3
*[1.0,2.0): 1.0 <= x < 2.0
*[1.5,): x >= 1.5
*(,1.0],[1.2,): x <= 1.0 or x >= 1.2; multiple sets are comma-separated
*(,1.1),(1.1,): this excludes 1.1 (for example if it is known not to
work in combination with this library)
In your case, you could do something like <version>[1.2.3,)</version>
A: Now I know this topic is old, but reading the question and the OP supplied answer it seems the Maven Versions Plugin might have actually been a better answer to his question:
In particular the following goals could be of use:
*
*versions:use-latest-versions searches the pom for all versions
which have been a newer version and
replaces them with the latest
version.
*versions:use-latest-releases searches the pom for all non-SNAPSHOT
versions which have been a newer
release and replaces them with the
latest release version.
*versions:update-properties updates properties defined in a
project so that they correspond to
the latest available version of
specific dependencies. This can be
useful if a suite of dependencies
must all be locked to one version.
The following other goals are also provided:
*
*versions:display-dependency-updates scans a project's dependencies and
produces a report of those
dependencies which have newer
versions available.
*versions:display-plugin-updates scans a project's plugins and
produces a report of those plugins
which have newer versions available.
*versions:update-parent updates the parent section of a project so
that it references the newest
available version. For example, if
you use a corporate root POM, this
goal can be helpful if you need to
ensure you are using the latest
version of the corporate root POM.
*versions:update-child-modules updates the parent section of the
child modules of a project so the
version matches the version of the
current project. For example, if you
have an aggregator pom that is also
the parent for the projects that it
aggregates and the children and
parent versions get out of sync, this
mojo can help fix the versions of the
child modules. (Note you may need to
invoke Maven with the -N option in
order to run this goal if your
project is broken so badly that it
cannot build because of the version
mis-match).
*versions:lock-snapshots searches the pom for all -SNAPSHOT
versions and replaces them with the
current timestamp version of that
-SNAPSHOT, e.g. -20090327.172306-4
*versions:unlock-snapshots searches the pom for all timestamp
locked snapshot versions and replaces
them with -SNAPSHOT.
*versions:resolve-ranges finds dependencies using version ranges and
resolves the range to the specific
version being used.
*versions:use-releases searches the pom for all -SNAPSHOT versions
which have been released and replaces
them with the corresponding release
version.
*versions:use-next-releases searches the pom for all non-SNAPSHOT
versions which have been a newer
release and replaces them with the
next release version.
*versions:use-next-versions searches the pom for all versions
which have been a newer version and
replaces them with the next version.
*versions:commit removes the pom.xml.versionsBackup files. Forms
one half of the built-in "Poor Man's
SCM".
*versions:revert restores the pom.xml files from the
pom.xml.versionsBackup files. Forms
one half of the built-in "Poor Man's
SCM".
Just thought I'd include it for any future reference.
A: My solution in Maven 3.5.4, using Nexus, in Eclipse:
<dependency>
<groupId>yilin.sheng</groupId>
<artifactId>webspherecore</artifactId>
<version>LATEST</version>
</dependency>
Then in Eclipse: Alt + F5, and choose the force update of snapshots/releases.
It works for me.
A: Please take a look at this page (section "Dependency Version Ranges"). What you might want to do is something like
<version>[1.2.3,)</version>
These version ranges are implemented in Maven2.
A: Are you possibly depending on development versions that obviously change a lot during development?
Instead of incrementing the version of development releases, you could just use a snapshot version that you overwrite when necessary, which means you wouldn't have to change the version tag on every minor change. Something like 1.0-SNAPSHOT...
But maybe you are trying to achieve something else ;)
A: Whoever is using LATEST, please make sure you pass -U, otherwise the latest snapshot won't be pulled.
mvn -U dependency:copy -Dartifact=com.foo:my-foo:LATEST
// pull the latest snapshot for my-foo from all repositories
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "888"
} |
Q: Can you use CruiseControl to build Cocoa/Objective-C projects? Has anyone ever set up Cruise Control to build an OS X Cocoa/Objective-C project?
If so, is there a preferred flavor of CruiseControl (CruiseControl.rb or just regular CruiseControl) that would be easier to do this with.
I currently have a Ruby rake file that has steps for doing building and running tests, and wanted to automate this process after doing a checkin.
Also, does CruiseControl have support for git? I couldn't find anything on the website for this.
A: Yes, you just run xcode builds via the command line (xcodebuild) which makes it simple to target from CC via an ant <exec>. I've been using just regular CC, not the ruby version and it works fine. Here's a barebones example:
<project name="cocoathing" default="build">
<target name="build">
<exec executable="xcodebuild" dir="CocoaThing" failonerror="true">
<arg line="-target CocoaThing -buildstyle Deployment build" />
</exec>
</target>
</project>
More info on xcodebuild
And there does appear to be a standard git object here, but I don't use git so I can't tell you much more than that!
A: Yes, CruiseControl has support for git.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Is it possible to find code coverage in ColdFusion? I am trying to be a "good" programmer and have unit tests for my ColdFusion application but haven't been able to find a code coverage tool that can tie into the test that I'm using. For those of you who do unit tests on your ColdFusion code, how have you approached this problem?
A: With ColdFusion Builder you can use Rancho for code coverage.
http://forta.com/blog/index.cfm/2012/5/25/ColdFusion-Code-Coverage-With-Rancho
A: Many have asked, but to date there remains no ColdFusion code coverage tool.
The latest ColdFusion features .NET integration in addition to J2EE integration, so if you require code coverage metrics (ie, customer requirements) then (aside from choosing something other than ColdFusion) you might define what parts need such coverage and break them out into .NET or J2EE objects which you can profile with readily available tools.
Alternately, you can do some analysis yourself - use a parser to insert cftrace tags after every control statement and then parse the results.
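For example, the injected markers could be as simple as this (cftrace's category and text attributes are standard; the coverage convention and the isEligible variable are made up for illustration):
<cfif isEligible>
    <cftrace category="coverage" text="branch taken: isEligible, line 42">
    ...
</cfif>
Parsing the generated trace log then tells you which markers (and hence which branches) were never hit.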
Lastly, ColdFusion does have some basic stack trace ability - I don't know if it goes deeper than that, but it's worth exploring to see what you can parse from that.
I hope this helps!
-Adam Davis
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How would you use Java to handle various XML documents? I'm looking for the best method to parse various XML documents using a Java application. I'm currently doing this with SAX and a custom content handler and it works great - zippy and stable.
I've decided to explore the option of having the same program, that currently receives a single format XML document, receive two additional XML document formats, with various XML element changes. I was hoping to just swap out the ContentHandler with an appropriate one based on the first "startElement" in the document... but, uh-duh, the ContentHandler is set and then the document is parsed!
... constructor ...
{
SAXParserFactory spf = SAXParserFactory.newInstance();
try {
SAXParser sp = spf.newSAXParser();
parser = sp.getXMLReader();
parser.setErrorHandler(new MyErrorHandler());
} catch (Exception e) {}
... parse StringBuffer ...
try {
parser.setContentHandler(pP);
parser.parse(new InputSource(new StringReader(xml.toString())));
return true;
} catch (IOException e) {
e.printStackTrace();
} catch (SAXException e) {
e.printStackTrace();
}
...
So, it doesn't appear that I can do this in the way I initially thought I could.
That being said, am I looking at this entirely wrong? What is the best method to parse multiple, discrete XML documents with the same XML handling code? I tried to ask in a more general post earlier... but, I think I was being too vague. For speed and efficiency purposes I never really looked at DOM, because these XML documents are fairly large and the system receives about 1200 every few minutes. It's just a one-way send of information.
At the risk of making this question too long and adding to my confusion, the following is a mockup of the various XML documents that I would like to have a single SAX, StAX, or ?? parser cleanly deal with.
products.xml:
<products>
<product>
<id>1</id>
<name>Foo</name>
</product>
<product>
<id>2</id>
<name>bar</name>
</product>
</products>
stores.xml:
<stores>
<store>
<id>1</id>
<name>S1A</name>
<location>CA</location>
</store>
<store>
<id>2</id>
<name>A1S</name>
<location>NY</location>
</store>
</stores>
managers.xml:
<managers>
<manager>
<id>1</id>
<name>Fen</name>
<store>1</store>
</manager>
<manager>
<id>2</id>
<name>Diz</name>
<store>2</store>
</manager>
</managers>
A: As I understand it, the problem is that you don't know what format the document is prior to parsing. You could use a delegate pattern. I'm assuming you're not validating against a DTD/XSD/etcetera and that it is OK for the DefaultHandler to have state.
public class DelegatingHandler extends DefaultHandler {
private Map<String, DefaultHandler> saxHandlers;
private DefaultHandler delegate = null;
public DelegatingHandler(Map<String, DefaultHandler> delegates) {
saxHandlers = delegates;
}
@Override
public void startElement(String uri, String localName, String name,
Attributes attributes) throws SAXException {
if(delegate == null) {
delegate = saxHandlers.get(name);
}
delegate.startElement(uri, localName, name, attributes);
}
@Override
public void endElement(String uri, String localName, String name)
throws SAXException {
delegate.endElement(uri, localName, name);
}
    //etcetera... (implement the remaining ContentHandler methods by delegating the same way)
}
A: You've done a good job of explaining what you want to do but not why. There are several XML frameworks that simplify marshalling and unmarshalling Java objects to/from XML.
The simplest is Commons Digester which I typically use to parse configuration files. But if you are want to deal with Java objects then you should look at Castor, JiBX, JAXB, XMLBeans, XStream, or something similar. Castor or JiBX are my two favourites.
A: I have tried the SAXParser once, but once I found XStream I never went back to it. With XStream you can create Java Objects and convert them to XML. Send them over and use XStream to recreate the object. Very easy to use, fast, and creates clean XML.
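For reference, the whole XStream round trip is only a few lines; the Product class and its alias below are made-up names for illustration:
XStream xstream = new XStream();
xstream.alias("product", Product.class);           // map the XML element name to the class
String xml = xstream.toXML(new Product(1, "Foo")); // object -> XML
Product p = (Product) xstream.fromXML(xml);        // XML -> object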
Either way you have to know what data you're going to receive from the XML file. You can send them over in different ways to know which parser to use. Or have a data object that can hold everything, but where only one structure is populated (product/store/managers). Maybe something like:
public class DataStructure {
List<ProductStructure> products;
List<StoreStructure> stores;
List<ManagerStructure> managers;
...
public int getProductCount() {
return products.size(); // List exposes size(), not length()
}
...
}
And with XStream, convert to XML, send it over, and then recreate the object. Then do what you want with it.
A: See the documentation for XMLReader.setContentHandler(), it says:
Applications may register a new or different handler in the middle of a parse, and the SAX parser must begin using the new handler immediately.
Thus, you should be able to create a SelectorContentHandler that consumes events until the first startElement event, based on that changes the ContentHandler on the XML reader, and passes the first start element event to the new content handler. You just have to pass the XMLReader to the SelectorContentHandler in the constructor. If you need all the events to be passed to the vocabulary-specific content handler, the SelectorContentHandler has to cache the events and then pass them, but in most cases this is not needed.
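A rough sketch of that idea (untested; the class and the handler map are invented here, only the setContentHandler behaviour comes from the SAX spec):
// needs org.xml.sax.*, org.xml.sax.helpers.DefaultHandler, java.util.Map
class SelectorContentHandler extends DefaultHandler {
    private final XMLReader reader;
    private final Map<String, ContentHandler> handlers; // root element name -> handler

    SelectorContentHandler(XMLReader reader, Map<String, ContentHandler> handlers) {
        this.reader = reader;
        this.handlers = handlers;
    }

    @Override
    public void startElement(String uri, String localName, String name, Attributes atts)
            throws SAXException {
        ContentHandler real = handlers.get(name);
        reader.setContentHandler(real);  // per the SAX contract this takes effect immediately
        // (replay startDocument() here too if your handlers depend on it)
        real.startElement(uri, localName, name, atts); // replay the event we consumed
    }
}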
On a side note, I've lately used XOM in almost all my projects to handle XML, and thus far performance hasn't been an issue.
A: JAXB. The Java Architecture for XML Binding. Basically you create an xsd defining your XML layout (I believe you could also use a DTD). Then you pass the XSD to the JAXB compiler and the compiler creates Java classes to marshal and unmarshal your XML document into Java objects. It's really simple.
BTW, there are command line options to jaxb to specify the package name you want to place the resulting classes in, etc.
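Once the classes are generated, unmarshalling is roughly this (Products stands for whatever root class the JAXB compiler generated from your schema; imports come from javax.xml.bind and java.io):
JAXBContext ctx = JAXBContext.newInstance(Products.class);
Unmarshaller u = ctx.createUnmarshaller();
Products products = (Products) u.unmarshal(new File("products.xml"));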
A: If you want more dynamic handling, a Stax approach would probably work better than SAX.
That's still quite low-level, though; if you want a simpler approach, XStream and JAXB are my favorites. But they do require quite rigid objects to map to.
A: Agree with StaxMan, who interestingly enough wants you to use Stax. It's a pull based parser instead of the push you are currently using. This would require some significant changes to your code though.
A: :-)
Yes, I have some bias towards Stax. But as I said, oftentimes data binding is more convenient than streaming solution. But if it's streaming you want, and don't need pipelining (of multiple filtering stages), Stax is simpler than SAX.
One more thing: as good as XOM is (wrt alternatives), often Tree Model is not the right thing to use if you are not dealing with "document-centric" xml (~= xhtml pages, docbook, open office docs).
For data interchange, config files etc data binding is more convenient, more efficient, more natural. Just say no to tree models like DOM for these use cases.
So, JAXB, XStream, JibX are good. Or, for more acquired taste, digester, castor, xmlbeans.
A: VTD-XML is known for being the best XML processing technology for heavy duty XML processing. See the reference below for a proof
http://sdiwc.us/digitlib/journal_paper.php?paper=00000582.pdf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Difference between the Apache HTTP Server and Apache Tomcat? What is the difference in terms of functionality between the Apache HTTP Server and Apache Tomcat?
I know that Tomcat is written in Java and the HTTP Server is in C, but other than that I do not really know how they are distinguished. Do they have different functionality?
A: *
*Apache is a general-purpose http server, which supports a number of advanced options that Tomcat doesn't.
*Although Tomcat can be used as a general purpose http server, you can also set up Apache and Tomcat to work together, with Apache serving static content and forwarding the requests for dynamic content to Tomcat (see the sketch below).
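A minimal httpd fragment for that split setup might look like this (assuming mod_proxy and mod_proxy_http are loaded; the context path and port are illustrative):
ProxyPass        /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
Static files stay under the Apache document root, and everything below /myapp is handed to Tomcat.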
A: Apache Tomcat is used to deploy your Java Servlets and JSPs. So in your Java project you can build your WAR (short for Web ARchive) file, and just drop it in Tomcat's webapps directory.
So basically Apache is an HTTP Server, serving HTTP. Tomcat is a Servlet and JSP Server serving Java technologies.
Tomcat includes Catalina, which is a servlet container. A servlet, at the end, is a Java class. JSP files (which are similar to PHP, and older ASP files) are generated into Java code (HttpServlet), which is then compiled to .class files by the server and executed by the Java virtual machine.
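For illustration, the kind of Java class Tomcat hosts can be as small as this:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html");
        resp.getWriter().println("<h1>Served by Tomcat</h1>");
    }
}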
A: Well, Apache is an HTTP web server, whereas Tomcat is also a web server for Servlets and JSP.
Moreover, Apache is preferred over Apache Tomcat in real-world use.
A: Tomcat is primarily an application server, which serves requests to custom-built Java servlets or JSP files on your server. It is usually used in conjunction with the Apache HTTP server (at least in my experience). Use it to manually process incoming requests.
The HTTP server, by itself, is best for serving up static content... html files, images, etc.
A: An Apache server is an HTTP server which can serve any simple HTTP request, whereas a Tomcat server is actually a servlet container which can serve Java servlet requests.
The web server [Apache] processes web client (web browser) requests and forwards them to the servlet container [Tomcat]; the container processes the requests and sends a response, which gets forwarded by the web server to the web client [browser].
Also you can check this link for more clarification:-
https://sites.google.com/site/sureshdevang/servlet-architecture
Also check this answer for further researching :-
https://softwareengineering.stackexchange.com/a/221092
A: If you are using Java technology (Servlet/JSP) for making your web application, you will probably use Apache Tomcat.
However, if you are using other technologies like Perl, PHP or Ruby, it's better (easier) to use Apache HTTP Server.
A: In addition to the fine answers above, I think it should be said that Tomcat has its own HTTP server built into it, and is fully functional at serving static content too. Depending on your Java virtual machine configuration, it can actually outperform going through traditional connectors in Apache such as mod_proxy and mod_jk.
That said, a fully optimized Tomcat server should serve static files fast, and if you have Java servlets, JSPs and ColdFusion files in addition to static content, you may find Tomcat does an excellent job by itself.
A: Apache is an HTTP web server which serves content over HTTP.
Apache Tomcat is a Java servlet container. It has the same features as a web server, but is customized to execute Java servlets and JSP pages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "681"
} |
Q: Open 2 Visio diagrams in different windows I would like to know if I can open 2 different diagrams using MS Visio, with each diagram in its own window. I've tried several ways, but I always end up with 1 Visio window ...
I'm using a triple monitor setup and I'd like to put one diagram to each side of my main monitor.
[]'s
André Casteliano
PS: I'm using Visio 2007 here.
A: Visio 2005 allows you to open visio multiple times - does this not work in 2007? Try opening a visio document, and then starting another instance of visio from the Start-->Programs menu.
If not, read on...
Visio is an MDI interface - you'll need to stretch the whole visio window across the two monitors in question, then choose the "Window" menu and select "Tile" after you've opened your two documents.
Alternately, in the upper right hand corner just below the application minimize, restore and close buttons you'll find the document minimize, restore and close. Choose restore, and you can manipulate the windows inside the main visio app.
Hope this helps!
-Adam Davis
A: This allows you to open two or more instances of Visio so that you can view different Visio docs at the same time without going through the process to stretch the Visio window across two screens. I found this to be a simpler method and a bit easier to manipulate. If it doesn't work on your first try recheck the registry setting. It changed back on me a couple of times before it took.
To implement the new behaviour, use the following registry trick:
*
*Open Microsoft Visio.
*Go to Tools -> Options -> Advanced or File -> Options -> Advanced in newer versions.
*Check the Put all settings in Windows Registry option.
*Close Microsoft Visio
*Run Registry Editor (regedit).
*Navigate to the following registry key:
HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Visio\Application\
Note: The value 12.0 in the key can be different.
(e.g. for Visio 2010: 14.0, Visio 2019: 16.0)
*In the right pane, right click on SingleInstanceFileOpen, and then select Modify. Update the value of SingleInstanceFileOpen from 1 to 0
If the value SingleInstanceFileOpen doesn't exist, it can be created as a type REG_SZ.
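Equivalently, you can set (or create) the value from a command prompt; adjust 12.0 to match your Visio version:
reg add "HKCU\Software\Microsoft\Office\12.0\Visio\Application" /v SingleInstanceFileOpen /t REG_SZ /d 0 /f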
A: Seems like my installation of Visio is the problem. I've tried on another computer here and it allowed me to open 2 instances of the software.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: What do I need to do to implement an "out of proc" COM server in C#? I am trying to implement an "out of proc" COM server written in C#. How do I do this?
I need the C# code to be "out of proc" from my main C++ application, because I cannot load the .NET runtime into my main process space
WHY?:
My C++ code is in a DLL that is loaded into many different customer EXE's, some of which use different versions of the .NET runtime. Since there can only be one runtime loaded into a single process, my best bet seems to be to put my C# code into another process.
A: You can create COM+ components using System.EnterpriseServices.ServicedComponent. Consequently, you'll be able to use out-of-proc as well as in-proc (client) component activation, along with all the COM+ benefits: pooling, remoting, running as a Windows service, etc.
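A minimal sketch of a server-activated (out-of-proc) component might look like the following; the names are illustrative, the assembly must be strong-named, and it is registered with regsvcs.exe:
using System.EnterpriseServices;

[assembly: ApplicationName("MyComServer")]
[assembly: ApplicationActivation(ActivationOption.Server)] // Server = out-of-proc, hosted by dllhost.exe

public class EchoComponent : ServicedComponent
{
    public string Echo(string input)
    {
        return input; // invoked cross-process via COM+
    }
}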
A: Here we can read that it is possible, but the exe will be loaded as a library and not started in its own process like an exe. I don't know if that is a problem for you? It also contains some possible solutions if you do want to make it act like a real out-of-process COM server. But maybe using another way of inter-process communication is better, like .NET Remoting.
A: I cannot recommend this as the way, but you could create a COM-callable wrapper for your C# library, then create a VB6 ActiveX exe project that delegates calls to your C# library.
A: Why can't you load the .NET runtime into your process space? It is possible to host the .NET runtime and call into .NET using COM.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: MySQL Binary Log Replication: Can it be set to ignore errors? I'm running a master-slave MySQL binary log replication system (phew!) that, for some data, is not in sync (meaning, the master holds more data than the slave). But the slave stops very frequently on the slightest MySQL error, can this be disabled? (perhaps a my.cnf setting for the replicating slave ignore-replicating-errors or some of the sort ;) )
This is what happens, every now and then, when the slave tries to replicate an item that does not exist: the slave just dies. A quick check at SHOW SLAVE STATUS \G; gives
Slave_IO_Running: Yes
Slave_SQL_Running: No
Replicate_Do_DB:
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '15218' for key 1' on query. Default database: 'db'. Query: 'INSERT INTO db.table ( FIELDS ) VALUES ( VALUES )'
which I promptly fix (once I realize that the slave has been stopped) by doing the following:
STOP SLAVE;
RESET SLAVE;
START SLAVE;
... lately this has been getting kind of tiresome, and before I spit out some sort of PHP which does this for me, I was wondering if there's some my.cnf entry which will not kill the slave on the first error.
Cheers,
/mp
A: First, do you really want to ignore errors? If you get an error, it is likely that the data is not in sync any more. Perhaps what you want is to drop the slave database and restart the sync process when you get an error.
Second, I think the error you are getting is not when you replicate an item that does not exist (what would that mean anyway?) - it looks like you are replicating an item that already exists in the slave database.
I suspect the problem mainly arises from not starting at a clean data copy. It seems that the master has been copied to the slave; then replication has been turned off (or failed); and then it has started up again, but without giving the slave the chance to catch up with what it missed.
If you ever have a time when the master can be closed for write access long enough to clone the database and import it into the slave, that might make the problems go away.
A: Modern mysqldump commands have a couple options to help with setting up consistent replication. Check out --master-data which will put the binary log file and position in the dump and automatically set when loaded into slave. Also --single-transaction will do the dump inside a transaction so that no write lock is needed to do a consistent dump.
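For example (--master-data=2 writes the CHANGE MASTER statement into the dump as a comment so you can review it before applying):
mysqldump --single-transaction --master-data=2 --all-databases > master_dump.sql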
A: stop slave; set global sql_slave_skip_counter=1; start slave;
You can ignore only the current error and continue the replication process.
A: Yes, with --slave-skip-errors=xxx in my.cnf, where xxx is 'all' or a comma sep list of error codes.
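For the duplicate-key errors from the question, the my.cnf entry on the slave would be:
slave-skip-errors=1062
or, much more drastically, slave-skip-errors=all.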
A: If the slave isn't used for any writes other than the replication, the authors of High Performance MySQL recommend adding read_only on the slave server to prevent users from mistakenly changing data on the slave as this is will also create the same errors you experienced.
A: I think you are doing replication without syncing the databases first. Sync the databases and then try replication again. Also, if both servers are generating the same unique IDs, try setting the auto_increment_offset (and auto_increment_increment) server variables.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Get file version in PowerShell How can you get the version information from a .dll or .exe file in PowerShell?
I am specifically interested in File Version, though other version information (that is, Company, Language, Product Name, etc.) would be helpful as well.
A: This is based on the other answers, but is exactly what I was after:
(Get-Command C:\Path\YourFile.Dll).FileVersionInfo.FileVersion
A: 'dir' is an alias for Get-ChildItem, which, when called against the filesystem, returns System.IO.FileInfo objects that have VersionInfo as a property. So ...
To get the version info of a single file do this:
PS C:\Windows> (dir .\write.exe).VersionInfo | fl
OriginalFilename : write
FileDescription : Windows Write
ProductName : Microsoft® Windows® Operating System
Comments :
CompanyName : Microsoft Corporation
FileName : C:\Windows\write.exe
FileVersion : 6.1.7600.16385 (win7_rtm.090713-1255)
ProductVersion : 6.1.7600.16385
IsDebug : False
IsPatched : False
IsPreRelease : False
IsPrivateBuild : False
IsSpecialBuild : False
Language : English (United States)
LegalCopyright : © Microsoft Corporation. All rights reserved.
LegalTrademarks :
PrivateBuild :
SpecialBuild :
For multiple files this:
PS C:\Windows> dir *.exe | %{ $_.VersionInfo }
ProductVersion FileVersion FileName
-------------- ----------- --------
6.1.7600.16385 6.1.7600.1638... C:\Windows\bfsvc.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\explorer.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\fveupdate.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\HelpPane.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\hh.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\notepad.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\regedit.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\splwow64.exe
1,7,0,0 1,7,0,0 C:\Windows\twunk_16.exe
1,7,1,0 1,7,1,0 C:\Windows\twunk_32.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\winhlp32.exe
6.1.7600.16385 6.1.7600.1638... C:\Windows\write.exe
A: [System.Diagnostics.FileVersionInfo]::GetVersionInfo("Path\To\File.dll")
A: I find this useful:
function Get-Version($filePath)
{
$name = @{Name="Name";Expression= {split-path -leaf $_.FileName}}
$path = @{Name="Path";Expression= {split-path $_.FileName}}
dir -recurse -path $filePath | % { if ($_.Name -match "(.*dll|.*exe)$") {$_.VersionInfo}} | select FileVersion, $name, $path
}
A: Since PowerShell 5 in Windows 10, you can look at FileVersionRaw (or ProductVersionRaw) on the output of Get-Item or Get-ChildItem, like this:
(Get-Item C:\Windows\System32\Lsasrv.dll).VersionInfo.FileVersionRaw
It's actually the same ScriptProperty from my Update-TypeData in the original answer below, but built-in now.
In PowerShell 4, you could get the FileVersionInfo from Get-Item or Get-ChildItem, but it would show the original FileVersion from the shipped product, and not the updated version. For instance:
(Get-Item C:\Windows\System32\Lsasrv.dll).VersionInfo.FileVersion
Interestingly, you could get the updated (patched) ProductVersion by using this:
(Get-Command C:\Windows\System32\Lsasrv.dll).Version
The distinction I'm making between "original" and "patched" is basically due to the way the FileVersion is calculated (see the docs here). Basically ever since Vista, the Windows API GetFileVersionInfo is querying part of the version information from the language neutral file (exe/dll) and the non-fixed part from a language-specific mui file (which isn't updated every time the files change).
So with a file like lsasrv (which got replaced due to security problems in SSL/TLS/RDS in November 2014) the versions reported by these two commands (at least for a while after that date) were different, and the second one is the more "correct" version.
However, although it's correct in LSASrv, it's possible for the ProductVersion and FileVersion to be different (it's common, in fact). So the only way to get the updated Fileversion straight from the assembly file is to build it up yourself from the parts, something like this:
Get-Item C:\Windows\System32\Lsasrv.dll | ft FileName, File*Part
Or by pulling the data from this:
[System.Diagnostics.FileVersionInfo]::GetVersionInfo($this.FullName)
You can easily add this to all FileInfo objects by updating the TypeData in PowerShell:
Update-TypeData -TypeName System.IO.FileInfo -MemberName FileVersionRaw -MemberType ScriptProperty -Value {
[System.Diagnostics.FileVersionInfo]::GetVersionInfo($this.FullName) | % {
[Version](($_.FileMajorPart, $_.FileMinorPart, $_.FileBuildPart, $_.FilePrivatePart)-join".")
}
}
Now every time you do Get-ChildItem or Get-Item you'll have a FileVersionRaw property that shows the updated File Version ...
A: As EBGreen said, [System.Diagnostics.FileVersionInfo]::GetVersionInfo(path) will work, but remember that you can also get all the members of FileVersionInfo, for example:
[System.Diagnostics.FileVersionInfo]::GetVersionInfo(path).CompanyName
You should be able to use every member of FileVersionInfo documented here, which will get you basically anything you could ever want about the file.
A: I realise this has already been answered, but if anyone's interested in typing fewer characters, I believe this is the shortest way of writing this in PS v3+:
ls application.exe | % versioninfo
*
*ls is an alias for Get-ChildItem
*% is an alias for ForEach-Object
*versioninfo here is a shorthand way of writing {$_.VersionInfo}
The benefit of using ls in this way is that you can easily adapt it to look for a given file within subfolders. For example, the following command will return version info for all files called application.exe within subfolders:
ls application.exe -r | % versioninfo
*
*-r is an alias for -Recurse
You can further refine this by adding -ea silentlycontinue to ignore things like permission errors in folders you can't search:
ls application.exe -r -ea silentlycontinue | % versioninfo
*
*-ea is an alias for -ErrorAction
Finally, if you are getting ellipses (...) in your results, you can append | fl to return the information in a different format. This returns much more detail, although formatted in a list, rather that on one line per result:
ls application.exe -r -ea silentlycontinue | % versioninfo | fl
*
*fl is an alias for Format-List
I realise this is very similar to xcud's reply in that ls and dir are both aliases for Get-ChildItem. But I'm hoping my "shortest" method will help someone.
The final example could be written in long-hand in the following way:
Get-ChildItem -Filter application.exe -Recurse -ErrorAction SilentlyContinue | ForEach-Object {$_.VersionInfo} | Format-List
... but I think my way is cooler and, for some, easier to remember. (But mostly cooler).
A: Since PowerShell can call .NET classes, you could do the following:
[System.Diagnostics.FileVersionInfo]::GetVersionInfo("somefilepath").FileVersion
Or as noted here on a list of files:
get-childitem * -include *.dll,*.exe | foreach-object { "{0}`t{1}" -f $_.Name, [System.Diagnostics.FileVersionInfo]::GetVersionInfo($_).FileVersion }
Or even nicer as a script: https://jtruher3.wordpress.com/2006/05/14/powershell-and-file-version-information/
A: I prefer to install the PowerShell Community Extensions and just use the Get-FileVersionInfo function that it provides.
Like so:
Get-FileVersionInfo MyAssembly.dll
with output like:
ProductVersion FileVersion FileName
-------------- ----------- --------
1.0.2907.18095 1.0.2907.18095 C:\Path\To\MyAssembly.dll
I've used it against an entire directory of assemblies with great success.
A: Just another way to do it is to use the built-in file access technique:
(get-item .\filename.exe).VersionInfo | FL
You can also get any particular property off the VersionInfo, thus:
(get-item .\filename.exe).VersionInfo.FileVersion
This is quite close to the dir technique.
A: Here is an alternative method. It uses Get-WmiObject with the CIM_DataFile class to select the version.
(Get-WmiObject -Class CIM_DataFile -Filter "Name='C:\\Windows\\explorer.exe'" | Select-Object Version).Version
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "176"
} |