content (string, 86 to 88.9k chars) | title (string, 0 to 150 chars) | question (string, 1 to 35.8k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30 to 130 chars)
---|---|---|---|---|---|---|---|---
Q:
Random in python 2.5 not working?
I am trying to use the import random statement in python, but it doesn't appear to have any methods in it to use.
Am I missing something?
A:
You probably have a file named random.py or random.pyc in your working directory. That's shadowing the built-in random module. You need to rename random.py to something like my_random.py and/or remove the random.pyc file.
To tell for sure what's going on, do this:
>>> import random
>>> print random.__file__
That will show you exactly which file is being imported.
A:
This is happening because you have a random.py file in the python search path, most likely the current directory.
Python searches for modules using sys.path, which normally puts the current directory ahead of the standard library directory that contains the real random.py.
Python 3.0 changes the import rules (absolute imports become the default, per PEP 328) so that modules inside a package can no longer accidentally shadow standard library modules, although the directory of the script being run still takes precedence on sys.path.
Just remove the random.py + random.pyc in the directory you're running python from and it'll work fine.
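To confirm the diagnosis, here is a small Python sketch (nothing in it is specific to any one setup):
import random
import sys

# Shows which file the name "random" was actually loaded from.
print(random.__file__)

# The current directory (or the script's directory) comes before the
# standard library in this list, which is why a local random.py wins.
for entry in sys.path:
    print(entry)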
A:
I think you need to give some more information. It's not really possible to answer why it's not working based on the information in the question. The basic documentation for random is at:
https://docs.python.org/library/random.html
You might check there.
A:
Python 2.5.2 (r252:60911, Jun 16 2008, 18:27:58)
[GCC 3.3.4 (pre 3.3.5 20040809)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> random.seed()
>>> dir(random)
['BPF', 'LOG4', 'NV_MAGICCONST', 'RECIP_BPF', 'Random', 'SG_MAGICCONST', 'SystemRandom', 'TWOPI', 'WichmannHill', '_BuiltinMethodType', '_MethodType', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '_acos', '_ceil', '_cos', '_e', '_exp', '_hexlify', '_inst', '_log', '_pi', '_random', '_sin', '_sqrt', '_test', '_test_generator', '_urandom', '_warn', 'betavariate', 'choice', 'expovariate', 'gammavariate', 'gauss', 'getrandbits', 'getstate', 'jumpahead', 'lognormvariate', 'normalvariate', 'paretovariate', 'randint', 'random', 'randrange', 'sample', 'seed', 'setstate', 'shuffle', 'uniform', 'vonmisesvariate', 'weibullvariate']
>>> random.randint(0,3)
3
>>> random.randint(0,3)
1
>>>
A:
If the script you are trying to run is itself called random.py, then you would have a naming conflict. Choose a different name for your script.
A:
Can you post an example of what you're trying to do? It's not clear from your question what the actual problem is.
Here's an example of how to use the random module:
import random
print random.randint(0,10)
A:
Seems to work fine for me. Check out the methods in the official python documentation for random:
>>> import random
>>> random.random()
0.69130806168332215
>>> random.uniform(1, 10)
8.8384170917436293
>>> random.randint(1, 10)
4
A:
Works for me:
Python 2.5.1 (r251:54863, Jun 15 2008, 18:24:51)
[GCC 4.3.0 20080428 (Red Hat 4.3.0-8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import random
>>> brothers = ['larry', 'curly', 'moe']
>>> random.choice(brothers)
'moe'
>>> random.choice(brothers)
'curly'
|
answers_scores: [36, 3, 2, 1, 1, 0, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: ["python"] | name: stackoverflow_0000074430_python.txt
|
Q:
Reading files in use and system files on Windows XP & Vista using .NET
I have this idea for a free backup application.
The largest problem I need to solve at the moment is how to access files which are being used or are system files. I would like the application to be able to perform a full backup of files (i.e. not on a disk sector by sector level).
I'll turn the server part of the application into a service. First of all this service will need to be run with administrative privileges I guess? And secondly, is it possible to access locked files and files used by the system? Maybe take those files after the next reboot? (I've seen some anti virus applications work that way.)
I will use C# and the .NET platform, as it seems to be the easiest way to develop Windows applications these days.
A:
What you're looking for regarding the files in use is the "Volume Shadow Copy Service" which is available on Windows XP, Server 2003 and above. This will allow you to copy files even when they are in use.
I have found a CodeProject article "Volume Shadow Copies from .NET" which describes a simple Outlook PST backup application written against Volume Shadow Copy.
A:
Do a Google search for HoboCopy. It is an open source backup tool for Windows that can back up files that are in use, using the Windows Volume Shadow Copy Service.
A:
Nothing in .NET that could do that directly AFAIK.
I think you are looking for Volume Shadow Copy on XP/Vista which is designed for this kind of task.
|
answers_scores: [5, 1, 1] | non_answers: [] | non_answers_scores: [] | tags: [".net", "c#", "windows"] | name: stackoverflow_0000078282_.net_c#_windows.txt
|
Q:
Managing large user databases for single-signon
How would you implement a system with the following objectives:
Manage authentication,
authorization for
hundreds of thousands of existing users currently tightly integrated with a 3rd party vendor's application (we want to bust these users out into something we manage, make our apps work against it, and have our 3rd party vendors work against it).
Manage profile information linked to those users
Must be able to be accessed from any number of web applications on just about any platform (Windows, *nix, PHP, ASP/C#, Python/Django, et cetera).
Here some sample implementations:
LDAP/AD Server to manage everything. Use custom schema for all profile data. Everything can authenticate against LDAP/AD and we can store all sorts of ACLs and profile data in a custom schema.
Use LDAP/AD for authentication only, and tie LDAP users to a more robust profile/authorization server backed by some sort of traditional database (MSSQL/PostgreSQL/MySQL) or document based DB (CouchDB, SimpleDB, et cetera). Use LDAP for authentication, then hit the DB for more advanced stuff.
Use a traditional database (Relational or Document) for everything.
Are any of these three the best? Are there other solutions which fit the objectives above and are easier to implement?
** I should add that almost all applications that will be authenticating against the user database will be under our control. The lone few outsiders will be the applications we're removing the current user database from and perhaps 1 or 2 others. Nothing so broad as to need an openID server.
It's also important to know that a lot of these users have had these accounts for 5-8 years and know their logins and passwords, et cetera.
A:
There is a difference between authentication and authorization/profiling, so don't force both into a single tool. Your second option, using LDAP for authentication and a DB for authorization, seems more robust: the LDAP data is controlled by the user, while the DB would be controlled by an admin. The latter will likely morph in structure and complexity over time, but authentication is just that: authentication. Separating these functions will prove more manageable.
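As a rough illustration of that split, a Python sketch (assuming the python-ldap package and an SQLite profile table; the host, DN layout, and schema are all made up):
import ldap      # python-ldap package (assumed installed)
import sqlite3

def authenticate(username, password):
    # LDAP is used purely to verify credentials.
    conn = ldap.initialize("ldap://ldap.example.com")  # hypothetical host
    dn = "uid=%s,ou=people,dc=example,dc=com" % username
    try:
        conn.simple_bind_s(dn, password)
        return True
    except ldap.INVALID_CREDENTIALS:
        return False
    finally:
        conn.unbind_s()

def load_profile(username):
    # Profile and authorization data live in the relational DB.
    db = sqlite3.connect("profiles.db")  # hypothetical schema
    row = db.execute(
        "SELECT display_name, role FROM profiles WHERE username = ?",
        (username,),
    ).fetchone()
    db.close()
    return row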
A:
If you have an existing Active Directory infrastructure, that will be the way to go. This is particularly advantageous for companies that already have Windows servers set up for authentication. If this is the case, I'm leaning towards your first bullet point in "sample implementations".
Otherwise it will be a toss-up between AD and open-source LDAP options.
It might not be viable to roll your own authentication scheme for single sign-on (especially considering the large amount of documentation and integration work you might have to do), and obviously do not bundle your authentication server with any of the applications running on your system (since you want it to be independent of the load of those applications).
Good luck!
A:
Use LDAP/AD for authentication only, and tie LDAP users to a more robust profile/authorization server backed by some sort of traditional database (MSSQL/PostgreSQL/MySQL) or document based DB (CouchDB, SimpleDB, et cetera). Use LDAP for authentication, then hit the DB for more advanced stuff.
A:
We have different sites with around 100k users and they all work with normal databases. If most applications can access the db you can use this solution.
|
Non-answer (score -1):
You can always implement your own OpenID server. There is already a Python library for OpenID so it should be fairly easy.
Of course you don't need to accept logins authorized by other servers in your applications. Accept credentials authorized only by your own server.
Edit: I have found an implementation of OpenID server protocol in Django.
Edit2: There is an obvious advantage in implementing OpenID for your users. They will be able to login to StackOverflow with their logins :-)
answers_scores: [5, 2, 0, 0] | non_answers_scores: [-1] | tags: ["active_directory", "authentication", "authorization", "django", "ldap"] | name: stackoverflow_0000078217_active_directory_authentication_authorization_django_ldap.txt
|
Q:
NHibernate to not cache a property
How can I configure NHibernate to not cache a property?
I know I can create a method that runs an HQL query, but can I, through a configuration setting in the <class>.xml mapping file or the hibernate configuration file itself, tell NHibernate not to cache a property?
A:
You cannot set secondary caching settings at property level (as far as I know), but you can individually tune cache settings for each entity directly in their XML files.
For instance:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
<class name="ClassName" table="Table">
<cache usage="nonstrict-read-write" />
<id name="Id" type="Int64" ...
Where the cache "usage" property can be any of the following values:
read-write: assures read committed isolation, makes sure data is consistent but doesn't reduce DB access as much as the other modes,
nonstrict-read-write: objects with rare writes, slight chance of inconsistency between DB and cache,
read-only: for data objects that never change, no chance of inconsistency.
|
answers_scores: [3] | non_answers: [] | non_answers_scores: [] | tags: [".net", "caching", "nhibernate"] | name: stackoverflow_0000075400_.net_caching_nhibernate.txt
|
Q:
Font rendering libraries for C# / dot-NET?
Are there any free, third-party libraries for rendering arbitrarily scaled and rotated text in dot-NET applications? Although native GDI+ allows for text scaling and rotation, its methods for determining the rendered text's dimensions are not sufficiently precise and the differences in kerning as text is added to a rendered string make it unsuitable for use in certain kinds of software (such as, for instance, graphics editing software).
Requirements:
Native .NET code.
Arbitrary scaling and rotation of text.
Precise text metrics.
Consistent kerning regardless of string length.
A:
Windows Presentation Foundation provides sophisticated support for typography.
|
answers_scores: [2] | non_answers: [] | non_answers_scores: [] | tags: [".net", "c#", "fonts", "gdi+", "text"] | name: stackoverflow_0000078351_.net_c#_fonts_gdi+_text.txt
|
Q:
What is the best way to handle URL mappings between an RIA version and plain old HTML version of a site?
So if you have a RIA version (Silverlight or Flash) and a standard HTML version (or AJAX even), should you have the same URL for both, or is it ok to have a different one for the RIA app and just redirect accordingly?
So, for instance, if you have a site (http://example.com), is it ok to have the about page URL for the RIA app be http://example.com/#/about and the html be http://example.com/about? Does it matter?
Of course if you take the route with different URLs you will need to map between them.
A:
It's perfectly acceptable to use 2 different link formats. If 2 users are not seeing the same content, why should they be at the same URL?
A:
The URLs of your pages denote the identity of the content. In my view, if the content is the same but the presentation varies (i.e. RIA vs. HTML), then the URL should be the same and you should use some other mechanism to select between the different presentation forms. Choices of other mechanisms include cookies, content negotiation, session identifiers or, if your users are identified, a persistent user preferences model. Even using a URL argument would at least keep the root of the URL consistent (e.g. http://your.si.te/foobar vs. http://your.si.te/foobar?view=plain).
If the content of the two presentations differs in some meaningful way, then you should make that difference meaningful in the URL. Exploiting the presence or absence of #, and other such hacks, would be a mistake in my view.
Try to pick URLs that do not change over time: so-called cool URLs. This will aid the long-term usefulness of your site to your users: consider what happens if they come back to a bookmarked page in a year's time. Consistency will also help you to get a better critical mass of links or reviews of your site in del.icio.us and similar bookmarking/review sites.
Ian
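To make the single-URL idea concrete, here is a minimal WSGI sketch that picks the presentation from a query argument (the views and markup are placeholders, not anything from the question):
from urllib.parse import parse_qs

def app(environ, start_response):
    # Same resource, same URL; only ?view=plain switches the presentation.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    view = params.get("view", ["ria"])[0]
    if view == "plain":
        body = b"<html>plain HTML rendering of the page</html>"
    else:
        body = b"<html>bootstrap page for the RIA rendering</html>"
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]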
A:
I guess what I really need here is not a Question/Answer format but some kind of poll. While I agree (and accepted the answer) that different URLs are OK because users are getting two different views of the same content, I'm thinking more about sharing these URLs out.
Thanks for the reply though!
|
answers_scores: [2, 2, 1] | non_answers: [] | non_answers_scores: [] | tags: ["ria"] | name: stackoverflow_0000077428_ria.txt
|
Q:
Fighting with Protected Mode in Vista
Our application commonly used an ActiveX control to download and install our client on IE (XP and prior); however, as our user base has drifted towards more Vista boxes with "Protected Mode" on, we are required to investigate.
So going forward, is it worth the headache of trying to use the protected mode API? Is this going to result in a deluge of dialog boxes and admin rights to do the things our app needs to do (write to some local file places, access some other applications, etc)?
I'm half bent on just adding a non-browser based installer app that will do the dirty work of downloading and installing the client, if need be... this would only need to be installed once and in large corporate structures it could be pushed out by IT.
Are there some other ideas I'm missing?
A:
This client, is it a desktop application and not some software that runs inside the browser? In that case, please just supply a regular download installer application. My personal experience with browser-hosted installers is that they are just confusing and the few I have seen seemed to be poorly coded in some way.
If you use an MSI based installer I'm sure lots of Windows domain administrators will love you too, as Microsoft has tools to deploy MSI based installations onto large sets of machines remotely.
A:
Have you checked out Microsoft's ClickOnce Deployment?
If I remember correctly you can embed a manifests which would help with dealing with protected modes automatically, saving you those headaches with the APIs.
I believe ClickOnce is geared for the same thing your ActiveX installer was designed to do.
Since you say your IT dept could push this out, I assume you could use this kind of technology as well.
Even though you might not be writing applications on the .NET CLR, you can use Visual Studio to generate those manifest and installers for you.
A:
It's far better to do this right than put it off any longer. Vista is Microsoft's way of saying they aren't letting people get away with ignoring security issues any more, and of encouraging people to update their code.
I'm sure other users here will be able to point you at some MSDN best practices about writing ActiveX controls.
|
answers_scores: [1, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: ["activex", "mode", "protected", "security", "windows_vista"] | name: stackoverflow_0000078380_activex_mode_protected_security_windows_vista.txt
|
Q:
PostgreSQL DbLink Compilation on Solaris 10
After successfully building dblink on Solaris 10 using Sun C 5.9 (SunOS_sparc 2007/05/03) and gmake, I ran gmake installcheck and got the following output:
========== running regression test queries ==========
test dblink ... FAILED
======================
1 of 1 tests failed.
The differences that caused some tests to fail can be viewed in the
file "./regression.diffs". A copy of the test summary that you see
above is saved in the file "./regression.out".
First error in regression.diffs file:
psql:dblink.sql:11: ERROR: could not load library "/apps/postgresql/lib/dblink.so": ld.so.1: postgres: fatal: relocation error: file /apps/postgresql/lib/dblink.so: symbol PG_GETARG_TEXT_PP: referenced symbol not found
I am running postgreSQL version 8.2.4 with the latest dblink source.
Has anyone got any idea what I need to do to solve this problem?
Thanks.
A:
To solve this issue, use the 8.2 dblink sources instead of the latest version; the newer sources reference server symbols such as PG_GETARG_TEXT_PP that do not exist in an 8.2 server, so the dblink version must match the server version.
You also need to make sure you use GNU make, not the Sun make.
A:
Does the file it is looking for actually exist? Is it in that location?
It may be one of a few things I can think of:
1) The thing did not compile, and therefore does not exist.
2) It exists, but somewhere else, and the environment variable that tells it where to find it is set wrong.
3) The permissions are such that the ID that the postmaster is running as cannot traverse to that directory.
To check if it is somewhere else:
find / -type f|grep dblink.so
To check the permissions:
su -
su - postgres
less /apps/postgresql/lib/dblink.so
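To reproduce the loader error without going through psql, you can also ask the runtime loader for the library directly; a Python sketch (the path is taken from the error message above):
import ctypes

# Loading the module by hand surfaces the same ld.so.1 relocation error
# the server reports, which makes for a quick sanity check.
try:
    ctypes.CDLL("/apps/postgresql/lib/dblink.so")
    print("library loaded cleanly")
except OSError as err:
    print("loader error:", err)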
|
answers_scores: [1, 0] | non_answers: [] | non_answers_scores: [] | tags: ["dblink", "postgresql"] | name: stackoverflow_0000069959_dblink_postgresql.txt
|
Q:
Not using widths & padding/margins on the same element?
I've seen numerous people mention that you shouldn't use widths and padding or margins on the same element with CSS. Why is that?
A:
Because some browsers treat the width as the total width of the element including the padding and borders, while others treat the width as the base content width to which the padding and borders are added (margins sit outside the element in both interpretations). As a result your design will look different in different browsers.
For more information, see the W3C page on Box Model and quirksmode's take.
A:
A lot of people still cling to notions about faulty box-models in IE and reckon that if you start messing around with element widths, padding and margins, you're going to get into trouble.
That's pretty outdated advice - assuming you're using a correct doctype, all fairly modern browsers (including IE6+) will work to the same box model, and you shouldn't really have too many issues related to it.
This being CSS, you will obviously have a million other cross-browser issues, but the infamous IE box-model is becoming a thing of the past.
A:
I've never come across a problem caused by using width, padding and/or margin together.
So long as you have a valid DOCTYPE and are not in Quirks Mode, you will have a predictable box model and therefore should use whichever is most appropriate out of margin/padding to represent what you are trying to do.
Note:
Margin applies outside of borders, padding applies inside of borders.
Width means the inner (content) width of the container; total width = margin + border + padding + width (remembering that the first three are added for both the left and right hand sides).
A:
Are you stating that padding and/or margin values shouldn't co-exist with a DOM element that also has a width value assigned to it? If so, that is only true if you do not want to write CSS that is compatible with both IE as well as browsers which implement web standards (e.g. Firefox). It would be difficult to achieve the layout you're looking for usually without some margin or padding value. But I suggest that you write CSS that is compatible for both browsers. If this is not what you are asking, then please correct me :)
A:
The reason may be the famous box model problem.
Summary: IE renders width differently from the standard rendering if padding or margin is used with width or height.
A:
I can think of 2 reasons:
1) the old "box model" of IE is really flaky, so when you have an element with the style { width: 300px; padding: 30px; margin: 20px; }, its outline might not actually match up to the expected 400px (300px size, plus 2 × 30px padding, plus 2 × 20px margin); its actual width would be 340px, because IE rolls the padding into the width calculation (see the arithmetic sketch below).
which brings us to...
2) Some people find the calculations a little confusing.
That said, I personally use widths along with padding and margins and have no problems with it. If you limit yourself to not using widths when using paddings/margins, that means you are peppering your code with a lot of non-semantic cruft. It does mean you have to be aware of what the various browsers are going to do with the element, but this is why we browsertest.
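Here is a quick arithmetic sketch of the two calculations from the example above:
width, padding, margin = 300, 30, 20  # px, from the style above

# Standards box model: padding and margin sit outside the width.
standard_total = width + 2 * padding + 2 * margin  # 400

# IE quirks box model: padding is counted inside the width.
quirks_total = width + 2 * margin  # 340

print(standard_total, quirks_total)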
A:
One important point to note is that it can make using percentage widths almost impossible. Take this for example, if you want your "content" div to take the full width, but have a 10px padding:
#content {
width: 100%;
padding: 0 10px;
}
That works in the (sensible, but incorrect) IE model, but in standards compliant browsers your div will occupy a width of 100% + 20px which is no good. The solution is to use another "inner" container.
<div id="content">
<div class="inner">
content here.
</div>
</div>
#content {
width: 100%;
}
#content .inner {
padding: 0 10px;
}
It's a bit annoying to have the extra markup, but it makes a lot of things easier in the long run.
|
answers_scores: [9, 4, 2, 0, 0, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: ["css", "html"] | name: stackoverflow_0000075732_css_html.txt
|
Q:
What's a simple method to dump pipe input to a file? (Linux)
I'm looking for a little shell script that will take anything piped into it and dump it to a file, for email debugging purposes. Any ideas?
A:
The unix command tee does this.
man tee
A:
cat > FILENAME
A:
You're not alone in needing something similar... in fact, someone wanted that functionality decades ago and developed tee :-)
Of course, you can redirect stdout directly to a file in any shell using the > character:
echo "hello, world!" > the-file.txt
A:
The standard unix tool tee can do this. It copies input to output, while also logging it to a file.
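If you ever need tee's behaviour inside a script of your own, a minimal Python version might look like this (the log file name is just an example):
import sys

# Copy stdin to stdout while also logging it to a file, like tee(1).
with open("debug.log", "wb") as log:
    for chunk in iter(lambda: sys.stdin.buffer.read(8192), b""):
        sys.stdout.buffer.write(chunk)
        log.write(chunk)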
A:
Use Procmail. Procmail is your friend. Procmail is made for this sort of thing.
A:
If you want to analyze it in the script:
while read LINE; do
    echo "$LINE" >> "$OUTPUT"
done

But you can simply use cat. If cat gets something on stdin, it will echo it to stdout, so you'll have to redirect its output with cat > $OUTPUT. These do the same thing, and the cat version also works for binary data.
A:
If you want a shell script, try this:
#!/bin/sh
exec cat >/path/to/file
A:
If exim or sendmail is what's writing into the pipe, then procmail is a good answer because it'll give you file locking/serialization and you can put it all in the same file.
If you just want to write into a file, then
tee /tmp/log.$$
or
cat > /tmp/log.$$
might be good enough.
A:
Use <<command>> | tee <<file>> to pipe the output of a command <<command>> into a file <<file>>.
This will also show the output on the terminal.
A:
Huh? I guess I don't get the question.
Can't you just end your pipe with a >> file redirection?
For example
echo "Foobar" >> /home/mo/dumpfile
will append Foobar to the dumpfile (and create dumpfile if necessary). No need for a shell script... Is that what you were looking for?
A:
if you don't care about outputting the result
cat - > filename
or
cat > filename
|
answers_scores: [25, 11, 10, 5, 2, 1, 1, 1, 1, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: ["exim", "linux", "pipe"] | name: stackoverflow_0000076700_exim_linux_pipe.txt
|
Q:
Trigger update on DataTable bound to DataGridView
In my .NET/Forms app I have a DataGridView which is bound to a DataTable. The user selects a row of the DataGridView by double-clicking and does some interaction with the app. After that the content of the row is updated programmatically.
When the user selects a new row the changes on the previous one are automagically propagated to the DataTable by the framework. How can I trigger this update from my code so the user does not have to select a new row?
A:
I just had the same issue, and found the answer here:
When the user navigates away from the
row, the control commits all row
changes. The user can also press
CTRL+ENTER to commit row changes
without leaving the row. To commit row
changes programmatically, call the
form's Validate method. If your data
source is a BindingSource, you can
also call BindingSource.EndEdit.
Calling Validate() worked for me.
A:
I guess it depends on what triggers the update to take place; if it is in a validation routine you could simply call that after the user clicks OK on editing the data. Your question is vague; it would be easier to answer with more information. What is this interaction? Is it a dialog? What actually updates the data?
A:
Here is the process to clarify this:
1. user double-clicks a row
2. app fetches data from the db, processes the fetched data and fills controls on the same form as the DataGridView
3. user interacts with the controls and finally presses the apply button on the same form
4. app processes the state of the controls, writes data to the db and writes data to the DataGridView
5. IF the user moves the selection on the DataGridView
6. THEN the changes are propagated to the bound DataTable
I would like to trigger step 6 instantly after modifying the DataGridView from my code.
|
answers_scores: [4, 0, 0] | non_answers: [] | non_answers_scores: [] | tags: [".net", "data_binding", "datagridview", "datatable", "forms"] | name: stackoverflow_0000072791_.net_data_binding_datagridview_datatable_forms.txt
|
Q:
I have P & G-- how do I use the Wincrypt API to generate a Diffie-Hellman keypair?
There's an MSDN article here, but I'm not getting very far:
p = 139;
g = 5;
CRYPT_DATA_BLOB pblob;
pblob.cbData = sizeof( ULONG );
pblob.pbData = ( LPBYTE ) &p;
CRYPT_DATA_BLOB gblob;
gblob.cbData = sizeof( ULONG );
gblob.pbData = ( LPBYTE ) &g;
HCRYPTKEY hKey;
if ( ::CryptGenKey( m_hCryptoProvider, CALG_DH_SF,
CRYPT_PREGEN, &hKey ) )
{
::CryptSetKeyParam( hKey, KP_P, ( LPBYTE ) &pblob, 0 );
Fails here with NTE_BAD_DATA. I'm using MS_DEF_DSS_DH_PROV. What gives?
A:
It may be that it just doesn't like the very short keys you're using.
I found the desktop version of that article which may help, as it has a full example.
EDIT:
The OP realised from the example that you have to tell CryptGenKey how long the keys are, which you do by setting the top 16-bits of the flags to the number of bits you want to use. If you leave this as 0, you get the default key length. This is documented in the Remarks section of the device documentation, and with the dwFlags parameter in the desktop documentation.
For the Diffie-Hellman key-exchange algorithm, the Base provider defaults to 512-bit keys and the Enhanced provider (which is the default) defaults to 1024-bit keys, on Windows XP and later. There doesn't seem to be any documentation for the default lengths on CE.
The code should therefore be:
BYTE p[64] = { 139 }; // little-endian, all other bytes set to 0
BYTE g[64] = { 5 };
CRYPT_DATA_BLOB pblob;
pblob.cbData = sizeof( p);
pblob.pbData = p;
CRYPT_DATA_BLOB gblob;
gblob.cbData = sizeof( g );
gblob.pbData = g;
HCRYPTKEY hKey;
if ( ::CryptGenKey( m_hCryptoProvider, CALG_DH_SF,
( 512 << 16 ) | CRYPT_PREGEN, &hKey ) )
{
::CryptSetKeyParam( hKey, KP_P, ( LPBYTE ) &pblob, 0 );
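For comparison, the same "choose the key length explicitly" idea sketched in Python, assuming the pyca/cryptography package is available (this is an illustration, not the Wincrypt API):
from cryptography.hazmat.primitives.asymmetric import dh

# Generate fresh DH parameters and a keypair; the key size has to be
# stated explicitly (512 bits is the smallest the library accepts).
parameters = dh.generate_parameters(generator=2, key_size=512)
private_key = parameters.generate_private_key()
public_key = private_key.public_key()

numbers = parameters.parameter_numbers()
print("p bit length:", numbers.p.bit_length(), "g:", numbers.g)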
A:
It looks to me that KP_P, KP_G, KP_Q are for DSS keys (Digital Signature Standard?). For Diffie-Hellman it looks like you're supposed to use KP_PUB_PARAMS and pass a DATA_BLOB that points to a DHPUBKEY_VER3 structure.
Note that the article you're pointing to is from the Windows Mobile/Windows CE SDK. It wouldn't be the first time that CE worked differently from the desktop/server.
EDIT: CE does not implement KP_PUB_PARAMS. To use this structure on the desktop, see Diffie-Hellman Version 3 Public Key BLOBs.
|
answers_scores: [2, 1] | non_answers: [] | non_answers_scores: [] | tags: ["cryptoapi", "cryptography", "diffie_hellman"] | name: stackoverflow_0000076581_cryptoapi_cryptography_diffie_hellman.txt
|
Q:
How Many Network Connections Can a Computer Support?
When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
A:
Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html
A:
In general modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets.
You will need a high quality servicing interface to handle that kind of load, check out libevent or libev.
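As a sketch of what such a servicing interface looks like, here is a minimal event-driven echo server using Python's standard selectors module, with an illustrative cap on concurrent connections (the port and limit are arbitrary):
import selectors
import socket

MAX_CONNECTIONS = 8000  # illustrative cap
sel = selectors.DefaultSelector()
active = 0

def accept(server):
    global active
    conn, _ = server.accept()
    if active >= MAX_CONNECTIONS:
        conn.close()  # shed load instead of degrading for everyone
        return
    active += 1
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    global active
    data = conn.recv(4096)
    if data:
        conn.send(data)  # trivial echo "protocol"
    else:
        sel.unregister(conn)
        conn.close()
        active -= 1

server = socket.socket()
server.bind(("0.0.0.0", 9000))
server.listen(128)
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)
while True:
    for key, _ in sel.select():
        key.data(key.fileobj)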
A:
That is a good question and it definitely is situational. What is your computer? Do you have a 4 socket machine filled with Quad Core Xeons, 128 GB of RAM, and Fibre Channel connectivity (like the pair of Dell R900s we just bought)? Or are you running on a P3 550 with 256 MB of RAM and a 56K modem? How much load does each connection place on your server? What kind of response is acceptable?
These are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.
That will tell you how many connections your computer can support.
The great thing about the load/unit test scenario is that you can put in response time expectations in your unit tests and increase the load until you fall outside of your response time. If you have a requirement of supporting X number of Users with Y second response, you will be able to demonstrate it with your load tests.
A:
One of the biggest setbacks in high concurrency connections is actually the routers involved. Home user oriented routers usually have a small NAT table, preventing the router from actually delivering the connections to the server.
Be sure to research your router/network infrastructure setup as well.
A:
I think you shouldn't limit the number of connections your server will allow - just catch and handle properly any exceptions that might occur when accepting and closing connections and you should be fine. You should leave that kind of lower level programming to the underlying OS layers - that way you can port your server easier etc.
A:
This really depends on your operating system.
Different Unix flavors will support an "unlimited" number of file handles/sockets; others have high values like 32768.
A typical user limit is 8192 but it can usually be set higher.
I think windows is more limiting but the server version may have higher limits.
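On Unix you can inspect, and often raise, the per-process file handle limit from Python's standard library:
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("file descriptor limit: soft=%d hard=%d" % (soft, hard))

# The soft limit can be raised up to the hard limit without special
# privileges; going beyond the hard limit normally requires root.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))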
|
How Many Network Connections Can a Computer Support?
|
When writing a custom server, what are the best practices or techniques to determine maximum number of users that can connect to the server at any given time?
I would assume that the capabilities of the computer hardware, network capacity, and server protocol would all be important factors.
Also, do you think it is a good practice to limit the number of network connections to a certain maximum number of users? Or should the server not limit the number of network connections and let performance degrade until the response time is extremely high?
|
[
"Dan Kegel put together a summary of techniques for handling large amounts of network connections from a single server, here: http://www.kegel.com/c10k.html\n",
"In general modern servers can handle very large numbers of concurrent connections. I've worked on systems having over 8,000 concurrently open TCP/IP sockets.\nYou will need a high quality servicing interface to handle that kind of load, check out libevent or libev.\n",
"That is a good question and it definitely is situational. What is your computer? Do you have a 4 socket machine filled with Quad Core Xeons, 128 GB of RAM, and Fiber Channel Connectivity (like the pair of Dell R900s we just bought)? Or are you running on a p3 550 with 256 MB of RAM, and 56K modem? How much load does each connection place on your server? What kind of response is acceptible?\nThese are the questions you need to answer. I guess the best way to find the answer is through load testing. Create a unit test of the expected (and maybe some unexpected) paths that your code will perform against your server. Find a load testing framework that will allow you to simulate 10, 100, 1000, 10000 users performing those tasks at the same time.\nThat will tell you how many connections your computer can support.\nThe great thing about the load/unit test scenario is that you can put in response time expectations in your unit tests and increase the load until you fall outside of your response time. If you have a requirement of supporting X number of Users with Y second response, you will be able to demonstrate it with your load tests.\n",
"One of the biggest setbacks in high concurrency connections is actually the routers involved. Home user oriented routers usually have a small NAT table, preventing the router from actually servicing the server the connections. \nBe sure to research your router/ network infrastructure setup just as well.\n",
"I think you shouldn't limit the number of connections your server will allow - just catch and handle properly any exceptions that might occur when accepting and closing connections and you should be fine. You should leave that kind of lower level programming to the underlying OS layers - that way you can port your server easier etc. \n",
"This really depends on your operating system.\nDifferent Unix flavors will support \"unlimited\" number of file handles / sockets others have high values like 32768.\nA typical user limit is 8192 but it can usually be set higher.\nI think windows is more limiting but the server version may have higher limits.\n"
] |
[
8,
5,
4,
0,
0,
0
] |
[] |
[] |
[
"network_programming",
"networking"
] |
stackoverflow_0000078422_network_programming_networking.txt
|
Q:
Anyone have experience with Sphinx speech recognition?
Has anyone used the Sphinx speech recognition stack to build IVR applications? I am looking for open source alternatives to the expensive and somewhat limiting choices from MSFT and others. I have not been able to find a comprehensive package that ties open source speech/voip applications together.
A:
You could try integrating Sphinx with Asterisk:
http://www.syednetworks.com/asterisk-integration-with-sphinx-voice-recognition-system
http://www.voip-info.org/wiki/view/Sphinx
A:
Last I looked at Sphinx, it had issues with 8kHz audio which resulted in really poor performance. There's not a lot of people talking about successful deployments of Sphinx in real environments, but you might be able to get it to work with some trailblazing effort. See here for more info:
http://www.voip-info.org/wiki-Sphinx
The closest thing to open-source that really works is using LumenVox with Asterisk. Asterisk is the open-source PBX that you can use to integrate with a VoIP service or gateway, or even the PSTN. LumenVox is a commercial speech engine that integrates with Asterisk:
http://www.asterisk.org
http://www.lumenvox.com
http://www.lumenvox.com/partners/digium/Asterisk.aspx
|
Anyone have experience with Sphinx speech recognition?
|
Has anyone used the Sphinx speech recognition stack to build IVR applications? I am looking for open source alternatives to the expensive and somewhat limiting choices from MSFT and others. I have not been able to find a comprehensive package that ties open source speech/voip applications together.
|
[
"You could try integrating Sphinx with Asterisk:\n\nhttp://www.syednetworks.com/asterisk-integration-with-sphinx-voice-recognition-system\nhttp://www.voip-info.org/wiki/view/Sphinx\n\n",
"\nLast I looked at Sphinx, it had issues with 8khz audio which resulted\nin really poor performance. There's not a lot of people talking about\nsuccessful deployments of Sphinx in real environments, but you might\nbe able to get it to work with some trailblazing effort. See here for\nmore info:\n\nhttp://www.voip-info.org/wiki-Sphinx\n\nThe closest thing to open-source that really works is using LumenVox\nwith Asterisk. Asterisk is the open-source PBX that you can use to\nintegrate with a VoIP service or gateway, or even the PSTN. LumenVox\nis a commercial speech engine that integrates with Asterisk:\n\nhttp://www.asterisk.org\nhttp://www.lumenvox.com\nhttp://www.lumenvox.com/partners/digium/Asterisk.aspx\n\nThere's lots of people successfully using LumenVox with Asterisk.\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"cmusphinx",
"ivr",
"speech_recognition"
] |
stackoverflow_0000036054_cmusphinx_ivr_speech_recognition.txt
|
Q:
What is a good equivalent to Perl lists in bash?
In perl one would simply do the following to store and iterate over a list of names
my @fruit = (apple, orange, kiwi);
foreach (@fruit) {
print $_;
}
What would the equivalent be in bash?
A:
bash (unlike POSIX sh) supports arrays:
fruits=(apple orange kiwi "dried mango")
for fruit in "${fruits[@]}"; do
echo "${fruit}"
done
This has the advantage that array elements may contain spaces or other members of $IFS; as long as they were correctly inserted as separate elements, they are read out the same way.
A:
Like this:
FRUITS="apple orange kiwi"
for FRUIT in $FRUITS; do
echo $FRUIT
done
Notice this won't work if there are spaces in the names of your fruits. In that case, see this answer instead, which is slightly less portable but much more robust.
A:
Now that the answer I like has been accepted as the correct answer, I'll now move into another topic: how to use IFS for personal gain. :-P
fruits="apple,orange,kiwifruit,dried mango"
(IFS=,
for fruit in $fruits; do
echo "$fruit"
done)
I've put the code in brackets so that the IFS change is isolated into its own subprocess; thus at the end of the bracketed section, IFS is reverted back to its old value. :-)
A:
for i in apple orange kiwi
do
echo $i
done
|
What is a good equivalent to Perl lists in bash?
|
In perl one would simply do the following to store and iterate over a list of names
my @fruit = (apple, orange, kiwi);
foreach (@fruit) {
print $_;
}
What would the equivalent be in bash?
|
[
"bash (unlike POSIX sh) supports arrays:\nfruits=(apple orange kiwi \"dried mango\")\nfor fruit in \"${fruits[@]}\"; do\n echo \"${fruit}\"\ndone\n\nThis has the advantage that array elements may contain spaces or other members of $IFS; as long as they were correctly inserted as separate elements, they are read out the same way.\n",
"Like this:\nFRUITS=\"apple orange kiwi\"\nfor FRUIT in $FRUITS; do\n echo $FRUIT\ndone\n\nNotice this won't work if there are spaces in the names of your fruits. In that case, see this answer instead, which is slightly less portable but much more robust.\n",
"Now that the answer I like has been accepted as the correct answer, I'll now move into another topic: how to use IFS for personal gain. :-P\nfruits=\"apple,orange,kiwifruit,dried mango\"\n(IFS=,\n for fruit in $fruits; do\n echo \"$fruit\"\n done)\n\nI've put the code in brackets so that the IFS change is isolated into its own subprocess; thus at the end of the bracketed section, IFS is reverted back to its old value. :-)\n",
"\nfor i in apple orange kiwi\ndo\n echo $i\ndone\n\n"
] |
[
45,
11,
6,
4
] |
[] |
[] |
[
"arrays",
"bash"
] |
stackoverflow_0000078592_arrays_bash.txt
|
Q:
What runs in a C heap vs a Java heap in HP-UX environment JVMs?
I've been running into a peculiar issue with certain Java applications in the HP-UX environment.
The heap is set to -mx512, yet, looking at the memory regions for this Java process using gpm, it shows it using upwards of 1.6GB of RSS memory, with 1.1GB allocated to the DATA region. It grows quite rapidly over a 24-48 hour period and then slows down substantially, still growing 2MB every few hours. However, the Java heap shows no sign of leakage.
Curious how this was possible I researched a bit and found this HP write-up on memory leaks in java heap and c heap: http://docs.hp.com/en/JAVAPERFTUNE/Memory-Management.pdf
My question is what determines what is run in the C heap vs the Java heap, and for things that do not go through the Java heap, how would you identify those objects living on the C heap? Additionally, does the Java heap sit inside the C heap?
A:
Consider what makes up a Java process.
You have:
the JVM (a C program)
JNI Data
Java byte codes
Java data
Notably, they ALL live in the C heap (the JVM Heap is part of the C heap, naturally).
In the Java heap is simply Java byte codes and the Java data. But what is also in the Java heap is "free space".
The typical (i.e. Sun) JVM only grows its Java heap as necessary, but never shrinks it. Once it reaches its defined maximum (-Xmx512M), it stops growing and deals with whatever is left. When that maximum heap is exhausted, you get the OutOfMemory exception.
What that Xmx512M option DOES NOT do, is limit the overall size of the process. It limits only the Java Heap part of the process.
For example, you could have a contrived Java program that uses 10MB of Java heap, but makes a JNI call that allocates 500MB of C heap. You can see how your process size can be large, even though the Java heap is small. Also, with the new NIO libraries, you can attach memory outside of the heap as well.
The other aspect that you must consider is that the Java GC is typically a "Copying Collector", which means it takes the "live" data from the memory it's collecting and copies it to a different section of memory. The empty space that it copies to IS NOT PART OF THE HEAP, at least, not in terms of the Xmx parameter. It's, like, "the new Heap", and becomes part of the heap after the copy (the old space is used for the next GC). If you have a 512MB heap, and it's at 510MB, Java is going to copy the live data someplace. The naive thought would be to another large open space (like 500+MB). If all of your data were "live", then it would need a large chunk like that to copy into.
So, you can see that in the most extreme edge case, you need at least double the free memory on your system to handle a specific heap size. At least 1GB for a 512MB heap.
Turns out that's not the case in practice; memory allocation and such is more complicated than that, but you do need a large chunk of free memory to handle the heap copies, and this impacts the overall process size.
Finally, note that the JVM does fun things like mapping the rt.jar classes into the VM to ease startup. They're mapped in a read-only block, and can be shared across other Java processes. These shared pages will "count" against all Java processes, even though the physical memory is really only consumed once (the magic of virtual memory).
Now as to why your process continues to grow: if you never hit the Java OOM message, that means that your leak is NOT in the Java heap, but that doesn't mean it may not be in something else (the JRE runtime, a 3rd-party JNI library, a native JDBC driver, etc.).
A:
In general, only the data in Java objects is stored on the Java heap, all other memory required by the Java VM is allocated from the "native" or "C" heap (in fact, the Java heap itself is just one contiguous chunk allocated from the C heap).
Since the JVM requires the Java heap (or heaps if generational garbage collection is in use) to be a contiguous piece of memory, the whole maximum heap size (-mx value) is usually allocated at JVM start time. In practice, the Java VM will attempt to minimise its use of this space so that the Operating System doesn't need to reserve any real memory to it (the OS is canny enough to know when a piece of storage has never been written to).
The Java heap, therefore, will occupy a certain amount of space in memory.
The rest of the storage will be used by the Java VM and any JNI code in use. For example, the JVM requires memory to store Java bytecode and constant pools from loaded classes, the result of JIT compiled code, work areas for compiling JIT code, native thread stacks and other such sundries.
JNI code is just platform-specific (compiled) C code that can be bound to a Java object in the form of a "native" method. When this method is executed the bound code is executed and can allocate memory using standard C routines (eg malloc) which will consume memory on the C heap.
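To make that last point concrete, here is a hedged sketch of such a native method. The class and method names (com.example.NativeBuffer.grab) are invented for the example; the 100 MB it mallocs shows up in the process RSS and DATA region, but never in the Java heap:
// Hypothetical JNI native method for a Java class com.example.NativeBuffer
// declaring: public static native long grab();
// The block allocated here lives on the C heap, invisible to -Xmx and the GC.
#include <jni.h>
#include <cstdlib>
#include <cstring>
#include <cstdint>

extern "C" JNIEXPORT jlong JNICALL
Java_com_example_NativeBuffer_grab(JNIEnv*, jclass)
{
    const size_t kBytes = 100 * 1024 * 1024;
    void* block = std::malloc(kBytes);     // C heap allocation
    if (block != NULL)
        std::memset(block, 0, kBytes);     // touch pages so they get committed
    return (jlong)(intptr_t)block;         // caller must arrange to free() it
}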
A:
My only guess, given the figures you have provided, is a memory leak in the Java VM. You might want to try one of the other VMs listed in the paper you referred to. Another (much more difficult) alternative might be to compile the open Java on the HP platform.
Sun's Java isn't 100% open yet, they are working on it, but I believe that there is one in sourceforge that is.
Java also thrashes memory, by the way. Sometimes it confuses OS memory management a little (you see it when Windows runs out of memory and asks Java to free some up: Java touches all its objects, causing them to be loaded in from the swapfile, and Windows screams in agony and dies), but I don't think that's what you are seeing.
|
What runs in a C heap vs a Java heap in HP-UX environment JVMs?
|
I've been running into a peculiar issue with certain Java applications in the HP-UX environment.
The heap is set to -mx512, yet, looking at the memory regions for this Java process using gpm, it shows it using upwards of 1.6GB of RSS memory, with 1.1GB allocated to the DATA region. It grows quite rapidly over a 24-48 hour period and then slows down substantially, still growing 2MB every few hours. However, the Java heap shows no sign of leakage.
Curious how this was possible I researched a bit and found this HP write-up on memory leaks in java heap and c heap: http://docs.hp.com/en/JAVAPERFTUNE/Memory-Management.pdf
My question is what determines what is run in the C heap vs the Java heap, and for things that do not go through the Java heap, how would you identify those objects living on the C heap? Additionally, does the Java heap sit inside the C heap?
|
[
"Consider what makes up a Java process.\nYou have:\n\nthe JVM (a C program) \nJNI Data\nJava byte codes\nJava data \n\nNotably, they ALL live in the C heap (the JVM Heap is part of the C heap, naturally).\nIn the Java heap is simply Java byte codes and the Java data. But what is also in the Java heap is \"free space\".\nThe typical (i.e. Sun) JVM only grows it Java Heap as necessary, but never shrinks it. Once it reaches its defined maximum (-Xmx512M), it stops growing and deals with whatever is left. When that maximum heap is exhausted, you get the OutOfMemory exception.\nWhat that Xmx512M option DOES NOT do, is limit the overall size of the process. It limits only the Java Heap part of the process.\nFor example, you could have a contrived Java program that uses 10mb of Java heap, but calls a JNI call that allocates 500MB of C Heap. You can see how your process size is large, even though the Java heap is small. Also, with the new NIO libraries, you can attach memory outside of the heap as well.\nThe other aspect that you must consider is that the Java GC is typically a \"Copying Collector\". Which means it takes the \"live\" data from memory it's collecting, and copies it to a different section of memory. This empty space that is copies to IS NOT PART OF THE HEAP, at least, not in terms of the Xmx parameter. It's, like, \"the new Heap\", and becomes part of the heap after the copy (the old space is used for the next GC). If you have a 512MB heap, and it's at 510MB, Java is going to copy the live data someplace. The naive thought would be to another large open space (like 500+MB). If all of your data were \"live\", then it would need a large chunk like that to copy into.\nSo, you can see that in the most extreme edge case, you need at least double the free memory on your system to handle a specific heap size. At least 1GB for a 512MB heap.\nTurns out that not the case in practice, and memory allocation and such is more complicated than that, but you do need a large chunk of free memory to handle the heap copies, and this impacts the overall process size.\nFinally, note that the JVM does fun things like mapping in the rt.jar classes in to the VM to ease startup. They're mapped in a read only block, and can be shared across other Java processes. These shared pages will \"count\" against all Java processes, even though it is really only consuming physical memory once (the magic of virtual memory).\nNow as to why your process continues to grow, if you never hit the Java OOM message, that means that your leak is NOT in the Java heap, but that doesn't mean it may not be in something else (the JRE runtime, a 3rd party JNI librariy, a native JDBC driver, etc.).\n",
"In general, only the data in Java objects is stored on the Java heap, all other memory required by the Java VM is allocated from the \"native\" or \"C\" heap (in fact, the Java heap itself is just one contiguous chunk allocated from the C heap).\nSince the JVM requires the Java heap (or heaps if generational garbage collection is in use) to be a contiguous piece of memory, the whole maximum heap size (-mx value) is usually allocated at JVM start time. In practice, the Java VM will attempt to minimise its use of this space so that the Operating System doesn't need to reserve any real memory to it (the OS is canny enough to know when a piece of storage has never been written to).\nThe Java heap, therefore, will occupy a certain amount of space in memory.\nThe rest of the storage will be used by the Java VM and any JNI code in use. For example, the JVM requires memory to store Java bytecode and constant pools from loaded classes, the result of JIT compiled code, work areas for compiling JIT code, native thread stacks and other such sundries. \nJNI code is just platform-specific (compiled) C code that can be bound to a Java object in the form of a \"native\" method. When this method is executed the bound code is executed and can allocate memory using standard C routines (eg malloc) which will consume memory on the C heap.\n",
"My only guess with the figures you have given is a memory leak in the Java VM. You might want to try one of the other VMs they listed in the paper you referred. Another (much more difficult) alternative might be to compile the open java on the HP platform.\nSun's Java isn't 100% open yet, they are working on it, but I believe that there is one in sourceforge that is.\nJava also thrashes memory by the way. Sometimes it confuses OS memory management a little (you see it when windows runs out of memory and asks Java to free some up, Java touches all it's objects causing them to be loaded in from the swapfile, windows screams in agony and dies), but I don't think that's what you are seeing.\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"c",
"heap_memory",
"hp_ux",
"java",
"jvm"
] |
stackoverflow_0000078352_c_heap_memory_hp_ux_java_jvm.txt
|
Q:
Remoting facilities on Visual Studio 2008
I'm toying with my first remoting project and I need to create a RemotableType DLL. I know I can compile it by hand with csc, but I wonder if there are some facilities in place in Visual Studio to handle the Remoting case, or, more specifically, to tell it that a specific file should be compiled as a .dll without having to add another project to a solution exclusively to compile a class or two into DLLs.
NOTE: I know I should toy with my first WCF project, but this has to run on 2.0.
A:
None that I know of using VS 2008 at the moment.
But you might want to check out NAnt. It is made for this kind of work.
A:
You can get away with just calling csc.exe on the pre-build event if you don't want to mess with the .proj file directly and add build events.
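For instance, a pre-build event along these lines compiles a single source file into its own DLL without adding a separate project (the file and output names here are just placeholders):
csc /target:library /out:"$(ProjectDir)RemotableTypes.dll" "$(ProjectDir)RemotableType.cs"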
|
Remoting facilities on Visual Studio 2008
|
I'm toying with my first remoting project and I need to create a RemotableType DLL. I know I can compile it by hand with csc, but I wonder if there are some facilities in place in Visual Studio to handle the Remoting case, or, more specifically, to tell it that a specific file should be compiled as a .dll without having to add another project to a solution exclusively to compile a class or two into DLLs.
NOTE: I know I should toy with my first WCF project, but this has to run on 2.0.
|
[
"None that I know of using VS 2008 at the moment.\nBut you might want to check out NAnt. It is made for this kind of work.\n",
"You can get away with just calling csc.exe on the pre-build event if you don't want to mess with the .proj file directly and add build events.\n"
] |
[
0,
0
] |
[] |
[] |
[
"c#",
"remoting",
"visual_studio"
] |
stackoverflow_0000078303_c#_remoting_visual_studio.txt
|
Q:
What workarounds/coping-strategies have you implemented to deal with multiple tabs v. two connection limit issues?
The two connection limit can be particularly troublesome when you have multiple tabs open simultaneously. Besides "ignore the problem," what coping mechanisms have you seen used to get multiple tabs both doing heavily interactive Ajax despite the two connection limit?
A:
If you send your Ajax requests to a different subdomain they won't interfere with the connection limit of your regular pages. It will cost an extra DNS lookup though
A:
The two connection limit is a "suggestion", and this article describes how to get around it where possible. Further Firefox configuration is discussed in this article about the about:config capability in Firefox.
Also, if you own the website, you can tweak the performance of the site using suggestions from this book by the Chief Performance Yahoo.
|
What workarounds/coping-strategies have you implemented to deal with multiple tabs v. two connection limit issues?
|
The two connection limit can be particularly troublesome when you have multiple tabs open simultaneously. Besides "ignore the problem," what coping mechanisms have you seen used to get multiple tabs both doing heavily interactive Ajax despite the two connection limit?
|
[
"If you send your Ajax requests to a different subdomain they won't interfere with the connection limit of your regular pages. It will cost an extra DNS lookup though \n",
"The two connection limit is a \"suggestion\" and this article describes how to get around it where possible. Other Firefox configuration is discussed on this about the about:config capability in Firefox.\nAlso, if you own the website, you can tweak the performance of the site using suggestions form this book from the Chief Performance Yahoo. \n"
] |
[
3,
1
] |
[] |
[] |
[
"ajax",
"tabs",
"two_connection_limit"
] |
stackoverflow_0000078262_ajax_tabs_two_connection_limit.txt
|
Q:
Troubleshooting Timeout SqlExceptions
I have some curious behavior that I'm having trouble figuring out why it is occurring. I'm seeing intermittent timeout exceptions. I'm pretty sure it's related to volume because it's not reproducible in our development environment. As a bandaid solution, I tried upping the SQL command timeout to sixty seconds, but as I've found, this doesn't seem to help. Here's the strange part: when I check my logs on the process that is failing, here are the start and end times:
09/16/2008 16:21:49
09/16/2008 16:22:19
So how could it be that it's timing out in thirty seconds when I've set the command timeout to sixty??
Just for reference, here's the exception being thrown:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at SetClear.DataAccess.SqlHelper.ExecuteReader(CommandType commandType, String commandText, SqlParameter[] commandArgs)
A:
This may sound stupid, but just hear me out. Check all the indexes and primary keys involved in your query. Do they exist? Are they fragmented? I've had a problem where, for some reason, running the script outright worked just fine, but when I ran it through the application, it was slow as dirt. Readers basically act like cursors, so indexing is extremely important.
It might not be this, but it's always the first thing that I check.
A:
SQL commands time out because the query you're using takes longer than that to execute. Execute it in Query Analyzer or Management Studio, with a representative amount of data in the database, and look at the execution plan to find out what's slow.
If something is taking a large percentage of the time and is described as a 'table scan' or 'clustered index scan', look at whether you can create an index that would turn that operation into a key lookup (an index seek or clustered index seek).
A:
Try changing the SqlConnection's timeout property, rather than that of the command
A:
Because the timeout is happening on the connection, not the command. You need to set the connection.TimeOut property
A:
I had this problem once, and I tracked it to some really inefficient SQL code in one of my database's views. Someone had put a complex condition with a subquery into the ON clause for a table join, instead of into the WHERE clause where it belonged. Once I corrected this error, the problem went away.
|
Troubleshooting Timeout SqlExceptions
|
I have some curious behavior that I'm having trouble figuring out why it is occurring. I'm seeing intermittent timeout exceptions. I'm pretty sure it's related to volume because it's not reproducible in our development environment. As a bandaid solution, I tried upping the SQL command timeout to sixty seconds, but as I've found, this doesn't seem to help. Here's the strange part: when I check my logs on the process that is failing, here are the start and end times:
09/16/2008 16:21:49
09/16/2008 16:22:19
So how could it be that it's timing out in thirty seconds when I've set the command timeout to sixty??
Just for reference, here's the exception being thrown:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
at System.Data.SqlClient.SqlDataReader.get_MetaData()
at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
at System.Data.SqlClient.SqlCommand.ExecuteReader()
at SetClear.DataAccess.SqlHelper.ExecuteReader(CommandType commandType, String commandText, SqlParameter[] commandArgs)
|
[
"This may sound stupid, but just hear me out. Check all the indexes and primary keys involved in your query. Do they exist? Are they fragmented? I've had a problem where, so some reason, running the script outright worked just find, but then when I did it through the application, it was slow as dirt. The reader's basically act like cursors, so indexing is extremely important.\nIt might not be this, but it's always the first thing that I check.\n",
"SQL commands time out because the query you're using takes longer than that to execute. Execute it in Query Analyzer or Management Studio, with a representative amount of data in the database, and look at the execution plan to find out what's slow.\nIf something is taking a large percentage of the time and is described as a 'table scan' or 'clustered index scan', look at whether you can create an index that would turn that operation into a key lookup (an index seek or clustered index seek).\n",
"Try changing the SqlConnection's timeout property, rather than that of the command\n",
"Because the timeout is happening on the connection, not the command. You need to set the connection.TimeOut property\n",
"I had this problem once, and I tracked it to some really inefficient SQL code in one of my database's views. Someone had put a complex condition with a subquery into the ON clause for a table join, instead of into the WHERE clause where it belonged. Once I corrected this error, the problem went away.\n"
] |
[
4,
4,
1,
1,
1
] |
[] |
[] |
[
"ado.net",
"sql_server",
"sqlexception"
] |
stackoverflow_0000076891_ado.net_sql_server_sqlexception.txt
|
Q:
How do I enable Platform Builder mode in VS2008
After installing VS2008, the Platform Builder mod, and the WM7 AKU, VS usually prompts you, upon first startup, for your default mode. If you make a mistake and select something other than PB7, how do you get back into PB mode?
I can get the device window, but it is always greyed out. I can also configure my normal connection settings, but VS will never connect to the device.
I have other machines, where I did select the default option correctly. They work just fine.
I'm hoping I do not have to reinstall everything.
namaste,
Mark
A:
Go to the Tools menu and select the "Import and Export Settings..." option. Then select "Reset all settings" in the wizard. On the next screen you can save your current settings. Then on the final screen, it should allow you to select which collection of settings you want to use. I don't have Platform Builder, but hopefully that option should show up there.
A:
Tools > Import and Export Settings...
X Reset All Settings
(Save or don't save)
Select new Setting.
|
How do I enable Platform Builder mode in VS2008
|
After installing VS2008, the Platform Builder mod, and the WM7 AKU, VS usually prompts you, upon first startup, for your default mode. If you make a mistake and select something other than PB7, how do you get back into PB mode?
I can get the device window, but it is always greyed out. I can also configure my normal connection settings, but VS will never connect to the device.
I have other machines, where I did select the default option correctly. They work just fine.
I'm hoping I do not have to reinstall everything.
namaste,
Mark
|
[
"Go to the Tool menu and select \"Import and Export Settings...\" option. Then select \"Reset all settings\" in the Wizard. On the next screen you can save current settings. Then on the final screen, it should allow you to select which collection of settings you want to use. I don't have platform builder but hopefully that option should show up there.\n",
"Tools > Import and Export Settings...\nX Reset All Settings\n(Save or don't save)\nSelect new Setting.\n"
] |
[
4,
1
] |
[] |
[] |
[
"platform_builder",
"smartphone",
"visual_studio",
"visual_studio_2008",
"windows_mobile"
] |
stackoverflow_0000078669_platform_builder_smartphone_visual_studio_visual_studio_2008_windows_mobile.txt
|
Q:
Fetch unread messages, by user
I want to maintain a list of global messages that will be displayed to all users of a web app. I want each user to be able to mark these messages as read individually. I've created 2 tables; messages (id, body) and messages_read (user_id, message_id).
Can you provide an sql statement that selects the unread messages for a single user? Or do you have any suggestions for a better way to handle this?
Thanks!
A:
Well, you could use
SELECT id FROM messages m WHERE m.id NOT IN(
SELECT message_id FROM messages_read WHERE user_id = ?)
Where ? is passed in by your app.
A:
If the table definitions you mentioned are complete, you might want to include a date for each message, so you can order them by date.
Also, this might be a slightly more efficient way to do the select:
SELECT id, message
FROM messages
LEFT JOIN messages_read
ON messages_read.message_id = messages.id
AND messages_read.[user_id] = @user_id
WHERE
messages_read.message_id IS NULL
A:
Something like:
SELECT id, body FROM messages LEFT JOIN
    (SELECT message_id FROM messages_read WHERE user_id = ?) AS mr
    ON id = mr.message_id WHERE mr.message_id IS NULL
Slightly tricky and I'm not sure how the performance will scale up, but it should work.
|
Fetch unread messages, by user
|
I want to maintain a list of global messages that will be displayed to all users of a web app. I want each user to be able to mark these messages as read individually. I've created 2 tables; messages (id, body) and messages_read (user_id, message_id).
Can you provide an sql statement that selects the unread messages for a single user? Or do you have any suggestions for a better way to handle this?
Thanks!
|
[
"Well, you could use\nSELECT id FROM messages m WHERE m.id NOT IN(\n SELECT message_id FROM messages_read WHERE user_id = ?)\n\nWhere ? is passed in by your app.\n",
"If the table definitions you mentioned are complete, you might want to include a date for each message, so you can order them by date.\nAlso, this might be a slightly more efficient way to do the select:\nSELECT id, message\nFROM messages\nLEFT JOIN messages_read\n ON messages_read.message_id = messages.id\n AND messages_read.[user_id] = @user_id\nWHERE\n messages_read.message_id IS NULL\n\n",
"Something like:\nSELECT id, body FROM messages LEFT JOIN\n (SELECT message_id FROM messages_read WHERE user_id = ?)\n ON id=message_id WHERE message_id IS NULL \n\nSlightly tricky and I'm not sure how the performance will scale up, but it should work.\n"
] |
[
5,
2,
0
] |
[] |
[] |
[
"database",
"mysql",
"sql"
] |
stackoverflow_0000076680_database_mysql_sql.txt
|
Q:
How to tell if a process is running on a mobile device
I have the handle of process 'A' on a Pocket PC 2003 device. I need to determine if that process is still running from process 'B'. Process 'B' is written in Embedded Visual C++ 4.0.
A:
GetExitCodeProcess will return STILL_ACTIVE if the process was running when the function was called.
A:
Process handles are waitable. They are signalled - will release any waiting thread - when the process exits. You can use them with WaitForSingleObject, WaitForMultipleObjects, etc.
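Both suggestions can be combined into one small helper. A sketch using plain Win32 calls, which exist on Windows CE as well, so it should build under Embedded Visual C++ 4.0 for Pocket PC 2003:
// Given a handle to process 'A', check from process 'B' whether it still runs.
#include <windows.h>

bool IsProcessRunning(HANDLE hProcess)
{
    // A zero-timeout wait: WAIT_TIMEOUT means the handle is not yet
    // signalled, i.e. the process is still running.
    if (WaitForSingleObject(hProcess, 0) == WAIT_TIMEOUT)
        return true;

    // Equivalent check: the exit code reads as STILL_ACTIVE (259) while the
    // process runs. Caveat: a process that exits with code 259 would fool it.
    DWORD code = 0;
    return GetExitCodeProcess(hProcess, &code) && code == STILL_ACTIVE;
}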
|
How to tell if a process is running on a mobile device
|
I have the handle of process 'A' on a Pocket PC 2003 device. I need to determine if that process is still running from process 'B'. Process 'B' is written in Embedded Visual C++ 4.0.
|
[
"GetExitCodeProcess will return STILL_ACTIVE if the process was running when the function was called.\n",
"Process handles are waitable. They are signalled - will release any waiting thread - when the process exits. You can use them with WaitForSingleObject, WaitForMultipleObjects, etc.\n"
] |
[
1,
0
] |
[] |
[] |
[
"evc",
"mobile",
"native",
"pocketpc",
"process"
] |
stackoverflow_0000076484_evc_mobile_native_pocketpc_process.txt
|
Q:
How can I get Unicode characters to display properly for the tooltip for the IMG ALT in IE7?
I've got some Japanese in the ALT attribute, but the tooltip is showing me the ugly block characters in the tooltip. The rest of the content on the page renders correctly. So far, it seems to be limited to the tooltips.
A:
This is because the font used in the tooltip doesn't include the characters you are trying to display. Try installing a font pack that includes those characters. I'm afraid you can't do much for your site's visitors other than implementing a tooltip yourself using javascript.
A:
I'm not sure about the unicode issue but if you want the tooltip effect you should be using the title attribute, not alt.
Alt is for text you want screenreaders to speak, and it's what gets displayed if an image can't be loaded.
A:
Where's your Japanese input coming from? It could be that it's in a non-unicode (e.g. http://en.wikipedia.org/wiki/JIS_X_0208) encoding, whereas your file is in unicode so the browser attempts to interpret the non-unicode characters as unicode and gets confused. I tried throwing together an example to reproduce your problem:
<img src="test.png" alt="日本語" />
The tooltip displays properly under IE7 with the Japanese language pack installed.
A:
Do note that the alt attribute isn't intended to be a tooltip. Alt is for describing the image where the image itself is not available. If you want to use tooltips, use the title attribute instead.
A:
Can you sanitize the alt text so that it doesn't have the characters in it, preferably by replacing the entire text with something useful (rather than just filtering the string)? That's not ideal, but neither is displaying broken characters, or telling your users to install a new font pack.
A:
In IE and Firefox on Win2000/WinXP/Vista, with the Japanese Language support installed from Regional Options, this just works. On Win95/98/ME, it only worked on a Japanese OS, at least with IE, because of limitations in the Windows tooltip control in non-NT systems. (Regarding other answers which guide you to the title attribute: the same behavior applied with the title attribute).
However, it's possible that font linking/font mapping won't kick in if you haven't installed the language support, or if you've just copied some font to your fonts folder. It's also possible that your default font choice for tooltips doesn't support Japanese, though GDI font-linking fallback should kick in on Win2000 or above, unless the font lies about what it supports.
The "empty square" phenomenon is typically suggestive of a font mapping problem, though it's remotely possible that the encoding is wrong.
Are your users Japanese-speakers? Does this problem occur on a system with a Japanese default system locale?
|
How can I get Unicode characters to display properly for the tooltip for the IMG ALT in IE7?
|
I've got some Japanese in the ALT attribute, but the tooltip is showing me the ugly block characters in the tooltip. The rest of the content on the page renders correctly. So far, it seems to be limited to the tooltips.
|
[
"This is because the font used in the tooltip doesn't include the characters you are trying to display. Try installing a font pack that includes those characters. I'm affraid you can't do much for your site's visitors other than implementating a tooltip yourself using javascript.\n",
"I'm not sure about the unicode issue but if you want the tooltip effect you should be using the title attribute, not alt.\nAlt is for text you want screenreaders to speak, and it's what gets displayed if an image can't be loaded.\n",
"Where's your Japanese input coming from? It could be that it's in a non-unicode (e.g. http://en.wikipedia.org/wiki/JIS_X_0208) encoding, whereas your file is in unicode so the browser attempts to interpret the non-unicode characters as unicode and gets confused. I tried throwing together an example to reproduce your problem:\n<img src=\"test.png\" alt=\"日本語\" />\n\nThe tooltip displays properly under IE7 with the Japanese language pack installed.\n",
"Do note that the alt attribute isn't intended to be a tooltip. Alt is for describing the image where the image itself is not available. If you want to use tooltips, use the title attribute instead.\n",
"Can you sanitize the alt text so that it doesn't have the characters in it, preferably by replacing the entire text with something useful (rather than just filtering the string)? That's not ideal, but neither is displaying broken characters, or telling your users to install a new font pack.\n",
"In IE and Firefox on Win2000/WinXP/Vista, with the Japanese Language support installed from Regional Options, this just works. On Win95/98/ME, it only worked on a Japanese OS, at least with IE, because of limitations in the Windows tooltip control in non-NT systems. (Regarding other answers which guide you to the title attribute: the same behavior applied with the title attribute).\nHowever, it's possible that font linking/font mapping won't kick in if you haven't installed the language support, or if you've just copied some font to your fonts folder. It's also possible that your default font choice for tooltips doesn't support Japanese, though GDI font-linking fallback should kick in on Win2000 or above, unless the font lies about what it supports.\nThe \"empty square\" phenomenon is typically suggestive of a font mapping problem, though it's remotely possible that the encoding is wrong.\nAre your users Japanese-speakers? Does this problem occur on a system with a Japanese default system locale?\n"
] |
[
5,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"internet_explorer",
"unicode"
] |
stackoverflow_0000011690_internet_explorer_unicode.txt
|
Q:
SQL Server Express 64 bit prerequisite to include in setup deployment project
Where can I obtain the SQL Server Express 64 bit prerequisite to include in a Visual Studio 2008 setup deployment project. The prerequisite that comes with Visual Studio 2008 is 32 bit only.
A:
Looks like there is no SQL Express x64 based on this post and it also says on the express download page that it runs in WOW. However, SQL Server Express 2008 does come in a x64 version.
|
SQL Server Express 64 bit prerequisite to include in setup deployment project
|
Where can I obtain the SQL Server Express 64 bit prerequisite to include in a Visual Studio 2008 setup deployment project. The prerequisite that comes with Visual Studio 2008 is 32 bit only.
|
[
"Looks like there is no SQL Express x64 based on this post and it also says on the express download page that it runs in WOW. However, SQL Server Express 2008 does come in a x64 version.\n"
] |
[
1
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000078731_sql_server.txt
|
Q:
Displaying vector graphics in a browser
I need to display some interactive (attaching DOM listeners, event handling, etc.) vector graphics on a web site I am working on. There is a W3C recommendation for SVG, though this format is still not recognized by Internet Explorer, support of which is a must (for a public website). IE handles VML though, and there are even javascript libraries that do some canvas-like drawing depending on the browser (SVG vs. VML) - excanvas, GFX of Dojo Toolkit and more. That would be nice and acceptable, though none of them can display an SVG image from the given markup.
So the question actually consists of several parts:
Are there any cross-browser Javascript libraries that display vector graphics from given markup (not necessarily SVG) and offer the ability to attach to DOM events?
If there are not, which of the most popular browser-embedded technologies would be most suitable for doing such a task? I can choose from Flex/Flash and Java applets. Silverlight is not an option because of Windows lock-in.
[EDIT] Thank you all for your comments/suggestions. Below are just some of my random notes/conclusions on this matter:
The level of interactivity I need is the ability to detect DOM events on the vector image being displayed - mouseover, mouseout, click etc. - and the ability to react to them, like changing background color, displaying a dialog etc.
The idea of sticking with the SVG format is quite appealing as it is native in many browsers except the most popular one - IE. After some experimenting with displaying dynamic SVG I realized that IE version 7 is the most problematic. There's too much hassle because of browser incompatibilities.
Cake seems like a great Javascript framework, though I could not get the examples working on IE7.
Java applets - I liked that idea the most as I thought I could use the Apache Batik library, a quality SVG renderer. However, Batik is a very big library and I cannot afford deploying an applet that weighs a few megabytes.
I decided to stick with the Flex option. I found a nice vector graphics library, Degrafa. It uses its own markup format; however, it recognizes SVG path notation, so in my case it is going to be quite easy to transform my SVGs using XSLT or just parsing them.
[EDIT 2] Some more comments appeared. I'd like to clarify that by "Windows lock-in" I mean the situation that Silverlight would normally run on Windows, more specifically, IE. I doubt it is an accepted solution (like Flash or Java Applet, for instance) on other systems. Yes, I have no doubt that one is able to launch Silverlight app on any system though I fear it would be too much effort for an average user.
@Akira: Have you had any problems with those "SVG renderers" on IE7? I get thrown Javascript errors all the time.
A:
Safari, Opera and Firefox all support SVG natively (eg. without plugins) to varying degrees of completeness and correctness, including the ability to script the svg from javascript.
There's also the canvas element which is now being standardised in html5, and is already supported in the above browsers as well (with various quirks in certain edge cases due to relatively recent changes in the html5 draft).
Unfortunately any standards based approach is kind of destroyed by IE's willful disregard of what is happening outside its own ecosystem, however there are a number of libraries that try to convert canvas/svg into VML (IE's proprietary vector language) such as iecanvas.
[Edit: whoops, i forgot my favourite js library -- Cake! which can parse and display svg in canvas, and I believe it supports IE as well]
[Yet another edit: Cake actually has a demo doing what i think you want to do]
A:
Take a look at the Raphael Javascript library. It's early days but it looks very promising.
I remember the IE roadmap that had SVG support listed in IE7.2.
Depends on how interactive you want it?
A:
Can you clarify what you mean by the "Windows lock-in" thing with Silverlight? It runs on Windows and MacIntel, and the vector portions run just fine on Linux with the Moonlight plugin.
Were you thrown off by the lack of Amiga support?
A:
Have a look at the new Canvas element which has been implemented in many browsers. I heard also that there is an ActiveX control for IE that implements the Canvas element too.
Update: Wait, you already said that. I should read the whole question next time! :)
A:
Walter Zorn has a JavaScript library for arbitrary vector graphics. It looks decent.
A:
Go for SVG - and just tell the users to get the Adobe SVG plug-in for IE.
See this excellent site - which is a UK Government Site (public service)
ELGIN
A:
IE supports VML, but nothing else does and it's ugly. Microsoft claimed that they'd dropped it (with new XAML and all) but it's still part of their Office XML format (it's how Excel .xlsx supports comments, weirdly enough).
Firefox and loads more support the new Canvas element. Many support SVG, but given the work MS are doing on Silverlight I can't see IE supporting SVG any time soon.
Microsoft are supposed to be providing Silverlight plug-ins for non-MS operating systems.
I've been using Flex - it's pretty good despite using Eclipse. You don't need to buy the hugely expensive Adobe server components to use Flex - it can consume SOAP services.
The dev tools for Flex are quite affordable, and nearly everyone has Flash.
A:
I don't think SVG is a good choice for the future. From Wikipedia:
"The most common IE plugin was produced by Adobe. Adobe, however, are planning to withdraw this product at the beginning of 2009"
"... Internet Explorer which will also not support SVG in the upcoming version IE8"
"...all have incomplete support for the SVG 1.1..."
"The Corel SVG Viewer plugin was once offered from Corel. Its development has stopped."
A:
Of all the possibilities you list, the only one that's not a horrible abuse of an existing technology (Javascript), barely supported (SVG, Canvas element) or a lot of work (Java) is Flash. It was designed as a vector graphics package and is compatible with more browsers than SVG and the canvas tag.
The only reason I wouldn't choose Flash over all other options is if you're aiming at mobile browsers or don't have the budget for the Flash package.
|
Displaying vector graphics in a browser
|
I need to display some interactive (attaching DOM listeners, event handling, etc.) vector graphics on a web site I am working on. There is a W3C recommendation for SVG, though this format is still not recognized by Internet Explorer, support of which is a must (for a public website). IE handles VML though, and there are even javascript libraries that do some canvas-like drawing depending on the browser (SVG vs. VML) - excanvas, GFX of Dojo Toolkit and more. That would be nice and acceptable, though none of them can display an SVG image from the given markup.
So the question actually consists of several parts:
Are there any cross-browser Javascript libraries that display vector graphics from given markup (not necessarily SVG) and offer the ability to attach to DOM events?
If there are not, which of the most popular browser-embedded technologies would be most suitable for doing such a task? I can choose from Flex/Flash and Java applets. Silverlight is not an option because of Windows lock-in.
[EDIT] Thank you all for your comments/suggestions. Below are just some of my random notes/conclusions on this matter:
The level of interactivity I need is the ability to detect DOM events on the vector image being displayed - mouseover, mouseout, click etc. - and the ability to react to them, like changing background color, displaying a dialog etc.
The idea of sticking with the SVG format is quite appealing as it is native in many browsers except the most popular one - IE. After some experimenting with displaying dynamic SVG I realized that IE version 7 is the most problematic. There's too much hassle because of browser incompatibilities.
Cake seems like a great Javascript framework, though I could not get the examples working on IE7.
Java applets - I liked that idea the most as I thought I could use the Apache Batik library, a quality SVG renderer. However, Batik is a very big library and I cannot afford deploying an applet that weighs a few megabytes.
I decided to stick with the Flex option. I found a nice vector graphics library, Degrafa. It uses its own markup format; however, it recognizes SVG path notation, so in my case it is going to be quite easy to transform my SVGs using XSLT or just parsing them.
[EDIT 2] Some more comments appeared. I'd like to clarify that by "Windows lock-in" I mean the situation that Silverlight would normally run on Windows, more specifically, IE. I doubt it is an accepted solution (like Flash or Java Applet, for instance) on other systems. Yes, I have no doubt that one is able to launch Silverlight app on any system though I fear it would be too much effort for an average user.
@Akira: Have you had any problems with those "SVG renderers" on IE7? I get thrown Javascript errors all the time.
|
[
"Safari, Opera and Firefox all support SVG natively (eg. without plugins) to varying degrees of completeness and correctness, including the ability to script the svg from javascript.\nThere's also the canvas element which is now being standardised in html5, and is already supported in the above browsers as well (with various quirks in certain edge cases due to relatively recent changes in the html5 draft).\nUnfortunately any standards based approach is kind of destroyed by IE's willful disregard of what is happening outside its own ecosystem, however there are a number of libraries that try to convert canvas/svg into VML (IE's proprietary vector language) such as iecanvas.\n[Edit: whoops, i forgot my favourite js library -- Cake! which can parse and display svg in canvas, and believe supports IE as well]\n[Yet another edit: Cake actually has a demo doing what i think you want to do]\n",
"Take a look at the Raphael Javascript library. It's early days but it looks very promising.\nI remember the IE roadmap that had SVG support listed in IE7.2. \nDepends on how interactive you want it?\n",
"Can you clarify what you mean by the \"Windows lock-in\" thing with Silverlight? It runs on Windows and MacIntel, and the vector portions run just fine on Linux with the Moonlight plugin.\nWere you thrown off by the lack of Amiga support?\n",
"Have a look at the new Canvas element which has been implemented in many browsers. I heard also that there is an ActiveX control for IE that implements the Canvas element too.\nUpdate: Wait, you already said that. I should read the whole question next time! :)\n",
"Walter Zorn has a JavaScript library for arbitrary vector graphics. It looks decent.\n",
"Go for SVG - and just tell the users to get the ADOBE SVG plug in for IE.\nSee this excellent site - which is a UK Government Site (public service)\nELGIN\n",
"IE supports VML, but nothing else does and it's ugly. Microsoft claimed that they'd dropped it (with new XAML and all) but it's still part of their Office XML format (it's how Excel .xlsx supports comments, weirdly enough).\nFX and loads more support the new Canvas element. Many support SVG, but given the work MS are doing on Silverlight I can't see IE supporting SVG any time soon.\nMicrosoft are supposed to be providing Silverlight plug ins for no MS operating systems.\nI've been using Flex - it's pretty good despite using Eclipse. You don't need to buy the hugely expensive Adobe server components to use Flex - it can consume SOAP services.\nThe dev tools for Flex are quite affordable, and nearly everyone has Flash.\n",
"I don't think SVG is a good choice for the future. From Wikipedia:\n\n\"The most common IE plugin was produced by Adobe. Adobe, however, are planning to withdraw this product at the beginning of 2009\"\n\"... Internet Explorer which will also not support SVG in the upcoming version IE8\"\n\"...all have incomplete support for the SVG 1.1...\"\n\"The Corel SVG Viewer plugin was once offered from Corel. Its development has stopped.\"\n\n",
"Of all the possibilities you list, the only one that's not a horrible abuse of an existing technology (Javascript), barely supported (SVG, Canvas element) or a lot of work (Java) is Flash. It was designed as a vector graphics package and is compatible with more browsers than SVG and the canvas tag.\nThe only reason I wouldn't choose Flash over all other options is if you're aiming at mobile browsers or don't have the budget for the Flash package.\n"
] |
[
9,
6,
2,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"cross_browser",
"svg",
"vector_graphics",
"vml"
] |
stackoverflow_0000009615_cross_browser_svg_vector_graphics_vml.txt
|
Q:
Custom events in C++?
Is it possible to create custom events in C++? For example, say I have the variable X, and the variable Y. Whenever X changes, I would like to execute a function that sets Y equal to 3X. Is there a way to create such a trigger/event? (triggers are common in some databases)
A:
This is basically an instance of the Observer pattern (as others have mentioned and linked). However, you can use template magic to render it a little more syntactically palatable. Consider something like...
#include <functional>
#include <vector>

template <typename T>
class Observable
{
    T underlying;
    std::vector<std::function<void(const Observable<T>&)>> observers;

public:
    Observable<T>& operator=(const T &rhs) {
        underlying = rhs;
        fireObservers();
        return *this;
    }
    operator T() const { return underlying; }

    // Register an event handler to run after every assignment
    void addObserver(std::function<void(const Observable<T>&)> obs) { observers.push_back(obs); }
    void fireObservers() {
        // Pass every event handler a const & to this instance
        for (const auto &obs : observers) obs(*this);
    }
};
Then you can write...
Observable<int> x;
x.addObserver(...);
x = 5;
int y = x;
What method you use to write your observer callback functions is entirely up to you; the sketch above uses std::function, and on pre-C++11 compilers http://www.boost.org's function or functional modules (or simple functors) do the same job. I also caution you to be careful about this type of operator overloading. Whilst it can make certain coding styles clearer, reckless use can render something like
seemsLikeAnIntToMe = 10;
a very expensive operation that might well explode and cause debugging nightmares for years to come.
A:
Boost signals is another commonly used library you might come across to do Observer Pattern (aka Publish-Subscribe). Buyer beware here, I've heard its performance is terrible.
A:
I think you should read a little about Design Patterns, specifically the Observer Pattern.
Qt from Trolltech has implemented a nice solution it calls Signals and Slots.
A:
Use the Observer pattern
code project example
wiki page
A:
As far as I am aware you can't do it with plain variables; however, if you wrote a class that took a callback function, you could let other classes register to be notified of any changes.
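A minimal sketch of that callback idea, assuming a single std::function handler (all names here are illustrative, not from the thread):
#include <functional>

class WatchedInt {
    int value = 0;
    std::function<void(int)> on_change; // invoked with the new value on every set()
public:
    void set_callback(std::function<void(int)> cb) { on_change = std::move(cb); }
    void set(int v) {
        value = v;
        if (on_change) on_change(value); // notify the registered callback
    }
    int get() const { return value; }
};

// Usage, mirroring the question: keep y equal to 3X.
//   WatchedInt x; int y = 0;
//   x.set_callback([&y](int v) { y = 3 * v; });
//   x.set(5); // y is now 15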
|
Custom events in C++?
|
Is it possible to create custom events in C++? For example, say I have the variable X, and the variable Y. Whenever X changes, I would like to execute a function that sets Y equal to 3X. Is there a way to create such a trigger/event? (triggers are common in some databases)
|
[
"This is basically an instance of the Observer pattern (as others have mentioned and linked). However, you can use template magic to render it a little more syntactically palettable. Consider something like...\ntemplate <typename T>\nclass Observable\n{\n T underlying;\n\npublic:\n Observable<T>& operator=(const T &rhs) {\n underlying = rhs;\n fireObservers();\n\n return *this;\n }\n operator T() { return underlying; }\n\n void addObserver(ObsType obs) { ... }\n void fireObservers() { /* Pass every event handler a const & to this instance /* }\n};\n\nThen you can write...\nObservable<int> x;\nx.registerObserver(...);\n\nx = 5;\nint y = x;\n\nWhat method you use to write your observer callback functions are entirely up to you; I suggest http://www.boost.org's function or functional modules (you can also use simple functors). I also caution you to be careful about this type of operator overloading. Whilst it can make certain coding styles clearer, reckless use an render something like\nseemsLikeAnIntToMe = 10;\na very expensive operation, that might well explode, and cause debugging nightmares for years to come.\n",
"Boost signals is another commonly used library you might come across to do Observer Pattern (aka Publish-Subscribe). Buyer beware here, I've heard its performance is terrible.\n",
"Think you should read a little about Design Patterns, specifically the Observer Pattern.\nQt from Trolltech have implemented a nice solutions they call Signals and Slots.\n",
"Use the Observer pattern\ncode project example\nwiki page\n",
"As far as I am aware you can't do it with default variables, however if you wrote a class that took a callback function you could let other classes register that they want to be notified of any changes.\n"
] |
[
10,
3,
1,
1,
1
] |
[] |
[] |
[
"c++",
"design_patterns"
] |
stackoverflow_0000077996_c++_design_patterns.txt
|
Q:
How to iterate over all the page breaks in an Excel 2003 worksheet via COM
I've been trying to retrieve the locations of all the page breaks on a given Excel 2003 worksheet over COM. Here's an example of the kind of thing I'm trying to do:
Excel::HPageBreaksPtr pHPageBreaks = pSheet->GetHPageBreaks();
long count = pHPageBreaks->Count;
for (long i=0; i < count; ++i)
{
Excel::HPageBreakPtr pHPageBreak = pHPageBreaks->GetItem(i+1);
Excel::RangePtr pLocation = pHPageBreak->GetLocation();
printf("Page break at row %d\n", pLocation->Row);
pLocation.Release();
pHPageBreak.Release();
}
pHPageBreaks.Release();
I expect this to print out the row numbers of each of the horizontal page breaks in pSheet. The problem I'm having is that although count correctly indicates the number of page breaks in the worksheet, I can only ever seem to retrieve the first one. On the second run through the loop, calling pHPageBreaks->GetItem(i) throws an exception, with error number 0x8002000b, "invalid index".
Attempting to use pHPageBreaks->Get_NewEnum() to get an enumerator to iterate over the collection also fails with the same error, immediately on the call to Get_NewEnum().
I've looked around for a solution, and the closest thing I've found so far is http://support.microsoft.com/kb/210663/en-us. I have tried activating various cells beyond the page breaks, including the cells just beyond the range to be printed, as well as the lower-right cell (IV65536), but it didn't help.
If somebody can tell me how to get Excel to return the locations of all of the page breaks in a sheet, that would be awesome!
Thank you.
@Joel: Yes, I have tried displaying the user interface, and then setting ScreenUpdating to true - it produced the same results. Also, I have since tried combinations of setting pSheet->PrintArea to the entire worksheet and/or calling pSheet->ResetAllPageBreaks() before my call to get the HPageBreaks collection, which didn't help either.
@Joel: I've used pSheet->UsedRange to determine the row to scroll past, and Excel does scroll past all the horizontal breaks, but I'm still having the same issue when I try to access the second one. Unfortunately, switching to Excel 2007 did not help either.
A:
Experimenting with Excel 2007 from Visual Basic, I discovered that the page break isn't known unless it has been displayed on the screen at least once.
The best workaround I could find was to page down, from the top of the sheet to the last row containing data. Then you can enumerate all the page breaks.
Here's the VBA code... let me know if you have any problem converting this to COM:
Range("A1").Select
numRows = Range("A1").End(xlDown).Row
While ActiveWindow.ScrollRow < numRows
ActiveWindow.LargeScroll Down:=1
Wend
For Each x In ActiveSheet.HPageBreaks
Debug.Print x.Location.Row
Next
This code made one simplifying assumption:
I used the .End(xlDown) method to figure out how far the data goes... this assumes that you have continuous data from A1 down to the bottom of the sheet. If you don't, you need to use some other method to figure out how far to keep scrolling.
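If the data is not contiguous, one hedged alternative is to scroll past the sheet's UsedRange instead (an untested sketch, consistent with the UsedRange approach the asker mentions):
numRows = ActiveSheet.UsedRange.Row + ActiveSheet.UsedRange.Rows.Count - 1
This computes the last row touched by any data or formatting, which is normally a safe upper bound on how far to keep scrolling.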
A:
Did you set ScreenUpdating to True, as mentioned in the KB article?
You may want to actually toggle it to True to force a screen repaint. It sounds like the calculation of page breaks is a side-effect of actually rendering the page, rather than something Excel does on demand, so you have to trigger a page rendering on the screen.
|
How to iterate over all the page breaks in an Excel 2003 worksheet via COM
|
I've been trying to retrieve the locations of all the page breaks on a given Excel 2003 worksheet over COM. Here's an example of the kind of thing I'm trying to do:
Excel::HPageBreaksPtr pHPageBreaks = pSheet->GetHPageBreaks();
long count = pHPageBreaks->Count;
for (long i=0; i < count; ++i)
{
Excel::HPageBreakPtr pHPageBreak = pHPageBreaks->GetItem(i+1);
Excel::RangePtr pLocation = pHPageBreak->GetLocation();
printf("Page break at row %d\n", pLocation->Row);
pLocation.Release();
pHPageBreak.Release();
}
pHPageBreaks.Release();
I expect this to print out the row numbers of each of the horizontal page breaks in pSheet. The problem I'm having is that although count correctly indicates the number of page breaks in the worksheet, I can only ever seem to retrieve the first one. On the second run through the loop, calling pHPageBreaks->GetItem(i) throws an exception, with error number 0x8002000b, "invalid index".
Attempting to use pHPageBreaks->Get_NewEnum() to get an enumerator to iterate over the collection also fails with the same error, immediately on the call to Get_NewEnum().
I've looked around for a solution, and the closest thing I've found so far is http://support.microsoft.com/kb/210663/en-us. I have tried activating various cells beyond the page breaks, including the cells just beyond the range to be printed, as well as the lower-right cell (IV65536), but it didn't help.
If somebody can tell me how to get Excel to return the locations of all of the page breaks in a sheet, that would be awesome!
Thank you.
@Joel: Yes, I have tried displaying the user interface, and then setting ScreenUpdating to true - it produced the same results. Also, I have since tried combinations of setting pSheet->PrintArea to the entire worksheet and/or calling pSheet->ResetAllPageBreaks() before my call to get the HPageBreaks collection, which didn't help either.
@Joel: I've used pSheet->UsedRange to determine the row to scroll past, and Excel does scroll past all the horizontal breaks, but I'm still having the same issue when I try to access the second one. Unfortunately, switching to Excel 2007 did not help either.
|
[
"Experimenting with Excel 2007 from Visual Basic, I discovered that the page break isn't known unless it has been displayed on the screen at least once.\nThe best workaround I could find was to page down, from the top of the sheet to the last row containing data. Then you can enumerate all the page breaks.\nHere's the VBA code... let me know if you have any problem converting this to COM:\nRange(\"A1\").Select\nnumRows = Range(\"A1\").End(xlDown).Row\n\nWhile ActiveWindow.ScrollRow < numRows\n ActiveWindow.LargeScroll Down:=1\nWend\n\nFor Each x In ActiveSheet.HPageBreaks\n Debug.Print x.Location.Row\nNext\n\nThis code made one simplifying assumption:\n\nI used the .End(xlDown) method to figure out how far the data goes... this assumes that you have continuous data from A1 down to the bottom of the sheet. If you don't, you need to use some other method to figure out how far to keep scrolling.\n\n",
"Did you set ScreenUpdating to True, as mentioned in the KB article?\nYou may want to actually toggle it to True to force a screen repaint. It sounds like the calculation of page breaks is a side-effect of actually rendering the page, rather than something Excel does on demand, so you have to trigger a page rendering on the screen.\n"
] |
[
2,
0
] |
[] |
[] |
[
"c++",
"com",
"excel",
"vba"
] |
stackoverflow_0000078053_c++_com_excel_vba.txt
|
Q:
OpenID Migration
I'm curious about OpenID. While I agree that the idea of unified credentials is great, I have a few reservations. What is to prevent an OpenID provider from going crazy and holding the OpenID accounts they have hostage until you pay $n? If I decide I don't like the provider I'm with, is there a way to migrate to a different provider without losing all my information at various sites?
Edit: I feel like my question is being misunderstood. It has been said that I can simply create a delegation, and this is partially true. I can do this if I haven't already created an account at, for example, SO. If I decide to set up my own OpenID provider at some point, there is no way that I can see to move and keep my account information. That is the sort of thing I was wondering about.
Second Edit:
I see that there is a uservoice about adding this to SO. Link
A:
This is why you can use OpenID delegation, i.e. you set up two <link> tags in the head of your personal website and then you can use that site's URL as an alias for your current OpenID provider of choice. Should it get unfriendly you just switch to another and update your tags.
Additionally you can always operate your own OpenID identity provider (if you have a server with, for example, a web server and PHP on it). I use phpMyID for this.
Update: regarding the updated question: OpenID consumers (sites where you log in using OpenID) may allow you to switch the OpenID used for sign-on at their discretion. Sourceforge, for example, does. To prevent problems it's best to use delegation right from the start. Otherwise this is a necessary limitation imposed by OpenID's design.
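For reference, a minimal sketch of what the delegation markup might look like, placed in the page's <head> (the URLs are placeholders; your provider documents the real endpoint, and OpenID 2.0 relying parties look for openid2.provider / openid2.local_id instead):
<link rel="openid.server" href="https://openid.example-provider.com/server" />
<link rel="openid.delegate" href="https://www.example.com/" />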
A:
It's an OpenID relying party best practice to allow multiple OpenIDs to be associated with a single account.
It's also an OpenID relying party best practice to allow people to recover their accounts without access to their old OpenID.
If Stack Overflow doesn't do these things, then this is a shortcoming of Stack Overflow, not OpenID.
A:
Nothing prevents the provider from holding your account to ransom. You should pick a provider that you know to be reliable. Or, if you trust nobody but yourself, you can be your own provider:
http://wiki.openid.net/Run_your_own_identity_server
A:
There's no way to stop Google from holding my gmail inbox hostage until I pay them $n. It's a trust thing, I guess.
A:
This may help:
OpenID
A:
I think you might be mixing up free-market providers with governments. The latter abuse their power because you have nobody else to go to (try to get an "alternative" passport). Since the OpenID providers have competition, you can always leave one provider and go to another.
A:
A site that implements OpenID authentication in a good way would allow you to switch your ID to another URL or to specify a secondary ID in case your primary provider happens to be down.
Currently, most sites still don't have this option, and yes -- if our OpenID providers deleted our accounts one day, we'd have trouble getting to our accounts on some sites. We trust them not to deny us the service.
|
OpenID Migration
|
I'm curious about OpenID. While I agree that the idea of unified credentials is great, I have a few reservations. What is to prevent an OpenID provider from going crazy and holding the OpenID accounts they have hostage until you pay $n? If I decide I don't like the provider I'm with this there a way to migrate to a different provider with out losing all my information at various sites?
Edit: I feel like my question is being misunderstood. It has been said that I can simple create a delegation and this is partially true. I can do this if I haven't already created an account at, for example, SO. If I decide to set up my own OpenID provider at some point, there is no way that I can see to move and keep my account information. That is the sort of think I was wondering about.
Second Edit:
I see that there is a uservoice about adding this to SO. Link
|
[
"This is why you can use OpenID delegation, i.e. you set up two META tags on your personal website and then you can use that site's URL as an alias for your current OpenID provider of choice. Should it get unfriendly you just switch to another and update your tags.\nAdditionally you can always operate your own OpenID identity provider (if you have a server with, for example, a web server and PHP on it). I use phpMyID for this.\nUpdate: regarding the updated question: OpenID consumers (sites where you log in using OpenID) may allow you to switch the OpenID used for sign-on at their discretion. Sourceforge, for example, does. To prevent problems it's best to use delegation right from the start. Otherwise this is a necessary limitation imposed by OpenID's design.\n",
"It's an OpenID relying party best practice to allow multiple OpenIDs to be associated with a single account.\nIt's also an OpenID relying party best practice to allow people to recover their accounts without access to their old OpenID.\nIf Stack Overflow doesn't do these things, then this is a shortcoming of Stack Overflow, not OpenID.\n",
"Nothing prevents the provider from holding your account to ransom. You should pick a provider that you know to be reliable. Or, if you trust nobody but yourself, you can be your own provider:\nhttp://wiki.openid.net/Run_your_own_identity_server\n",
"There's no way to stop Google from holding my gmail inbox hostage until I pay them $n. It's a trust thing, I guess.\n",
"This may help:\nOpenID\n",
"I think you might be mixing free-market providers with governments. Latter abuse their power because you got nobody else to go to (try to get an \"alternative\" passport). Since the OpenID prividers have competition, you can always leave one provider and go to another.\n",
"A site that implements OpenID authentication in a good way would allow you to switch your ID to another URL or to specify a secondary ID in cases when your primary provider happens to be down.\nCurrently, most sites still don't have this option, and yes -- if our OpenID providers would delete our accounts one day, we'd have trouble getting to our accounts on some sites. We trust them in not denying us the service.\n"
] |
[
15,
6,
5,
3,
2,
0,
0
] |
[] |
[] |
[
"openid"
] |
stackoverflow_0000078523_openid.txt
|
Q:
Replicating between SQL Server 2005 and SQL Server Compact Edition
Can it be done and if so, how?
A:
You can also check out Sync Services for Sql Server and Compact edition. The benefit of Sync Services is that you don't need a replication server or IIS and you can also sync between compact edition databases. This method involves writing a fair bit more code and is fairly involved, but I'd recommend looking into it as a lightweight service.
A:
You can use Merge Replication. There's a tutorial here: SQL Server Compact 3.5 How-to Tutorials (Number 5).
A:
Certainly replication is possible, as is Sync Services if you're not afraid to get your hands dirty. It depends on the details of what you need:
Sometimes-connected application wanting to have a read-only cache: Sync Services
Sometimes-connected application wanting to have part or full update ability: Sync Services
Remote site with multiple workstations needing read/write access to data: replication if you can get a secure network connection that's stable enough, otherwise look at extending Sync Services to work with SQL Express (or full SQL Server) based on the sample here: Sync using SQL Express
If you just want a SQL CE database and you're working with a SQL 2008 server, then the wizard in Visual Studio 2008 SP1 will do all the work for you; you need only add 1 line of code to it if you want bi-directional support. If you can't upgrade then it will take more work with SQL 2005, and it's only reliable if you have at least SP2.
I'm in the middle of a project that requires multiple sites to have a sub-set of data in an environment where each site may lose its connection to the head office at times. We've managed to get Sync Services to work with SQL 2008 at the head office and SQL Express 2008 at each site with full change tracking (a 2008 feature) and it's working great. It does require a reasonable amount of code (C# and SQL), so we've used some pretty smart templates to help.
Perhaps you could refine your question with more details?
A:
Because of budget constraints I think it will have to be a beta-tester's approach. I tried following the guide and can't seem to get it working. Before I spend more time getting it to work, I just want to confirm: replicating between SQL Server 2005 and Compact Edition is something that can be done?
A:
I just want to confirm: replicating between
SQL Server 2005 and Compact Edition is
something that can be done?
Yes, it can definitely be done using either Merge Replication or Sync Services
|
Replicating between SQL Server 2005 and SQL Server Compact Edition
|
Can it be done and if so, how?
|
[
"You can also check out Sync Services for Sql Server and Compact edition. The benefit of Sync Services is that you don't need a replication server or IIS and you can also sync between compact edition databases. This method involves writing a fair bit more code and is fairly involved, but I'd recommend looking into it as a lightweight service.\n",
"You can use Merge Replication. Theres a tutorial here SQL Server Compact 3.5 How-to Tutorials (Number 5).\n",
"Certainly replication is possible, as is Sync Services if you're not afraid to get your hands dirty. It depends on the details of what you need:\n\nSometimes-connected application wanting to have a read-only cache: Sync Services\nSometimes-connected application wanting to have part or full update ability: Sync Services\nRemote site with multiple workstations needing read/write access to data: replication if you can get a secure network connection that's stable enough, otherwise look at extending Syn Services to work with SQL Express (or full SQL Server) based on the sample here: Sync using SQL Express\n\nIf you just want a SQL CE database and you're working with a SQL 2008 server then the wizard in Visual Studio 2008 SP1 will do all the work for you, you need only add 1 line of code to it if you want bi-directional support. If you can't upgrade then it will take more work with SQL 2005, and it's only reliable if you have at least SP2.\nI'm in the middle of a project that requires multiple sites to have a sub-set of data in an environment where each site may lose it's connection to the head office at times, we've managed to get Sync Services to work with SQL 2008 at the head office and SQL Express 2008 at each site with full change tracking (2008 feature) and it's working great. It does require a reasonable amount of code (C# and SQL), so we've used some pretty smart templates to help. Be aware that.\nPerhaps you could refine your question with more details?\n",
"Because of budget constraints I think it will have to beta-tester's approch,i tried following the guide and cant seem to get it working. Before I spend time getting it to work, I just confrim, Replicating between SqlServer 2005 and Compact Edition is something that can be done?\n",
"\nI just confrim, Replicating between\n SqlServer 2005 and Compact Edition is\n something that can be done?\n\nYes it can definately be done using either Merge Replication or Sync Services\n"
] |
[
4,
2,
2,
0,
0
] |
[] |
[] |
[
"sql_server"
] |
stackoverflow_0000007676_sql_server.txt
|
Q:
Report templates for Team Foundation Server 2008
Does anybody have links to sites or pre-built reports running on the SQL Analysis Services database provided by TFS 2008?
Creating a meaningful Excel report or a new report is sometimes a very boring and difficult task; maybe finding a way to share reports could be a good idea?
A:
Try this download: Creating and Customizing TFS Reports, it includes a few samples and some guidance. More here.
Also try the TFS Reporting Samples.zip linked from this site.
This site links to a large number of TFS reporting resources:
http://blogs.msdn.com/teams_wit_tools/archive/2007/03/26/tfs-report-developer-resources.aspx
A:
If you are using SCRUM, this has Sprint reports and Product Burndown reports:
http://www.scrumforteamsystem.com/en/default.aspx
|
Report templates for Team Foundation Server 2008
|
Does anybody have links to site or pre built report running on the SQL Analysis Service provided by TFS2008?
Creating a meaningful Excel report or a new report sometime is a very boring and difficult taks, maybe finding a way to share reports could be a good idea?
|
[
"Try this download: Creating and Customizing TFS Reports, it includes a few samples and some guidance. More here.\nAlso try the TFS Reporting Samples.zip linked from this site. \nThis site links to a large number of TFS reporting resources:\nhttp://blogs.msdn.com/teams_wit_tools/archive/2007/03/26/tfs-report-developer-resources.aspx \n",
"If you are using SCRUM, this has Sprint reports and Product Burndown reports:\nhttp://www.scrumforteamsystem.com/en/default.aspx\n"
] |
[
4,
0
] |
[] |
[] |
[
"tfs"
] |
stackoverflow_0000073072_tfs.txt
|
Q:
Redirect command to input of another in Python
I would like to replicate this in python:
gvimdiff <(hg cat file.txt) file.txt
(hg cat file.txt outputs the most recently committed version of file.txt)
I know how to pipe the file to gvimdiff, but it won't accept another file:
$ hg cat file.txt | gvimdiff file.txt -
Too many edit arguments: "-"
Getting to the python part...
# hgdiff.py
import subprocess
import sys
file = sys.argv[1]
subprocess.call(["gvimdiff", "<(hg cat %s)" % file, file])
When subprocess is called it merely passes <(hg cat file) onto gvimdiff as a filename.
So, is there any way to redirect a command as bash does?
For simplicity's sake just cat a file and redirect it to diff:
diff <(cat file.txt) file.txt
A:
It can be done. As of Python 2.5, however, this mechanism is Linux-specific and not portable:
import subprocess
import sys
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen([
'gvimdiff',
'/proc/self/fd/%s' % p1.stdout.fileno(),
file])
p2.wait()
That said, in the specific case of diff, you can simply take one of the files from stdin, and remove the need to use the bash-alike functionality in question:
file = sys.argv[1]
p1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)
p2 = subprocess.Popen(['diff', '-', file], stdin=p1.stdout)
diff_text = p2.communicate()[0]
A:
There is also the commands module:
import commands

status, output = commands.getstatusoutput("gvimdiff <(hg cat file.txt) file.txt")
Note that commands runs its command through /bin/sh, and process substitution (<(...)) is a bash feature, so this only works on systems where /bin/sh is actually bash.
There is also the popen set of functions, if you want to actually grok the data from a command as it is running.
A:
This is actually an example in the docs:
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
which means for you:
from subprocess import Popen, PIPE
import sys

file = sys.argv[1]
p1 = Popen(["hg", "cat", file], stdout=PIPE)
p2 = Popen(["gvimdiff", file, "-"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
This removes the use of the linux-specific /proc/self/fd bits, making it probably work on other unices like Solaris and the BSDs (including MacOS) and maybe even work on Windows.
|
Redirect command to input of another in Python
|
I would like to replicate this in python:
gvimdiff <(hg cat file.txt) file.txt
(hg cat file.txt outputs the most recently committed version of file.txt)
I know how to pipe the file to gvimdiff, but it won't accept another file:
$ hg cat file.txt | gvimdiff file.txt -
Too many edit arguments: "-"
Getting to the python part...
# hgdiff.py
import subprocess
import sys
file = sys.argv[1]
subprocess.call(["gvimdiff", "<(hg cat %s)" % file, file])
When subprocess is called it merely passes <(hg cat file) onto gvimdiff as a filename.
So, is there any way to redirect a command as bash does?
For simplicity's sake just cat a file and redirect it to diff:
diff <(cat file.txt) file.txt
|
[
"It can be done. As of Python 2.5, however, this mechanism is Linux-specific and not portable:\nimport subprocess\nimport sys\n\nfile = sys.argv[1]\np1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)\np2 = subprocess.Popen([\n 'gvimdiff',\n '/proc/self/fd/%s' % p1.stdout.fileno(),\n file])\np2.wait()\n\nThat said, in the specific case of diff, you can simply take one of the files from stdin, and remove the need to use the bash-alike functionality in question:\nfile = sys.argv[1]\np1 = subprocess.Popen(['hg', 'cat', file], stdout=subprocess.PIPE)\np2 = subprocess.Popen(['diff', '-', file], stdin=p1.stdout)\ndiff_text = p2.communicate()[0]\n\n",
"There is also the commands module:\nimport commands\n\nstatus, output = commands.getstatusoutput(\"gvimdiff <(hg cat file.txt) file.txt\")\n\nThere is also the popen set of functions, if you want to actually grok the data from a command as it is running.\n",
"This is actually an example in the docs:\np1 = Popen([\"dmesg\"], stdout=PIPE)\np2 = Popen([\"grep\", \"hda\"], stdin=p1.stdout, stdout=PIPE)\noutput = p2.communicate()[0]\n\nwhich means for you:\nimport subprocess\nimport sys\n\nfile = sys.argv[1]\np1 = Popen([\"hg\", \"cat\", file], stdout=PIPE)\np2 = Popen([\"gvimdiff\", \"file.txt\"], stdin=p1.stdout, stdout=PIPE)\noutput = p2.communicate()[0]\n\nThis removes the use of the linux-specific /proc/self/fd bits, making it probably work on other unices like Solaris and the BSDs (including MacOS) and maybe even work on Windows.\n"
] |
[
10,
2,
2
] |
[
"It just dawned on me that you are probably looking for one of the popen functions.\nfrom: http://docs.python.org/lib/module-popen2.html\npopen3(cmd[, bufsize[, mode]])\n Executes cmd as a sub-process. Returns the file objects (child_stdout, child_stdin, child_stderr). \nnamaste,\nMark\n"
] |
[
-1
] |
[
"bash",
"diff",
"python",
"redirect",
"vimdiff"
] |
stackoverflow_0000078431_bash_diff_python_redirect_vimdiff.txt
|
Q:
Mongrel hangs with 100% CPU / EBADF (Bad file descriptor)
We have a server with 10 running mongrel_cluster instances with apache
in front of them, and every now and then one or some of them hang.
No activity is seen in the database (we're using activerecord sessions).
Mysql with innodb tables. show innodb status shows no locks. show
processlist shows nothing.
The server is linux debian 4.0
Ruby is: ruby 1.8.6 (2008-03-03 patchlevel 114) [i486-linux]
Rails is: Rails 1.1.2 (yes, quite old)
We're using the native mysql connector (gem install mysql)
"strace -p PID" gives the following in a loop for the hung mongrel
process:
gettimeofday({1219834026, 235289}, NULL) = 0
select(4, [3], [0], [], {0, 905241}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235477}, NULL) = 0
select(4, [3], [0], [], {0, 905053}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235654}, NULL) = 0
select(4, [3], [0], [], {0, 904875}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235829}, NULL) = 0
select(4, [3], [0], [], {0, 904700}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236017}, NULL) = 0
select(4, [3], [0], [], {0, 904513}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236192}, NULL) = 0
select(4, [3], [0], [], {0, 904338}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236367}, NULL) = 0
...
I used lsof and found that the process used 67 file descriptors (lsof -p
PID |wc -l)
Is there any other way I can debug this, so that I could for example
determine which file descriptor is "bad"?
Any other info or suggestions? Anybody else seen this?
The site is fairly used, but not overly so, load averages usually around
0.3.
Some additional info. I installed mongrelproctitle to show what the
hung processes were doing, and it seems they are hanging on a method
that displays images using file_column / images from the database /
rmagick to resize and make the images greyscale.
It's not conclusive that the problem is here, but it is a suspicion.
Is there something obviously wrong with the following? The method
displays a static image if the order doesn't contain an image, else the
image resized from the order. The cache stuff is so that the image gets
updated in the browser every time. The image is inserted in the page
with a normal image tag.
code:
def preview_image
@order = session[:order]
if @order.image.nil?
@headers['Pragma'] = 'no-cache'
@headers['Cache-Control'] = 'no-cache, must-revalidate'
send_data(EMPTY_PIC.to_blob, :filename => "img.jpg", :type =>
"image/jpeg", :disposition => "inline")
else
@pic = Image.read(@order.image)[0]
if (@order.crop)
@pic.crop!(@order.crop[:x1].to_i, @order.crop[:y1].to_i,
@order.crop[:width].to_i, @order.crop[:height].to_i, true)
end
@pic.resize!(103,130)
@pic = @pic.quantize(256, Magick::GRAYColorspace)
@headers['Pragma'] = 'no-cache'
@headers['Cache-Control'] = 'no-cache, must-revalidate'
send_data(@pic.to_blob, :filename => "img.jpg", :type =>
"image/jpeg", :disposition => "inline")
end
end
Here is the lsof output if anybody can find any problems in it. Don't
know how it will format in this message...
lsof: WARNING: can't stat() ext3 file system /dev/.static/dev
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
mongrel_r 11628 username cwd DIR 9,2 4096 1870688
/home/domains/example.com/usernameOrder/releases/20080831121802
mongrel_r 11628 username rtd DIR 9,1 4096 2 /
mongrel_r 11628 username txt REG 9,1 3564 167172
/usr/bin/ruby1.8
mongrel_r 11628 username mem REG 0,0 0
[heap] (stat: No such file or directory)
mongrel_r 11628 username DEL REG 0,8 15560245
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560242
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560602
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560601
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560684
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560683
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560685
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560568
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560607
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560569
/dev/zero
mongrel_r 11628 username mem REG 9,1 1933648 456972
/usr/lib/libmysqlclient.so.15.0.0
mongrel_r 11628 username DEL REG 0,8 15442414
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560546
/dev/zero
mongrel_r 11628 username mem REG 9,1 67408 457393
/lib/i686/cmov/libresolv-2.7.so
mongrel_r 11628 username mem REG 9,1 17884 457386
/lib/i686/cmov/libnss_dns-2.7.so
mongrel_r 11628 username DEL REG 0,8 15560541
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560246
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560693
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560608
/dev/zero
mongrel_r 11628 username mem REG 9,1 25700 164963
/usr/lib/gconv/gconv-modules.cache
mongrel_r 11628 username mem REG 9,1 83708 457384
/lib/i686/cmov/libnsl-2.7.so
mongrel_r 11628 username mem REG 9,1 140602 506903
/var/lib/gems/1.8/gems/mysql-2.7/lib/mysql.so
mongrel_r 11628 username mem REG 9,1 1282816 180935
...
mongrel_r 11628 username 1w REG 9,2 462923 1575329
/home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 2w REG 9,2 462923 1575329
/home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 3u IPv4 15442350 TCP
localhost:8001 (LISTEN)
mongrel_r 11628 username 4w REG 9,2 118943548 1575355
/home/domains/example.com/usernameOrder/shared/log/production.log
mongrel_r 11628 username 5u REG 9,1 145306 234226
/tmp/mongrel.11628.0 (deleted)
mongrel_r 11628 username 7u unix 0xc3c12480 15442417
socket
mongrel_r 11628 username 11u REG 9,1 50 234180
/tmp/CGI.11628.2
mongrel_r 11628 username 12u REG 9,1 26228 234227
/tmp/CGI.11628.3
I have installed monit to monitor the server. No automatic restarts yet because of the PID file issue, but maybe I will get the newest version which supports deleting stale PID-files.
It would be nice though to actually fix the problem, because somebody will get disconnects etc. if the server needs to be restarted all the time (~10 times a day)
The mongrel-processes don't take any large amount of memory when this is happening, and the machine isn't even swapping, so it's probably not a memory leak.
total used free shared buffers cached
Mem: 4152796 4083000 69796 0 616624 2613364
-/+ buffers/cache: 853012 3299784
Swap: 1999992 52 1999940
A:
Consider using ImageScience; RMagick is known to leak massive amounts of memory and to lock up.
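A hedged sketch of the resize with ImageScience (filenames are placeholders; note ImageScience has no equivalent of RMagick's quantize, so the grayscale step would still need another tool):
require 'image_science'

ImageScience.with_image("order_image.jpg") do |img|
  img.resize(103, 130) do |small|   # resize to the preview dimensions
    small.save("preview.jpg")       # write the result back out
  end
end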
A:
Chapter 6.3 in the book Deploying Rails Applications (A Step by Step Guide) has a good section on installing and configuring the Monitoring utility Monit on Linux and using it to monitor your mongrels. It can restart your mongrels when they fail.
Older versions of Mongrel had trouble re-starting because of a duplicate PID file existing on disk. Newer versions support the --clean option that will get rid of the leftover PID files, if they exist. So you have to upgrade Mongrel to a version that supports --clean to get around the stale PID file issue, Monit alone can't do this.
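As a rough illustration, a monit rule for one mongrel instance might look something like this (paths, ports and thresholds are placeholders, not taken from the question's setup):
check process mongrel_8001 with pidfile /var/run/mongrel.8001.pid
  start program = "/usr/bin/mongrel_rails start -d -e production -p 8001 -P /var/run/mongrel.8001.pid"
  stop program  = "/usr/bin/mongrel_rails stop -P /var/run/mongrel.8001.pid"
  if cpu usage > 90% for 5 cycles then restart
The cpu rule is what would catch the 100% CPU hang described in the question.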
|
Mongrel hangs with 100% CPU / EBADF (Bad file descriptor)
|
We have a server with 10 running mongrel_cluster instances with apache
in front of them, and every now and then one or some of them hang.
No activity is seen in the database (we're using activerecord sessions).
Mysql with innodb tables. show innodb status shows no locks. show
processlist shows nothing.
The server is linux debian 4.0
Ruby is: ruby 1.8.6 (2008-03-03 patchlevel 114) [i486-linux]
Rails is: Rails 1.1.2 (yes, quite old)
We're using the native mysql connector (gem install mysql)
"strace -p PID" gives the following in a loop for the hung mongrel
process:
gettimeofday({1219834026, 235289}, NULL) = 0
select(4, [3], [0], [], {0, 905241}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235477}, NULL) = 0
select(4, [3], [0], [], {0, 905053}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235654}, NULL) = 0
select(4, [3], [0], [], {0, 904875}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 235829}, NULL) = 0
select(4, [3], [0], [], {0, 904700}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236017}, NULL) = 0
select(4, [3], [0], [], {0, 904513}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236192}, NULL) = 0
select(4, [3], [0], [], {0, 904338}) = -1 EBADF (Bad file descriptor)
gettimeofday({1219834026, 236367}, NULL) = 0
...
I used lsof and found that the process used 67 file descriptors (lsof -p
PID |wc -l)
Is there any other way I can debug this, so that I could for example
determine which file descriptor is "bad"?
Any other info or suggestions? Anybody else seen this?
The site is fairly used, but not overly so, load averages usually around
0.3.
Some additional info. I installed mongrelproctitle to show what the
hung processes were doing, and it seems they are hanging on a method
that displays images using file_column / images from the database /
rmagick to resize and make the images greyscale.
Not conclusive the
problem is here, but it is a suspicion.
Is there something obviously wrong with the following? The method
displays a static image if the order doesn't contain an image, else the
image resized from the order. The cache stuff is so that the image gets
updated in the browser every time. The image is inserted in the page
with a normal image tag.
code:
def preview_image
@order = session[:order]
if @order.image.nil?
@headers['Pragma'] = 'no-cache'
@headers['Cache-Control'] = 'no-cache, must-revalidate'
send_data(EMPTY_PIC.to_blob, :filename => "img.jpg", :type =>
"image/jpeg", :disposition => "inline")
else
@pic = Image.read(@order.image)[0]
if (@order.crop)
@pic.crop!(@order.crop[:x1].to_i, @order.crop[:y1].to_i,
@order.crop[:width].to_i, @order.crop[:height].to_i, true)
end
@pic.resize!(103,130)
@pic = @pic.quantize(256, Magick::GRAYColorspace)
@headers['Pragma'] = 'no-cache'
@headers['Cache-Control'] = 'no-cache, must-revalidate'
send_data(@pic.to_blob, :filename => "img.jpg", :type =>
"image/jpeg", :disposition => "inline")
end
end
Here is the lsof output if anybody can find any problems in it. Don't
know how it will format in this message...
lsof: WARNING: can't stat() ext3 file system /dev/.static/dev
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
mongrel_r 11628 username cwd DIR 9,2 4096 1870688
/home/domains/example.com/usernameOrder/releases/20080831121802
mongrel_r 11628 username rtd DIR 9,1 4096 2 /
mongrel_r 11628 username txt REG 9,1 3564 167172
/usr/bin/ruby1.8
mongrel_r 11628 username mem REG 0,0 0
[heap] (stat: No such file or directory)
mongrel_r 11628 username DEL REG 0,8 15560245
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560242
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560602
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560601
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560684
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560683
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560685
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560568
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560607
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560569
/dev/zero
mongrel_r 11628 username mem REG 9,1 1933648 456972
/usr/lib/libmysqlclient.so.15.0.0
mongrel_r 11628 username DEL REG 0,8 15442414
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560546
/dev/zero
mongrel_r 11628 username mem REG 9,1 67408 457393
/lib/i686/cmov/libresolv-2.7.so
mongrel_r 11628 username mem REG 9,1 17884 457386
/lib/i686/cmov/libnss_dns-2.7.so
mongrel_r 11628 username DEL REG 0,8 15560541
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560246
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560693
/dev/zero
mongrel_r 11628 username DEL REG 0,8 15560608
/dev/zero
mongrel_r 11628 username mem REG 9,1 25700 164963
/usr/lib/gconv/gconv-modules.cache
mongrel_r 11628 username mem REG 9,1 83708 457384
/lib/i686/cmov/libnsl-2.7.so
mongrel_r 11628 username mem REG 9,1 140602 506903
/var/lib/gems/1.8/gems/mysql-2.7/lib/mysql.so
mongrel_r 11628 username mem REG 9,1 1282816 180935
...
mongrel_r 11628 username 1w REG 9,2 462923 1575329
/home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 2w REG 9,2 462923 1575329
/home/domains/example.com/usernameOrder/shared/log/mongrel.8001.log
mongrel_r 11628 username 3u IPv4 15442350 TCP
localhost:8001 (LISTEN)
mongrel_r 11628 username 4w REG 9,2 118943548 1575355
/home/domains/example.com/usernameOrder/shared/log/production.log
mongrel_r 11628 username 5u REG 9,1 145306 234226
/tmp/mongrel.11628.0 (deleted)
mongrel_r 11628 username 7u unix 0xc3c12480 15442417
socket
mongrel_r 11628 username 11u REG 9,1 50 234180
/tmp/CGI.11628.2
mongrel_r 11628 username 12u REG 9,1 26228 234227
/tmp/CGI.11628.3
I have installed monit to monitor the server. No automatic restarts yet because of the PID file issue, but maybe I will get the newest version which supports deleting stale PID-files.
It would be nice though to actually fix the problem, because somebody will get disconnects etc if the server need to be restarted all the time (~10 times a day)
The mongrel-processes don't take any large amount of memory when this is happening, and the machine isn't even swapping, so it's probably not a memory leak.
total used free shared buffers cached
Mem: 4152796 4083000 69796 0 616624 2613364
-/+ buffers/cache: 853012 3299784
Swap: 1999992 52 1999940
|
[
"Consider using ImageScience, RMagick is known to leak massive amounts of memory and lock.\n",
"Chapter 6.3 in the book Deploying Rails Applications (A Step by Step Guide) has a good section on installing and configuring the Monitoring utility Monit on Linux and using it to monitor your mongrels. It can restart your mongrels when they fail.\nOlder versions of Mongrel had trouble re-starting because of a duplicate PID file existing on disk. Newer versions support the --clean option that will get rid of the leftover PID files, if they exist. So you have to upgrade Mongrel to a version that supports --clean to get around the stale PID file issue, Monit alone can't do this.\n"
] |
[
2,
1
] |
[] |
[] |
[
"mongrel",
"rmagick",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000077748_mongrel_rmagick_ruby_ruby_on_rails.txt
|
Q:
How to check for memory leaks in Guile extension modules?
I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
A:
You've got a couple of options. One is to write a suppressions file for valgrind that turns off reporting of stuff that you're not working on. Python has such a file, for example:
http://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp
If valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging
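As a rough illustration, a hand-written suppression entry for a Guile-internal leak might look like the following (the name and frames are hypothetical; in practice you run valgrind with --gen-suppressions=all and paste the entries it prints for the Guile-originated reports):
{
   guile-internal-leak
   Memcheck:Leak
   fun:malloc
   obj:*libguile*
}
You would then run something like:
valgrind --leak-check=full --suppressions=guile.supp guile -s yourscript.scm
so that only the remaining (non-Guile) leaks are reported.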
|
How to check for memory leaks in Guile extension modules?
|
I develop an extension module for Guile, written in C. This extension module embeds a Python interpreter.
Since this extension module invokes the Python interpreter, I need to verify that it properly manages the memory occupied by Python objects.
I found that the Python interpreter is well-behaved in its own memory handling, so that by running valgrind I can find memory leaks due to bugs in my own Python interpreter embedding code, if there are no other interfering factors.
However, when I run Guile under valgrind, valgrind reports memory leaks. Such memory leaks obscure any memory leaks due to my own code.
The question is what can I do to separate memory leaks due to bugs in my code from memory leaks reported by valgrind as due to Guile. Another tool instead of valgrind? Special valgrind options? Give up and rely upon manual code walkthrough?
|
[
"You've got a couple options. One is to write a supressions file for valgrind that turns off reporting of stuff that you're not working on. Python has such a file, for example: \nhttp://svn.python.org/projects/python/trunk/Misc/valgrind-python.supp\nIf valgrind doesn't like your setup, another possibility is using libmudflap; you compile your program with gcc -fmudflap -lmudflap, and the resulting code is instrumented for pointer debugging. Described in the gcc docs, and here: http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging\n"
] |
[
7
] |
[] |
[] |
[
"guile",
"memory_leaks",
"python",
"valgrind"
] |
stackoverflow_0000078900_guile_memory_leaks_python_valgrind.txt
|
Q:
How can I create a loop in an onClick event?
I want to write an onClick event which submits a form several times, iterating through selected items in a multi-select field, submitting once for each.
How do I code the loop?
I'm working in Ruby on Rails and using remote_function() to generate the JavaScript for the ajax call.
A:
My quick answer (as I've not coded it yet) would be to create another function that creates a POST using XMLHTTPRequest and the specific parameters for a single call. Then inside your onClick() handler call that function as you loop through your selected items.
I would suggest that you do a Proof of Concept just using a dummy HTML page and javascript and then try to figure out how to get it to work in RoR.
Also, why are you attempting to make the multiple calls from the browser as opposed to handling the looping conditions in the RoR controller?
A:
You'd have to manually write some javascript. Rails' generators won't do something this complex for you.
Prototype.js will do almost all of the heavy lifting for you though. Off the top of my head, the code would look like this: (UNTESTED)
<%= javascript_include_tag 'prototype' %>
<form id="my-form">
<input type="text" name="username" />
<select multiple="true" id="select-box">
<option value="1">First</option>
<option value="2">Second</option>
<option value="3">Third</option>
<option value="4">Fourth</option>
</select>
</form>
<script type="text/javascript" language="javascript">
submitFormMultipleTimes = function() {
$F('select-box').each(function(selectedItemValue){
new Ajax.Request('/somewhere?val='+selectedItemValue,
{method: 'POST', postBody: Form.serialize('my-form')});
});
}
</script>
<a href="#" onclick="submitFormMultipleTimes(); return false;">Clicky Clicky</a>
Note:
Using Prototype's $F() method to get the selected item values. It returns an array for multiple-select boxes
Using Ajax.Request to send the data to the server as a POST.
To the server, this looks exactly the same as just submitting a normal form
Using Form.serialize to get the data out of the form and stick it in the request's body.
This is the exact same data that would get sent if you submitted the form normally
A:
Unless you're modifying the browser DOM, I can't think of a reason that you would want to do this. (But without knowing fully what you're trying to do, I could be wrong in this case =)
You should be able to send back data from mulitple objects (even nested complex objects in your form) in just one POST.
Chances are the rails code will be a lot less complex, easier to write (and easier to debug!) than any javascript you come up with.
If you need to update different parts of the page depending on what the user has selected, you can still make multiple updates to the DOM via RJS in your render :update block, so that shouldn't be an issue.
You'll also have the (large) benefit of only one server round-trip instead of the multiple trips you would need using multiple POSTs.
|
How can I create a loop in an onClick event?
|
I want to write an onClick event which submits a form several times, iterating through selected items in a multi-select field, submitting once for each.
How do I code the loop?
I'm working in Ruby on Rails and using remote_function() to generate the JavaScript for the ajax call.
|
[
"My quick answer (as I've not coded it yet) would be to create another function that creates a POST using XMLHTTPRequest and the specific parameters for a single call. Then inside your onClick() handler call that function as you loop through your selected items.\nI would suggest that you do a Proof of Concept just using a dummy HTML page and javascript and then try to figure out how to get it to work in RoR.\nAlso, why are you attempting to make the multiple calls from the browser as opposed to handling the looping conditions in the RoR controller?\n",
"You'd have to manually write some javascript. Rails' generators won't do something this complex for you.\nPrototype.js will do almost all of the heavy lifting for you though. Off the top of my head, the code would look like this: (UNTESTED)\n<%= javascript_include_tag 'prototype' %>\n\n<form id=\"my-form\">\n <input type=\"text\" name=\"username\" />\n\n <select multiple=\"true\" id=\"select-box\">\n <option value=\"1\">First</option>\n <option value=\"2\">Second</option>\n <option value=\"3\">Third</option>\n <option value=\"4\">Fourth</option>\n </select>\n</form>\n\n<script type=\"text/javascript\" language=\"javascript\">\nsubmitFormMultipleTimes = function() {\n $F('select-box').each(function(selectedItemValue){\n new Ajax.Request('/somewhere?val='+selectedItemValue, \n {method: 'POST', postBody: Form.serialize('my-form')});\n });\n}\n</script>\n\n<a href=\"#\" onclick=\"submitFormMultipleTimes(); return false;\">Clicky Clicky</a>\n\nNote:\n\nUsing Prototype's $F() method to get the selected item values. It returns an array for multiple-select boxes\nUsing Ajax.Request to send the data to the server as a POST.\nTo the server, this looks exactly the same as just submitting a normal form\nUsing Form.serialize to get the data out of the form and stick it in the request's body.\nThis is the exact same data that would get sent if you submitted the form normally\n\n",
"Unless you're modifying the browser DOM, I can't think of a reason that you would want to do this. (But without knowing fully what you're trying to do, I could be wrong in this case =)\nYou should be able to send back data from mulitple objects (even nested complex objects in your form) in just one POST.\nChances are the rails code will be a lot less complex, easier to write (and easier to debug!) than any javascript you come up with. \nIf you need to update different parts of the page depending on what the user has selected, you can still make multiple updates to the DOM via RJS in your render :update block, so that shouldn't be an issue.\nYou'll also have the (large) benefit of only one server round-trip instead of the multiple trips you would need using multiple POSTS.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"ajax",
"javascript",
"loops",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000061372_ajax_javascript_loops_ruby_ruby_on_rails.txt
|
Q:
Publishing vs Copying
What is the difference between publishing a website with visual studio and just copying the files over to the server? Is the only difference that the publish files are pre-compiled?
A:
There is not much difference between "publish" and copying the files. Publish appears for web applications. The only real difference is that publishing gives you the option to include only the HTML and DLLs, whereas with copying you would need to weed out the source code manually. There is no full precompiling in the publish option, as fully precompiled means no HTML at all; the aspx files are just placeholders, and all the markup is in the compiled binaries.
A:
I believe you are correct in your assumption. It has been my experience that the only difference is that published files are compiled. Visual Studio® 2008 Web Deployment Projects is a nice enhancement for customizing your build scripts for both your Websites and Web Applications.
|
Publishing vs Copying
|
What is the difference between publishing a website with visual studio and just copying the files over to the server? Is the only difference that the publish files are pre-compiled?
|
[
"There is not much difference between \"publish\", and copying the files. Publish appears in a webapplication. The only difference really is publishing gives you the option to only include html and dll's, where as copying you would need to parse out source code manually. There is no full precompiling in the publish option, as Fully precompiled means no HTML at all; The aspx files are just placeholders; All html is in the compiled binaries.\n",
"I believe you are correct in your assumption. It has been my experience that the only difference is that published files are compiled. Visual Studio® 2008 Web Deployment Projects is a nice enhancement for customizing your build scripts for both your Websites and Web Applications.\n"
] |
[
3,
1
] |
[] |
[] |
[
"asp.net",
"visual_studio"
] |
stackoverflow_0000078850_asp.net_visual_studio.txt
|
Q:
What Java versions does Griffon support?
I want to write a Swing application in Griffon but I am not sure what versions of Java I can support.
A:
According to the Griffon website, 1.5 or higher.
http://groovy.codehaus.org/Installing+Griffon
|
What Java versions does Griffon support?
|
I want to write a Swing application in Griffon but I am not sure what versions of Java I can support.
|
[
"According to the Griffon website, 1.5 or higher.\nhttp://groovy.codehaus.org/Installing+Griffon\n"
] |
[
4
] |
[] |
[] |
[
"griffon",
"groovy",
"java",
"swing"
] |
stackoverflow_0000079002_griffon_groovy_java_swing.txt
|
Q:
How do you keep two related, but separate, systems in sync with each other?
My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility.
The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database.
The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world.
The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application.
So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website?
It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick.
So far, I have thought about using the following types of approaches:
Bi-directional replication
Web service interfaces on both sides with code to sync the changes as they are made (in real time).
Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism).
Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
A:
This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal.
You should be able to achieve near real time synchronization without the overhead or complexity of something like replication.
Synchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve.
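Before reaching for a full product, it may help to see how small the core receive loop is. Below is a runnable, in-memory stand-in sketch of the ack-after-commit pattern such a transactional queue implies; the Queue type and function names are hypothetical, not MSMQ's actual API.
#include <deque>
#include <iostream>
#include <string>

struct Queue {
    std::deque<std::string> pending;
    bool receive(std::string& body) {
        if (pending.empty()) return false;
        body = pending.front();                 // peek; don't remove until acknowledged
        return true;
    }
    void acknowledge() { pending.pop_front(); } // remove only after the change committed
};

bool applyChangeToDatabase(const std::string& body) {
    std::cout << "applied: " << body << "\n";   // stand-in for the real UPDATE/INSERT
    return true;                                // return false to simulate a failure
}

int main() {
    Queue q;
    q.pending = {"customer 42 updated", "order 7 submitted"};
    std::string body;
    while (q.receive(body)) {
        if (applyChangeToDatabase(body))
            q.acknowledge();                    // a crash before this line means redelivery
        else
            break;                              // leave the message queued for a later retry
    }
}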
There are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ).
nServiceBus by Udi Dahan
Mass Transit by Dru Sellers and Chris Patterson
There are commercial products also, and if you are considering a commercial option see here for a list of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job.
If you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ.
You may also want to consider reading Udi Dahan's blog, listening to some of his podcasts. Here are some more good resources to get you started.
A:
I'm mid-way through a similar project except I have multiple sites that need to keep in sync over slow connections (dial-up in some cases).
Firstly you need to track changes. If you can use SQL 2008 (even the Express version is enough if the 2Gb limit isn't a problem), this will ease the pain greatly; just turn on Change Tracking on the database and each table. We're using SQL Server 2008 at the head office with the extended schema and SQL Express 2008 at each site with a sub-set of data and limited schema.
Secondly you need to synchronize your changes: Sync Services does the trick nicely and supports using a WCF gateway into the main database. In this example you will need to use the Sync using SQL Express Client sample as a starting point; note that it's based on SQL 2005, so you'll need to update it to take advantage of the Change Tracking features in 2008. By default Sync Services uses SQL CE on the clients, which I'm sure isn't enough in your case. You'll need a service that runs on your Web Server and periodically (as often as every 10 seconds if you want) runs the Synchronize() method. This will tell your main database about changes made locally and then ask the server for all changes made there. You can set up the get and apply SQL code to call stored procedures, and you can add event handlers to handle conflicts (e.g. Client Update vs Server Update) and resolve them accordingly at each end.
A:
We have a shop as a client, with three stores connected to the same VPN.
Two of the shops have a computer running as a "server" for that shop and the third one has the "master database".
To synchronize everything to the master we don't have the best solution, but it works: a dedicated PC runs an application that checks the timestamp of every record in every table of the two stores and, if it differs from the last synchronization, copies the results.
Note that this works both ways. I.e. if you update a product in the master database, this change will propagate to the other two shops. If you have a new order in one of the shops, it will be transmitted to the "master".
With some optimizations you can have all the shops synchronize in around 20 minutes.
A:
Recently I have had a lot of success with SQL Server Service Broker which offers reliable, persisted asynchronous messaging out of the box with very little implementation pain.
It is quick to set up and as you learn more you can use some of the more advanced features.
Unknown to most, it is also part of the desktop editions, so it can be used as a workstation messaging system.
If you have existing T-SQL skills, they can be leveraged, as all the code to read and write messages is done in SQL.
It is blindingly fast
It is a vastly under-hyped part of SQL Server and well worth a look.
A:
I'd say just have a job that copies the data in the public database's input table into a private database pending table. Then once you update the data on the private side, have it replicated to the public side. If none of the replicated data gets updated on the public side, it should be a fairly easy transactional replication solution.
|
How do you keep two related, but separate, systems in sync with each other?
|
My current development project has two aspects to it. First, there is a public website where external users can submit and update information for various purposes. This information is then saved to a local SQL Server at the colo facility.
The second aspect is an internal application which employees use to manage those same records (conceptually) and provide status updates, approvals, etc. This application is hosted within the corporate firewall with its own local SQL Server database.
The two networks are connected by a hardware VPN solution, which is decent, but obviously not the speediest thing in the world.
The two databases are similar, and share many of the same tables, but they are not 100% the same. Many of the tables on both sides are very specific to either the internal or external application.
So the question is: when a user updates their information or submits a record on the public website, how do you transfer that data to the internal application's database so it can be managed by the internal staff? And vice versa... how do you push updates made by the staff back out to the website?
It is worth mentioning that the more "real time" these updates occur, the better. Not that it has to be instant, just reasonably quick.
So far, I have thought about using the following types of approaches:
Bi-directional replication
Web service interfaces on both sides with code to sync the changes as they are made (in real time).
Web service interfaces on both sides with code to asynchronously sync the changes (using a queueing mechanism).
Any advice? Has anyone run into this problem before? Did you come up with a solution that worked well for you?
|
[
"This is a pretty common integration scenario, I believe. Personally, I think an asynchronous messaging solution using a queue is ideal. \nYou should be able to achieve near real time synchronization without the overhead or complexity of something like replication. \nSynchronous web services are not ideal because your code will have to be very sophisticated to handle failure scenarios. What happens when one system is restarted while the other continues to publish changes? Does the sending system get timeouts? What does it do with those? Unless you are prepared to lose data, you'll want some sort of transactional queue (like MSMQ) to receive the change notices and take care of making sure they get to the other system. If either system is down, the changes (passed as messages) will just accumulate and as soon as a connection can be established the re-starting server will process all the queued messages and catch up, making system integrity much, much easier to achieve.\nThere are some open source tools that can really make this easy for you if you are using .NET (especially if you want to use MSMQ).\n\nnServiceBus by Udi Dahan\nMass Transit by Dru Sellers and Chris Patterson\n\nThere are commercial products also, and if you are considering a commercial option see here for a list of of options on .NET. Of course, WCF can do async messaging using MSMQ bindings, but a tool like nServiceBus or MassTransit will give you a very simple Send/Receive or Pub/Sub API that will make your requirement a very straightforward job.\nIf you're using Java, there are any number of open source service bus implementations that will make this kind of bi-directional, asynchronous messaging a snap, like Mule or maybe just ActiveMQ.\nYou may also want to consider reading Udi Dahan's blog, listening to some of his podcasts. Here are some more good resources to get you started.\n",
"I'm mid-way through a similar project except I have multiple sites that need to keep in sync over slow connections (dial-up in some cases).\nFirstly you need to track changes, if you can use SQL 2008 (even the Express version is enough if the 2Gb limit isn't a problem) this will ease the pain greatly, just turn on Change Tracking on the database and each table. We're using SQL Server 2008 at the head office with the extended schema and SQL Express 2008 at each site with a sub-set of data and limited schema.\nSecondly you need to track your changes, Sync Services does the trick nicely and supports using a WCF gateway into the main database. In this example you will need to use the Sync using SQL Express Client sample as a starting point, note that it's based on SQL 2005 so you'll need to update it to take advantage of the Change Tracking features in 2008. By default the Sync Services uses SQL CE on the clients, which I'm sure isn't enough in your case. You'll need a service that runs on your Web Server that periodically (could be as often as every 10 seconds if you want) runs the Synchronize() method. This will tell your main database about changes made locally and then ask the server for all changes made there. You can set up the get and apply SQL code to call stored procedures and you can add event handlers to handle conflicts (e.g. Client Update vs Server Update) and resolve them accordingly at each end.\n",
"We have a shop as a client, with three stores connected to the same VPN\nTwo of the shops have a computer running as a \"server\" for that shop and the the third one has the \"master database\"\nTo synchronize all to the master we don't have the best solution, but it works: there is a dedicated PC running an application that checks the timestamp of every record in every table of the two stores and if it is different that the last time you synchronize, it copies the results\nNote that this works both ways. I.e. if you update a product in the master database, this change will propagate to the other two shops. If you have a new order in one of the shops, it will be transmitted to the \"master\".\nWith some optimizations you can have all the shops synchronize in around 20minutes\n",
"Recently I have had a lot of success with SQL Server Service Broker which offers reliable, persisted asynchronous messaging out of the box with very little implementation pain. \n\nIt is quick to set up and as you learn more you can use some of the more advanced features. \nUnknown to most, it is also part of the desktop editions so it can be used as a workstation messaging system\nIf you have existing T-SQL skills they can be leveraged as all the code to read and write messages is done in SQL\nIt is blindingly fast\n\nIt is a vastly under-hyped part of SQL Server and well worth a look.\n",
"I'd say just have a job that copies the data in the pub database input table into a private database pending table. Then once you update the data on the private side have it replicated to the public side. If you don't have any of the replicated data on the public side updated it should be a fairly easy transactional replication solution. \n"
] |
[
24,
3,
1,
1,
0
] |
[] |
[] |
[
"database",
"distributed",
"sql_server",
"synchronization"
] |
stackoverflow_0000013485_database_distributed_sql_server_synchronization.txt
|
Q:
Are liquid layouts still relevant?
Now that most of the major browsers support full page zoom (at present, the only notable exception being Google Chrome), are liquid or elastic layouts no longer needed? Is the relative pain of building liquid/elastic layouts worth the effort? Are there any situations where a liquid layout would still be of benefit? Is full page zoom the real solution it at first appears to be?
A:
Yes, because there is a vast variety of screens out there, commonly ranging from 15" to 32".
There is also some variation in what people consider a "comfortable" font size.
All of which adds up to quite a range of sizes that your content will need to fit into.
If anything, liquid layout is becoming even more necessary as we scale up to huge monitors, and down to cellphone devices.
A:
Doing full page zoom in CSS isn't really worth it, especially as most browsers now do this kind of zooming natively (and do it much better - ref [img] tags).
As to using fixed width, there is a secondary feature with this... if you increase the font size, fewer words will be shown per line, which can help some people with reading.
As in, have you ever read a block of text which is extremely wide, and found that you have read the same line twice? If the line height was increased (same effect through font-size), with fewer words per line, this becomes less of an issue.
A:
Yes, yes yes! Having to scroll horizontally on a site because some designer assumed the users always maximize their browsers is a huge pet peeve for me and I'm sure I'm not alone. On top of that, as someone with really crappy vision, let me say that full page zooming works best when the layout is liquid. Otherwise you end up with your nav bar off the (visible) screen.
A:
I had a real world problem with this. The design called for a fixed width page within a nice border. Fitted within 800 pixels wide minus a few pixels for the browser window. Subtract 200 pixels for the left menu and the content area was about 600 pixels wide.
The problem was, part of the site content was dynamic, resulting in users editing and browsing data in tables, on their nice 1280x1024 screens, with tables restricted to 600 pixels wide.
You should allow for the width of the browser window in dynamic content, unless that dynamic content is going to be predominantly text.
A:
Stretchy layouts are not so much about zooming as they are about wrapping - allowing a user to fit more information on screen if the screen is higher resolution, while still making the content accessible for those with lower-resolution screens. Page zooming does not achieve this.
A:
I think liquid layouts are still needed; even though browsers have this full-page zoom feature, I bet a lot of people don't know about it or how to use it.
A:
Page zoom is horrible from an accessibility perspective. It's the equivalent of saying "we couldn't be bothered to design our pages properly [designers], so have a larger font and scroll the page horizontally [browser developers]". I cannot believe Firefox jumped off the cliff after Microsoft and made this the default.
A:
Yes - you don't know what resolution the reader is using, or what size screen - or even if accessibility is required/used. As mentioned above, not everybody knows about full page zoom - I know about it, but hardly use it...
A:
Only your own site's visitors can tell you if liquid layouts are still relevant for your site.
Using a framework such as the YUI-CSS and Google Website Optimizer it's pretty easy to see what your visitors prefer and lay aside opinion and instead rely on cold hard results.
A:
Liquid layouts can cause usability problems, though.
Content containers that become too wide become exceptionally difficult to read.
Many blogs have fixed width content containers specifically for this reason.
Alternatively, you can create multi-column content containers so that you get an effect like a newspaper, with its multiple columns of thin containers of text. This can be difficult to do, though.
|
Are liquid layouts still relevant?
|
Now that most of the major browsers support full page zoom (at present, the only notable exception being Google Chrome), are liquid or elastic layouts no longer needed? Is the relative pain of building liquid/elastic layouts worth the effort? Are there any situations where a liquid layout would still be of benefit? Is full page zoom the real solution it at first appears to be?
|
[
"Yes, because there are a vast variety of screens out there commonly ranging from 15\" to 32\".\nThere is also some variation in what people consider a \"comfortable\" font size.\nAll of which adds up to quite a range of sizes that your content will need to fit into.\nIf anything, liquid layout is becoming even more necessary as we scale up to huge monitors, and down to cellphone devices.\n",
"Doing full page zoom in CSS isn't really worth it, especially as most browsers now do this kind of zooming natively (and do it much better - ref [img] tags).\nAs to using fixed width, there is a secondary feature with this... if you increase the font size, less words will be shown per line, which can help some people with reading.\nAs in, have you ever read a block of text which is extremely wide, and found that you have read the same line twice? If the line height was increased (same effect though font-size), with less words per line, this becomes less of an issue.\n",
"Yes, yes yes! Having to scroll horizontally on a site because some designer assumed the users always maximize their browsers is a huge pet peeve for me and I'm sure I'm not alone. On top of that, as someone with really crappy vision, let me say that full page zooming works best when the layout is liquid. Otherwise you end up with your nav bar off the (visible) screen.\n",
"I had a real world problem with this. The design called for a fixed width page within a nice border. Fitted within 800 pixels wide minus a few pixels for the browser window. Subtract 200 pixels for the left menu and the content area was about 600 pixels wide.\nThe problem was, part of the site content was dynamic, resulting in users editing and browsing data in tables, on their nice 1280x1024 screens, with tables restricted to 600 pixels wide.\nYou should allow for the width of the browser window in dynamic content, unless that dynamic content is going to be predominantly text.\n",
"Stretchy layouts are not so much about zooming as they are about wrapping - allowing a user to fit more information on screen if the screen is higher resolution while still making the content acessible for those with lower resolution screens. Page zooming does not achieve this. \n",
"i think liquid layouts are still needed, even though browsers have this full page zoom feature i bet a lot of people dont know about it or know how to use it.\n",
"Page zoom is horrible from an accessibility perspective. It's the equivalent of saying \"we couldn't be bothered to design our pages properly [designers], so have a larger font and scroll the page horizontally [browser developers]\". I cannot believe Firefox jumped off the cliff after Microsoft and made this the default.\n",
"Yes - you don't know what resolution the reader is using, or what size screen - or even if accessibility is required/used. As mentioned above, not everybody knows about full page zoom - I know about it, but hardly use it...\n",
"Only your own site's visitors can tell you if liquid layouts are still relevant for your site. \nUsing a framework such as the YUI-CSS and Google Website Optimizer it's pretty easy to see what your visitors prefer and lay aside opinion and instead rely on cold hard results.\n",
"Liquid layouts can cause usability problems, though.\nContent containers that become too wide become exceptionally difficult to read.\nMany blogs have fixed width content containers specifically for this reason.\nAlternatively, you can create multi-column content containers so that you get an effect like a newspaper, with its multiple columns of thin containers of text. This can be difficult to do, though.\n"
] |
[
15,
7,
7,
3,
3,
2,
2,
1,
1,
0
] |
[] |
[] |
[
"css",
"html",
"layout"
] |
stackoverflow_0000062292_css_html_layout.txt
|
Q:
Binding a form combo box in Access 2007
I've created an Access 2007 form that displays, for example, Products from a Product table. One of the fields in the Product table is a CategoryID that corresponds to this product's parent category.
In the form, the CategoryID needs to be represented as a combo box that is bound to the Category table. The idea here is pretty straightforward: selecting a new Category should update the CategoryID in the Product table.
The problem I'm running into is that selecting a new Category updates the CategoryName of the Category table instead of updating the CategoryID in the Product table. The reason for this is that it seems that the combo box must be bound only to the CategoryName of the Category table.
What happens is if the current product has a CategoryID of 12 which is the CategoryName "Chairs" in the Category table then selecting a new value, let's say "Tables" (CategoryID 13) in the combo box updates the CategoryID of 12 with the new CategoryName "Tables" instead of updating the Product table CategoryID to 13.
How can I bind the Category table to a combo box so that the datatextfield (which I wish existed in Access) is the CategoryName and the datavaluefield is the CategoryID, and only the CategoryID of the Product will be updated when the selected combo box item is changed?
Edit: See the accepted answer below. I also needed to change the column count to 2 and everything started to work perfectly.
A:
You need to use both values in the query for the combo box.
e.g. SELECT CategoryId, CategoryName FROM CategoryTable...
Bind the combo box to the first column, CategoryId.
Set the column widths for the combo box to 0in (no second value needed, so there is no limit). This will hide the first column, which contains your selected value; all that shows is the description value, which is all you want to see.
So now when you select a different option in the combo box, the value returned by the combo box will be the bound value, CategoryId, not CategoryName.
Ah, yes Alison, sorry, I forgot about setting the combobox columncount = 2.
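For reference, the combo box property-sheet settings the two answers describe come together as follows (table and field names are taken from the question; verify against your own schema):
Row Source:     SELECT CategoryID, CategoryName FROM Category;
Bound Column:   1
Column Count:   2
Column Widths:  0"
Control Source: CategoryID   (the foreign-key field on the Product table)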
A:
You should also check that your categories table has a primary key on the CategoryName field. Your original configuration should have thrown an error or a message saying the update would violate the key. As it is, it seems you can have two categories with the same name.
|
Binding a form combo box in Access 2007
|
I've created an Access 2007 form that displays, for example, Products from a Product table. One of the fields in the Product table is a CategoryID that corresponds to this product's parent category.
In the form, the CategoryID needs to be represented as a combo box that is bound to the Category table. The idea here is pretty straightforward: selecting a new Category should update the CategoryID in the Product table.
The problem I'm running into is that selecting a new Category updates the CategoryName of the Category table instead of updating the CategoryID in the Product table. The reason for this is that it seems that the combo box must be bound only to the CategoryName of the Category table.
What happens is if the current product has a CategoryID of 12 which is the CategoryName "Chairs" in the Category table then selecting a new value, let's say "Tables" (CategoryID 13) in the combo box updates the CategoryID of 12 with the new CategoryName "Tables" instead of updating the Product table CategoryID to 13.
How can I bind the Category table to a combo box so that the datatextfield (which I wish existed in Access) is the CategoryName and the datavaluefield is the CategoryID, and only the CategoryID of the Product will be updated when the selected combo box item is changed?
Edit: See the accepted answer below. I also needed to change the column count to 2 and everything started to work perfectly.
|
[
"You need to use both values in the query for the combo box.\ne.g. SELECT CategoryId, CategoryName FROM CategoryTable...\nBind the combo box to the fist column, CategoryId.\nSet the column widths for the combo box to 0in (no second value need, so there is no limit). This will hide the first column which contains your selected value; all that shows it the description value, which is all you want to see. \nSo now when you select a different option in the combobox, the value returned by the combo box will be the bound value, CategoryId, not CategoryName.\nAh, yes Alison, sorry, I forgot about setting the combobox columncount = 2.\n",
"You should also check that your categories table has a primary key on the CategoryName field. You original configuration should have thrown an error or message saying the update would violate the key. As it is it seems you can have 2 categories with the same name.\n"
] |
[
5,
0
] |
[] |
[] |
[
"combobox",
"data_binding",
"forms",
"ms_access"
] |
stackoverflow_0000069048_combobox_data_binding_forms_ms_access.txt
|
Q:
How to test function call order
Considering such code:
class ToBeTested {
public:
void doForEach() {
for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
doOnce(*it);
doTwice(*it);
doTwice(*it);
}
}
void doOnce(Contained & c) {
// do something
}
void doTwice(Contained & c) {
// do something
}
// other methods
private:
vector<Contained> m_contained;
};
I want to test that if I fill the vector with 3 values, my functions will be called in the proper order and quantity. For example my test can look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Are there any means to do this with the CppUnit or GoogleTest frameworks? Maybe some other unit test framework allows performing such tests?
I understand that probably this is impossible without calling any debug functions from these functions, but at least, can it be done automatically in some test framework? I don't want to scan trace logs and check their correctness.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to avoid performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expected).
A:
You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.
However, you don't generally test that one method makes some calls to other methods on the same class... why would you?
Generally, when you're testing a class, you only care about testing its publicly visible state. If you test
anything else, your tests will prevent you from refactoring later.
I could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).
A:
You could check out mockpp.
A:
Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.
A:
If you're interested in performance, I recommend that you write a test that measures performance.
Check the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.
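A minimal sketch of that timing assertion, assuming the ToBeTested class from the question; the 50 ms budget is an assumption, substitute your real requirement:
#include <cassert>
#include <chrono>

void testDoForEachIsFastEnough(ToBeTested& tobeTested) {
    using clock = std::chrono::steady_clock;
    const clock::time_point start = clock::now();   // check the time
    tobeTested.doForEach();                         // run the method you're concerned about
    assert(clock::now() - start < std::chrono::milliseconds(50)); // check again, assert budget
}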
The problem with checking that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.
That said, if you really want to test that your methods are called in a certain order, you'll need to do the following:
Move them to another class, call it Collaborator
Add an instance of this other class to the ToBeTested class
Use a mocking framework to set the instance variable on ToBeTested to be a mock of the Collaborator class
Call the method under test
Use your mocking framework to assert that the methods were called on your mock in the correct order.
I'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front (a sketch with one such framework follows).
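If GoogleTest is an option, its companion GoogleMock supports exactly this through InSequence. A minimal sketch of the steps above (Collaborator, MockCollaborator and the test names are illustrative; Contained is from the question):
#include <gmock/gmock.h>
#include <gtest/gtest.h>

struct Contained {};

class Collaborator {
public:
    virtual ~Collaborator() {}
    virtual void doOnce(Contained& c) = 0;
    virtual void doTwice(Contained& c) = 0;
};

class MockCollaborator : public Collaborator {
public:
    MOCK_METHOD(void, doOnce, (Contained&), (override));
    MOCK_METHOD(void, doTwice, (Contained&), (override));
};

TEST(ToBeTestedTest, CallsCollaboratorInExpectedOrder) {
    MockCollaborator mock;
    ::testing::InSequence ordered;            // expectations below must be met in order
    for (int i = 0; i < 3; ++i) {             // one doOnce then two doTwice per element
        EXPECT_CALL(mock, doOnce(::testing::_));
        EXPECT_CALL(mock, doTwice(::testing::_)).Times(2);
    }
    // construct ToBeTested with &mock, AddContained three values, then call doForEach()
}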
A:
Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.
I am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.
A:
http://msdn.microsoft.com/en-au/magazine/cc301356.aspx
This is a good article about Context Bound Objects. It contains some advanced stuff, but if you are not lazy and really want to understand this kind of thing, it will be really helpful.
At the end you will be able to write something like:
[CallTracingAttribute()]
public class TraceMe : ContextBoundObject
{...}
A:
You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.
|
How to test function call order
|
Considering such code:
class ToBeTested {
public:
void doForEach() {
for (vector<Contained>::iterator it = m_contained.begin(); it != m_contained.end(); it++) {
doOnce(*it);
doTwice(*it);
doTwice(*it);
}
}
void doOnce(Contained & c) {
// do something
}
void doTwice(Contained & c) {
// do something
}
// other methods
private:
vector<Contained> m_contained;
};
I want to test that if I fill the vector with 3 values, my functions will be called in the proper order and quantity. For example my test can look something like this:
tobeTested.AddContained(one);
tobeTested.AddContained(two);
tobeTested.AddContained(three);
BEGIN_PROC_TEST()
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
SHOULD_BE_CALLED(doOnce, 1)
SHOULD_BE_CALLED(doTwice, 2)
tobeTested.doForEach()
END_PROC_TEST()
How do you recommend testing this? Are there any means to do this with the CppUnit or GoogleTest frameworks? Maybe some other unit test framework allows performing such tests?
I understand that probably this is impossible without calling any debug functions from these functions, but at least, can it be done automatically in some test framework? I don't want to scan trace logs and check their correctness.
UPD: I'm trying to check not only the state of the objects, but also the execution order, to avoid performance issues at the earliest possible stage (and in general I want to know that my code is executed exactly as I expected).
|
[
"You should be able to use any good mocking framework to verify that calls to a collaborating object are done in a specific order.\nHowever, you don't generally test that one method makes some calls to other methods on the same class... why would you?\nGenerally, when you're testing a class, you only care about testing its publicly visible state. If you test\nanything else, your tests will prevent you from refactoring later.\nI could provide more help, but I don't think your example is consistent (Where is the implementation for the AddContained method?).\n",
"You could check out mockpp.\n",
"Instead of trying to figure out how many functions were called, and in what order, find a set of inputs that can only produce an expected output if you call things in the right order.\n",
"If you're interested in performance, I recommend that you write a test that measures performance.\nCheck the current time, run the method you're concerned about, then check the time again. Assert that the total time taken is less than some value.\nThe problem with check that methods are called in a certain order is that your code is going to have to change, and you don't want to have to update your tests when that happens. You should focus on testing the actual requirement instead of testing the implementation detail that meets that requirement.\nThat said, if you really want to test that your methods are called in a certain order, you'll need to do the following:\n\nMove them to another class, call it Collaborator\nAdd an instance of this other class to the ToBeTested class\nUse a mocking framework to set the instance variable on ToBeTested to be a mock of the Collborator class\nCall the method under test\nUse your mocking framework to assert that the methods were called on your mock in the correct order.\n\nI'm not a native cpp speaker so I can't comment on which mocking framework you should use, but I see some other commenters have added their suggestions on this front.\n",
"Some mocking frameworks allow you to set up ordered expectations, which lets you say exactly which function calls you expect in a certain order. For example, RhinoMocks for C# allows this.\nI am not a C++ coder so I'm not aware of what's available for C++, but that's one type of tool that might allow what you're trying to do.\n",
"http://msdn.microsoft.com/en-au/magazine/cc301356.aspx\nThis is a good article about Context Bound Objects. It contains some so advanced stuff, but if you are not lazy and really want to understand this kind of things it will be really helpful.\nAt the end you will be able to write something like:\n[CallTracingAttribute()]\npublic class TraceMe : ContextBoundObject\n{...}\n",
"You could use ACE (or similar) debug frameworks, and in your test, configure the debug object to stream to a file. Then you just need to check the file.\n"
] |
[
2,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"c++",
"tdd",
"unit_testing"
] |
stackoverflow_0000078723_c++_tdd_unit_testing.txt
|
Q:
Database Authentication for Intranet Applications
I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer.
The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then, when people require access to the database via query tools or such, a separate Group is created with the query access and people are given access to that group.
The other scenario I have seen is to use complete Windows Authentication End to End. So the users themselves are added to groups which have all the permissions set, so the user is able to update and change outside the parameters of the application. This normally involves securing people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easy to maintain, but raises the concern that if there is a security hole in the application, the whole database is compromised.
The second scenario seems more secure, but has the opposite concern of having too much business logic in stored procedures on the database. This seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age, where people can use data in so many different ways we don't foresee (e.g. mash-ups), is this the best approach?
A:
Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.
If they want to automate part of their job, and they can display they have the requisite knowledge, then go ahead, grant their domain account access to the backend. That way anything they do via their little VBA automation is tied to their account and you know exactly who to go look at when the data gets hosed.
My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible.
As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.
A:
Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.
Access to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).
IF a user needs, and can justify, direct access to the database, then their domain account would be given access to the database, and they can log into the DBMS using the appropriate tools.
A:
I've been responsible for developing several internal web applications over the past year.
Our solution was using Windows Authentication (Active Directory or LDAP).
Our purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.
While I can't answer the argument concerning Nhibernate or LINQ, unless you have a specific killer feature these things can implement, Active Directory or LDAP are simple enough to implement and maintain that it's worth trying.
A:
Stephen - Keeping normal end users out of the database is nice, but I am wondering if, in this day and age with so many experienced computer users coming out of University / College, this is the right path. If someone wants to automate part of their job, which includes a VBA update to a database which I allow them to do via the normal application, are we losing gains by restricting their access in this way?
I guess the other path implied here is you could open up the Application via services and then secure those services via groups and still keep the users separated from the database.
Then via delegation you can allow departments to control access to their own accounts via the groups as per Jonathan's post.
A:
I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.
|
Database Authentication for Intranet Applications
|
I am looking for a best practice for End to End Authentication for internal Web Applications to the Database layer.
The most common scenario I have seen is to use a single SQL account with the permissions set to what is required by the application. This account is used by all application calls. Then, when people require access to the database via query tools or such, a separate Group is created with the query access and people are given access to that group.
The other scenario I have seen is to use complete Windows Authentication End to End. So the users themselves are added to groups which have all the permissions set, so the user is able to update and change outside the parameters of the application. This normally involves securing people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easy to maintain, but raises the concern that if there is a security hole in the application, the whole database is compromised.
The second scenario seems more secure, but has the opposite concern of having too much business logic in stored procedures on the database. This seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age, where people can use data in so many different ways we don't foresee (e.g. mash-ups), is this the best approach?
|
[
"Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.\nIf they want to automate part of their job, and they can display they have the requisite knowledge, then go ahead, grant their domain account access to the backend. That way anything they do via their little VBA automation is tied to their account and you know exactly who to go look at when the data gets hosed. \nMy basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible. \nAs a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.\n",
"Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.\nAccess to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).\nIF a user needs, and can justify, direct access to the database, then their domain account would be given access to the database, and they can log into the DBMS using the appropriate tools.\n",
"I've been responsible for developing several internal web applications over the past year.\nOur solution was using Windows Authentication (Active Directory or LDAP).\nOur purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.\nWhile I can't answer the argument concerning Nhibernate or LINQ, unless you have a specific killer feature these things can implement, Active Directory or LDAP are simple enough to implement and maintain that it's worth trying.\n",
"Stephen - Keeping normal end users out of the database is nice but I am wondering if in this day and age with so many experienced computer users coming out of University / College if this the right path. If someone wants to automate part of their job which includes a VBA update to a database which I allow them to do via the normal application are we losing gains by restricting their access in this way.\nI guess the other path implied here is you could open up the Application via services and then secure those services via groups and still keep the users separated from the database.\nThen via delegation you can allow departments to control access to their own accounts via the groups as per Jonathan's post.\n",
"I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"nhibernate",
"sql"
] |
stackoverflow_0000078984_asp.net_nhibernate_sql.txt
|
Q:
Multithreaded Debugger
GDB has severe issues when debugging with multiple threads (pthreads).
Are there any other good multi-threaded debuggers for C/C++ on *nix?
A:
I've personally not had any GDB-specific issues when debugging a multi-threaded application, so it may be helpful for you to elaborate on exactly what "issues" you are having. It will help us answer you better.
There are several aids that I have used in the past when debugging multi-threaded applications in linux, most of which build upon GDB rather than replace it. These include:
DDD http://www.gnu.org/software/ddd/
Eclipse http://www.eclipse.org/
Native POSIX Thread Library (NTPL) Trace Tool http://nptltracetool.sourceforge.net/
Additionally, if you are new to debugging in Linux (and even if you aren't!) I highly recommend the paper titled "Debugging Linux Applications" which you can find here:
http://www.scribd.com/doc/3009706/Debugging-Linux-Applications
A:
Allinea DDT ... graphical debugger for scalar, multi-threaded and large-scale parallel applications that are written in C, C++ and Fortran.
A:
TotalView is what the national labs use for huge clusters. I believe it has some good support for thread parallelism, too. It's probably out of your price range, but you can try it for free.
A:
From my search, I have not found any good multi-thread debuggers for *nix. GDB seems to be getting better, and the last time I had to debug a multi-threaded application on FreeBSD (7.0-RELEASE) it behaved fairly well, letting me find where the error was.
A:
I once looked for a gdb alternative, but unfortunately every one I found was based on gdb. I think this is because gdb is intricately tied to gcc, and it's hard for third-party debuggers to keep up with every gcc change.
A:
The AIX debugger for Windows lets you debug multithreaded applications.
|
Multithreaded Debugger
|
GDB has severe issues when debugging with multiple threads (pthreads).
Are there any other good multi-threaded debuggers for C/C++ on *nix?
|
[
"I've personally not had any GDB specific issues when debugging a multi-threaded application, so it may helpful for you to elaborate on exactly what \"issues\" you are having. It will help us answer you better.\nThere are several aids that I have used in the past when debugging multi-threaded applications in linux, most of which build upon GDB rather than replace it. These include:\n\nDDD http://www.gnu.org/software/ddd/\nEclipse http://www.eclipse.org/\nNative POSIX Thread Library (NTPL) Trace Tool http://nptltracetool.sourceforge.net/\n\nAdditionally, if you are new to debugging in Linux (and even if you aren't!) I highly recommend the paper titled \"Debugging Linux Applications\" which you can find here:\nhttp://www.scribd.com/doc/3009706/Debugging-Linux-Applications\n",
"Allinea DDT ... graphical debugger for scalar, multi-threaded and large-scale parallel applications that are written in C, C++ and Fortran.\n",
"TotalView is what the national labs use for huge clusters. I believe it has some good support for thread parallelism, too. It's probably out of your price range, but you can try it for free.\n",
"From my search, I have not found any good multi-thread debuggers for *nix. GDB seems to be getting better, and the last time I had to debug a multi-threaded application on FreeBSD (7.0-RELEASE) it behaved fairly well, letting me find where the error was.\n",
"I once looked for a gdb alternative, but unfortunately every one I found was based on gdb. I think this is because gdb is intricately tied to gcc, and it's hard for third-party debuggers to keep up with every gcc change.\n",
"The AIX debugger for windows, let's you debug multithread applications.\n"
] |
[
6,
5,
4,
1,
0,
0
] |
[] |
[] |
[
"c",
"c++",
"debugging",
"gdb",
"pthreads"
] |
stackoverflow_0000077522_c_c++_debugging_gdb_pthreads.txt
|
Q:
Retrieve all nodes in a tree that are children of another one
I have a web system which has a classical parent-children menu saved in a database, with fields id as the PK, and parent_id pointing to the owning menu. (Yes, I know this doesn't scale very well, but that's another topic).
So for these records (parent_id-id pairs):
0-7 0-4 4-9 4-14 4-16 9-6
I have this tree:
0
├ 7
└ 4
├ 9
| └ 6
├ 14
└ 16
I need to hide a top node, so I have to make a list of all the children of that certain node, i.e. for 4 they will be (9, 6, 14, 16). Order doesn't matter.
I'm confused... does this fit into the classical tree problems? Or is it a graph one?
How can I compose this structure and solve this problem using php?
A:
Adjacency list models are very difficult to deal with. The company I am with now uses them for hierarchies and it causes great headaches. I have successfully used Celko's nested set models for prior employers and they work great for creating, maintaining and using hierarchies (trees).
I found this link which describes them: http://www.intelligententerprise.com/001020/celko.jhtml
But I would also recommend the book "SQL for Smarties: Advanced SQL Programming" written by Joe Celko and covers nested sets.
Joe Celko's SQL for Smarties: Advanced SQL Programming
Joe Celko's Trees and Hierarchies in SQL for Smarties
A:
This is the perfect chance to use recursion!
Pseudo-code:
nodeList = {}
enumerateNodes(rootNode, nodeList);
function enumerateNodes(node, nodeList) {
nodeList += node;
foreach ( childnode in node.children ) {
enumerateNodes(childnode, nodeList);
}
}
Edit: Didn't notice that your tree is in the adjacency list format. I would probably just build that into an actual tree data structure before I started working with it. Just loop through all pairs (creating nodes the first time you see them) and link them; see the sketch below.
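A minimal sketch of that build-then-walk approach, in C++ for illustration (the same shape translates directly to PHP arrays; descendantsOf is an invented name):
#include <unordered_map>
#include <utility>
#include <vector>

std::vector<int> descendantsOf(const std::vector<std::pair<int, int>>& parentChild,
                               int root) {
    std::unordered_map<int, std::vector<int>> children;
    for (const std::pair<int, int>& pc : parentChild)
        children[pc.first].push_back(pc.second);   // e.g. 0-7 means parent 0, child 7

    std::vector<int> result;
    std::vector<int> stack(1, root);
    while (!stack.empty()) {
        int node = stack.back();
        stack.pop_back();
        for (int child : children[node]) {
            result.push_back(child);               // collect every node below root
            stack.push_back(child);
        }
    }
    return result;                                 // for root 4: {9, 14, 16, 6} in some order
}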
A:
This is a graph problem. Check out BFS (breadth-first search) and DFS (depth-first search). You can google those terms and find hundreds of implementations on the web.
A:
This is trivial with a nested set implementation. See here for more details:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/
Otherwise, write something like this:
def get_subtree(node)
  result = [node]  # includes the node itself; drop the first element for strict descendants
  node.children.each { |child| result += get_subtree(child) }
  result
end
|
Retrieve all nodes in a tree that are children of another one
|
I have a web system which has a classical parent-children menu saved in a database, with fields id as the PK, and parent_id pointing to the owning menu. (Yes, I know this doesn't scale very well, but that's another topic).
So for these records (parent_id-id pairs):
0-7 0-4 4-9 4-14 4-16 9-6
I have this tree:
0
├ 7
└ 4
├ 9
| └ 6
├ 14
└ 16
I need to hide a top node, so I have to make a list of all the children of that certain node, i.e. for 4 they will be (9, 6, 14, 16). Order doesn't matter.
I'm confused... does this fit into the classical tree problems? Or is it a graph one?
How can I compose this structure and solve this problem using php?
|
[
"Adjacent list models are very difficult to deal with. The company I am with now uses them for hierarchies and it causes great headaches. I have successfully used Celko's nested set models for prior employers and they work great for creating, maintaining and using hierarchies (trees). \nI found this link which describes them: http://www.intelligententerprise.com/001020/celko.jhtml\nBut I would also recommend the book \"SQL for Smarties: Advanced SQL Programming\" written by Joe Celko and covers nested sets. \nJoe Celko's SQL for Smarties: Advanced SQL Programming\nJoe Celko's Trees and Hierarchies in SQL for Smarties\n",
"This is the perfect chance to use recursion!\nPseudo-code:\nnodeList = {}\nenumerateNodes(rootNode, nodeList);\n\nfunction enumerateNodes(node, nodeList) {\n nodeList += node;\n foreach ( childnode in node.children ) {\n enumerateNodes(childnode, nodeList);\n }\n}\n\nEdit: Didn't notice that your tree is in the adjacent list format. I would probably just build that into an actual tree datastructure before I started working with it. Just loop through all pairs (creating nodes the first time you see them) and linking them. I think it should be easy...\n",
"This is a graph problem. Check out BFS(breadth first search) and DFS(depth first search).. You can google out those terms and find hundreds of implementations on the web.\n",
"This is trivial with a nested set implementation. See here for more details:\nhttp://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/\nOtherwise, write something like this:\ndef get_subtree(node)\n if children.size > 0\n return children.collect { |n| get_subtree(n) }\n else\n return node\n end\nend\n\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"php",
"tree"
] |
stackoverflow_0000079041_php_tree.txt
|
Q:
Generate LINQ query from multiple controls
I have recently written an application (vb.net) that stores and allows searching for old council plans.
Now while the application works well, the other day I was having a look at the routine that I use to generate the SQL string to pass to the database and frankly, it was bad.
I was just posting a question here to see if anyone else has a better way of doing this.
What I have is a form with a bunch of controls ranging from text boxes to radio buttons. Each of these controls is like a database filter, and when the user hits the search button, a SQL string (I would really like it to be a LINQ query because I have changed to LINQ to SQL) gets generated from the completed controls and run.
The problem that I am having is matching each one of these controls to a field in the database and generating a LINQ query efficiently without doing a bunch of "if...then...else" statements. In the past I have just used the tag property on the control to link the control to a field name in the database.
I'm sorry if this is a bit confusing, it's a bit hard to describe. Just throwing it out there to see if anyone has any ideas.
Thanks
Nathan
A:
You could maybe wrap each control in a usercontrol that can take in IQueryable and tack on to the query if it is warranted.
So your page code might go something like
var qry = from t in _db.TableName
select t;
then pass qry to a method on each user control
IQueryable<T> AddToQueryIfNeeded<T>(IQueryable<T> qry)
{
    bool shouldApply = ...; // e.g. this control has a non-empty value
    if (shouldApply)
        return from t in qry
               where ...    // t's field matches this control's value
               select t;
    else
        return qry;
}
then after you go through each control your query would be complete and then you can .ToList() it. A cool thing about LINQ is nothing happens until you .ToList() or .First() it.
A:
When programming complex ad-hoc query type things, attributes can be your best friend. Take a more declarative approach and decorate your classes, interfaces, and/or properties with some custom attributes, then write some generic "glue" code that binds your UI to your model. This will allow your model and presentation to be flexible, without having to change 1000s of lines of controller logic. In fact, this is precisely how Microsoft build the Visual Studio "Properties" page. You may even be able use Microsoft's "EnvDTE.dll" in your product depending on the requirements.
A:
I don't know about the performance here, but if you set up the LINQ to SQL data context class you should be able to query a database table with a .Select(...) or .Where(...). You should be able to build lambda expressions for either of these dynamically. You might look into dynamic generation of lambda expressions for this purpose. I have done everything up to the point of the dynamic lambda generation, but it is possible.
A:
I'm not 100% sure how to achieve this, but I know a good place to start would be the ASP.NET MVC source. In recent versions it is capable of taking the form response and passing it into a helper method which does the writing to a LINQ data source.
I believe MVC is C#, so if you're looking for a VB translation you could try using .NET Reflector and converting it back to VB.
A:
I think you are searching for how to create a "dynamic" LINQ query. Here is an example of how to do it with a library of extension methods; those methods take string arguments instead of type-safe language operators.
A:
I don't mind sfusco's method of using attributes. The only thing that I'm not sure of is where to attach the attributes, because if I attach them to the control's declaration, which is in the designer code, it will get regenerated when the form changes.
Or am I completely misunderstanding sfusco's method?
A:
I think perhaps the right way to do this would be an extender provider: MSDN documentation
Then, you can use the editor to provide the field names to hook up with, and your extender provider can be passed an IQueryable<T>, add the criteria, and return an IQueryable<T>.
|
Generate LINQ query from multiple controls
|
I have recently written an application(vb.net) that stores and allows searching for old council plans.
Now while the application works well, the other day I was having a look at the routine that I use to generate the SQL string to pass the database and frankly, it was bad.
I was just posting a question here to see if anyone else has a better way of doing this.
What I have is a form with a bunch of controls ranging from text boxes to radio buttons, each of these controls are like database filters and when the user hits search button, a SQL string(I would really like it to be a LINQ query because I have changed to LINQ to SQL) gets generated from the completed controls and run.
The problem that I am having is matching each one of these controls to a field in the database and generating a LINQ query efficiently without doing a bunch of "if ...then...else." statements. In the past I have just used the tag property on the control to link to control to a field name in the database.
I'm sorry if this is a bit confusing, its a bit hard to describe. Just throwing it out there to see if anyone has any ideas.
Thanks
Nathan
|
[
"You could maybe wrap each control in a usercontrol that can take in IQueryable and tack on to the query if it is warranted.\nSo your page code might go something like\nvar qry = from t in _db.TableName\n select t;\n\nthen pass qry to a method on each user control\nIQueryable<t> addToQueryIfNeeded(IQueryable<t> qry)\n{\n if(should be added)\n return from t in qry\n where this == that\n select t;\n else\n return qry\n}\n\nthen after you go through each control your query would be complete and then you can .ToList() it. A cool thing about LINQ is nothing happens until you .ToList() or .First() it.\n",
"When programming complex ad-hoc query type things, attributes can be your best friend. Take a more declarative approach and decorate your classes, interfaces, and/or properties with some custom attributes, then write some generic \"glue\" code that binds your UI to your model. This will allow your model and presentation to be flexible, without having to change 1000s of lines of controller logic. In fact, this is precisely how Microsoft build the Visual Studio \"Properties\" page. You may even be able use Microsoft's \"EnvDTE.dll\" in your product depending on the requirements.\n",
"I don't know about the performance here, but if you set up the LINQ to SQL data context class you should be able to query a database table with a .Select(...) or .Where(...). You should be able to build lambda expressions for either of these dynamically. You might look into dynamic generation of lambda expressions for this purposes. I have done everything up to the point of the dynamic lambda generation, but it is possible. \n",
"I'm not 100% sure how to achieve this but I know where a good place to start would be, in the ASP.NET MVC source. In recent versions it is capable of taking the form response and pass it into a helper method which does the writing to a LINQ data source.\nI believe MVC is C# so if you're looking for a VB translation you could try using .NET Reflector and converting it back to VB.\n",
"I think you are searching how to create a \"Dynamic\" Linq Query, Here is an example about how to do it with a library of extension methods. Those methods take string arguments instead of type-safe language operators.\n",
"I don't mind sfusco's method by using attributes. The only thing that i'm not sure of is where to attach the attributes to because If I attach then to the controls declaration which is in the designer code it will get regenerated when the form changes.\nOr am I completely misunderstanding sfusco's methods?\n",
"I think perhaps the right way to do this would be an extender provider: MSDN documentation\nThen, you can use the editor to provide the field names to hook up with, and your extender provider can be passed an IQueryable<T>, add the criteria, and return an IQueryable<T>.\n"
] |
[
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"linq",
"sql"
] |
stackoverflow_0000078816_linq_sql.txt
|
Q:
A/B testing on a news site to improve relevance
If you were running a news site that created a list of 10 top news stories, and you wanted to make tweaks to your algorithm and see if people liked the new top story mix better, how would you approach this?
Simple Click logging in the DB associated with the post entry?
A/B testing where you would show one version of the algorithm to group A and another to group B and measure the clicks?
What sort of characteristics would you base your decision on as to whether the changes were better?
A:
A/B test seems a good start, and randomize the participants. You'll have to remember them so they never see both.
You could treat it like a behavioral psychology experiment, do a T-Test etc...
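For reference, the two-sample t statistic behind that suggestion (Welch's form, which doesn't assume equal variances) is

t = (mean_A - mean_B) / sqrt(s_A^2/n_A + s_B^2/n_B)

where s^2 is each group's sample variance and n its size; the per-user metric here would be something like clicks per session.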
A:
In addition to monitoring the number of clicks, it might also be helpful to monitor how long they look at the story they clicked on. It's more complicated data, but provides another level of information. You would then not only be seeing if the stories you picked out grab the user's attention, but also that the stories are able to keep it.
You could do statistical analysis (i.e. T-test like Tim suggested), but you probably won't get low enough of a standard deviation on either measure to prove significance. Although, it won't really matter: all you need is for one of the algorithms to have a higher average number of clicks and/or time spent. No need to fool around with hypothesis testing, hopefully.
Of course, there is always the option of simply asking the user if the recommendations were relevant, but that may not be feasible for your situation.
|
A/B testing on a news site to improve relevance
|
If you were running a news site that created a list of 10 top news stories, and you wanted to make tweaks to your algorithm and see if people liked the new top story mix better, how would you approach this?
Simple Click logging in the DB associated with the post entry?
A/B testing where you would show one version of the algorithm to group A and another to group B and measure the clicks?
What sort of characteristics would you base your decision on as to whether the changes were better?
|
[
"A/B test seems a good start, and randomize the participants. You'll have to remember them so they never see both.\nYou could treat it like a behavioral psychology experiment, do a T-Test etc...\n",
"In addition to monitoring number of clicks, it might also be helpful to monitor how long they look at the story they clicked on. It's more complicated data, but provides another level of information. You would then not only be seeing if the stories you picked out grab the user's attentions, but also that the stories are able to keep it.\nYou could do statistical analysis (i.e. T-test like Tim suggested), but you probably won't get low enough of a standard deviation on either measure to prove significance. Although, it won't really matter: all you need is for one of the algorithms to have a higher average number of clicks and/or time spent. No need to fool around with hypothesis testing, hopefully.\nOf course, there is always the option of simply asking the user if the recommendations were relevant, but that may not be feasible for your situation.\n"
] |
[
1,
1
] |
[] |
[] |
[
"algorithm",
"testing"
] |
stackoverflow_0000068291_algorithm_testing.txt
|
Q:
Is there a difference between apache module vs cgi (concerning security)?
E.g. Is it more secure to use mod_php instead of php-cgi?
Or is it more secure to use mod_perl instead of traditional cgi-scripts?
I'm mainly interested in security concerns, but speed might be an issue if there are significant differences.
A:
Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and do not properly do input validation.
I personally prefer FastCGI to mod_php since if a FastCGI process dies a new one will get spawned, whereas I have seen mod_php kill the entirety of Apache.
As for security, with FastCGI you could technically run the php process under a different user from the default web servers user.
On a separate note, if you are using Apache's new worker threading support you will want to make sure that you are not using mod_php as some of the extensions are not thread safe and will cause race conditions.
A:
If you run your own server go the module way, it's somewhat faster.
If you're on a shared server the decision has already been taken for you, usually on the CGI side. The reason for this is filesystem permissions. PHP as a module runs with the permissions of the http server (usually 'apache') and unless you can chmod your scripts to that user you have to chmod them to 777 - world readable. This means, alas, that your server neighbour can take a look at them - think of where you store the database access password. Most shared servers have solved this using stuff like phpsuexec and such, which run scripts with the permissions of the script owner, so you can (must) have your code chmoded to 644. Phpsuexec runs only with PHP as CGI - that's more or less all, it's just a local machine thing - makes no difference to the world at large.
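To illustrate with the permission bits themselves (the file name is hypothetical):
chmod 777 secrets.php   # module on a shared host: everyone on the box can read (and write) it
chmod 644 secrets.php   # under phpsuexec: owner read/write, everyone else read-only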
A:
Most security holes occur due to lousy programming in the script itself, so it's really kind of moot whether they are run as CGI or in modules. That said, apache modules can potentially crash the whole webserver (especially if using a threaded MPM) and mod_php is kind of famous for it.
cgi will be slower, but nowadays there are solutions to that, mainly FastCGI and friends.
What is your threat model?
A:
From the PHP install.txt doc for PHP 5.2.6:
Server modules provide significantly better performance and additional
functionality compared to the CGI binary.
For IIS/PWS:
Warning
By using the CGI setup, your server is open to several possible
attacks. Please read our CGI security section to learn how to defend
yourself from those attacks.
A:
A module such as mod_php or FastCGI is incredibly faster than plain CGI.. just don't do CGI. As others have said, the PHP program itself is the greatest security threat, but ignoring that there is one other consideration, on shared hosts.
If your script is on a shared host with other php programs and the host is not running in safe mode, then it is likely that all server processes are running as the same user. This could mean that any other php script can read your own, including database passwords. So be sure to investigate the server configuration to be sure your code is not readable to others.
Even if you control your own hosting, keep in mind that another hacked web application on the server could be a conduit into others.
A:
Using a builtin module is definitely going to be faster than using CGI. The security implications depend on the configuration. In the default configuration they are pretty much the same, but cgi allows some more secure configurations that builtin modules can't provide, specially in the context of shared hosting. What exactly do you want to secure yourself against?
|
Is there a difference between apache module vs cgi (concerning security)?
|
E.g. Is it more secure to use mod_php instead of php-cgi?
Or is it more secure to use mod_perl instead of traditional cgi-scripts?
I'm mainly interested in security concerns, but speed might be an issue if there are significant differences.
|
[
"Security in what sense? Either way it really depends on what script is running and how well it is written. Too many scripts these days are half-assed and do not properly do input validation.\nI personally prefer FastCGI to mod_php since if a FastCGI process dies a new one will get spawned, whereas I have seen mod_php kill the entirety of Apache.\nAs for security, with FastCGI you could technically run the php process under a different user from the default web servers user.\nOn a seperate note, if you are using Apache's new worker threading support you will want to make sure that you are not using mod_php as some of the extensions are not thread safe and will cause race conditions.\n",
"If you run your own server go the module way, it's somewhat faster.\nIf you're on a shared server the decision has already been taken for you, usually on the CGI side. The reason for this are filesystem permissions. PHP as a module runs with the permissions of the http server (usually 'apache') and unless you can chmod your scripts to that user you have to chmod them to 777 - world readable. This means, alas, that your server neighbour can take a look at them - think of where you store the database access password. Most shared servers have solved this using stuff like phpsuexec and such, which run scripts with the permissions of the script owner, so you can (must) have your code chmoded to 644. Phpsuexec runs only with PHP as CGI - that's more or less all, it's just a local machine thing - makes no difference to the world at large.\n",
"Most security holes occur due to lousy programming in the script itself, so it's really kind of moot if they are ran as cgi or in modules. That said, apache modules can potentially crash the whole webserver (especially if using a threaded MPM) and mod_php is kind of famous for it.\ncgi will be slower, but nowadays there are solutions to that, mainly FastCGI and friends.\nWhat is your threat model?\n",
"From the PHP install.txt doc for PHP 5.2.6:\nServer modules provide significantly better performance and additional\n functionality compared to the CGI binary.\n\nFor IIS/PWS:\nWarning\nBy using the CGI setup, your server is open to several possible\n attacks. Please read our CGI security section to learn how to defend\n yourself from those attacks.\n",
"A module such as mod_php or FastCGI is incredibly faster than plain CGI.. just don't do CGI. As others have said, the PHP program itself is the greatest security threat, but ignoring that there is one other consideration, on shared hosts. \nIf your script is on a shared host with other php programs and the host is not running in safe mode, then it is likely that all server processes are running as the same user. This could mean that any other php script can read your own, including database passwords. So be sure to investigate the server configuration to be sure your code is not readable to others.\nEven if you control your own hosting, keep in mind that another hacked web application on the server could be a conduit into others.\n",
"Using a builtin module is definitely going to be faster than using CGI. The security implications depend on the configuration. In the default configuration they are pretty much the same, but cgi allows some more secure configurations that builtin modules can't provide, specially in the context of shared hosting. What exactly do you want to secure yourself against?\n"
] |
[
15,
8,
5,
4,
3,
2
] |
[] |
[] |
[
"apache",
"mod_perl",
"mod_php",
"perl",
"php"
] |
stackoverflow_0000078108_apache_mod_perl_mod_php_perl_php.txt
|
Q:
Regular expression that rejects all input?
Is it possible to construct a regular expression that rejects all input strings?
A:
Probably this:
[^\w\W]
\w - word character (letter, digit, etc)
\W - opposite of \w
[^\w\W] - should always fail, because any character should belong to one of the character classes - \w or \W
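A quick way to convince yourself (JavaScript shown, but any flavour behaves the same):
/[^\w\W]/.test("");         // false
/[^\w\W]/.test("anything"); // false - every character is either \w or \W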
Some other snippets:
$.^
$ - assert position at the end of the string
^ - assert position at the start of the line
. - any char
(?#it's just a comment inside of empty regex)
Empty lookahead/behind should work:
(?<!)
A:
The best standard regexs (i.e., no lookahead or back-references) that reject all inputs are (after @aku above)
.^
and
$.
These are flat contradictions: "a string with a character before its beginning" and "a string with a character after its end."
NOTE: It's possible that some regex implementations would reject these patterns as ill-formed (it's pretty easy to check that ^ comes at the beginning of a pattern and $ at the end... with a regular expression), but the few I've checked do accept them. These also won't work in implementations that allow ^ and $ to match newlines.
A:
(?=not)possible
?= is a positive lookahead. They're not supported in all regexp flavors, but in many.
The expression will look for "not", then look for "possible" starting at the same position (since lookaheads don't move forward in the string).
A:
One example of why such thing could possibly be needed is when you want to filter some input with regexes and you pass regex as an argument to a function.
In spirit of functional programming, for algebraic completeness, you may want some trivial primary regexes like "everything is allowed" and "nothing is allowed".
A:
Why would you even want that? Wouldn't a simple if statement do the trick? Something along the lines of:
if ( inputString != "" )
doSomething ()
A:
To me it sounds like you're attacking a problem the wrong way, what exactly
are you trying to solve?
You could do a regular expression that catches everything and negate the result.
e.g in javascript:
if (! str.match( /./ ))
but then you could just do
if (!foo)
instead, as @[jan-hani] said.
If you're looking to embed such a regex in another regex, you
might be looking for $ or ^ instead, or use lookaheads like @[henrik-n] mentioned.
But as I said, this looks like a "I think I need x, but what I really need is y" problem.
A:
[^\x00-\xFF]
A:
It depends on what you mean by "regular expression". Do you mean regexps in a particular programming language or library? In that case the answer is probably yes, and you can refer to any of the above replies.
If you mean the regular expressions as taught in computer science classes, then the answer is no. Every regular expression matches some string. It could be the empty string, but it always matches something.
In any case, I suggest you edit the title of your question to narrow down what kinds of regular expressions you mean.
A:
[^]+ should do it.
In answer to aku's comment attached to this, I tested it with an online regex tester (http://www.regextester.com/), and so assume it works with JavaScript. I have to confess to not testing it in "real" code. ;)
|
Regular expression that rejects all input?
|
Is it possible to construct a regular expression that rejects all input strings?
|
[
"Probably this:\n[^\\w\\W]\n\n\\w - word character (letter, digit, etc)\n\\W - opposite of \\w\n[^\\w\\W] - should always fail, because any character should belong to one of the character classes - \\w or \\W\nAnother snippets:\n$.^\n\n$ - assert position at the end of the string\n^ - assert position at the start of the line\n. - any char \n(?#it's just a comment inside of empty regex)\n\nEmpty lookahead/behind should work:\n(?<!)\n\n",
"The best standard regexs (i.e., no lookahead or back-references) that reject all inputs are (after @aku above)\n.^\n\nand\n$.\n\nThese are flat contradictions: \"a string with a character before its beginning\" and \"a string with a character after its end.\"\nNOTE: It's possible that some regex implementations would reject these patterns as ill-formed (it's pretty easy to check that ^ comes at the beginning of a pattern and $ at the end... with a regular expression), but the few I've checked do accept them. These also won't work in implementations that allow ^ and $ to match newlines.\n",
"(?=not)possible\n\n?= is a positive lookahead. They're not supported in all regexp flavors, but in many.\nThe expression will look for \"not\", then look for \"possible\" starting at the same position (since lookaheads don't move forward in the string).\n",
"One example of why such thing could possibly be needed is when you want to filter some input with regexes and you pass regex as an argument to a function.\nIn spirit of functional programming, for algebraic completeness, you may want some trivial primary regexes like \"everything is allowed\" and \"nothing is allowed\".\n",
"Why would you even want that? Wouldn't a simple if statment do the trick? Something along the lines of:\nif ( inputString != \"\" )\n doSomething ()\n\n",
"To me it sounds like you're attacking a problem the wrong way, what exactly\nare you trying to solve?\nYou could do a regular expression that catches everything and negate the result.\ne.g in javascript:\nif (! str.match( /./ ))\n\nbut then you could just do\nif (!foo)\n\ninstead, as @[jan-hani] said.\nIf you're looking to embed such a regex in another regex, you\nmight be looking for $ or ^ instead, or use lookaheads like @[henrik-n] mentioned.\nBut as I said, this looks like a \"I think I need x, but what I really need is y\" problem.\n",
"[^\\x00-\\xFF]\n",
"It depends on what you mean by \"regular expression\". Do you mean regexps in a particular programming language or library? In that case the answer is probably yes, and you can refer to any of the above replies. \nIf you mean the regular expressions as taught in computer science classes, then the answer is no. Every regular expression matches some string. It could be the empty string, but it always matches something.\nIn any case, I suggest you edit the title of your question to narrow down what kinds of regular expressions you mean.\n",
"[^]+ should do it.\nIn answer to aku's comment attached to this, I tested it with an online regex tester (http://www.regextester.com/), and so assume it works with JavaScript. I have to confess to not testing it in \"real\" code. ;)\n"
] |
[
10,
5,
4,
2,
1,
1,
0,
0,
0
] |
[
"EDIT:\n [^\\n\\r\\w\\s]\n",
"Well,\nI am not sure if I understood, since I always thought of regular expression of a way to match strings. I would say the best shot you have is not using regex.\nBut, you can also use regexp that matches empty lines like ^$ or a regexp that do not match words/spaces like [^\\w\\s] ...\nHope it helps!\n"
] |
[
-1,
-1
] |
[
"regex"
] |
stackoverflow_0000062430_regex.txt
|
Q:
Can any database do math?
Can databases (MySQL in particular, any SQL--MS, Oracle, Postgres--in general) do mass updates, and figure out on their own what the new value should be? Say for example I've got a database with information about a bunch of computers, and all of these computers have drives of various sizes--anywhere from 20 to 250 GB. Then one day we upgrade every single computer by adding a 120 GB hard drive. Is there a way to say something like
update computers set total_disk_space = (whatever that row's current total_disk_space is plus 120)
A:
For the entire Table then:
Update Computers
Set Total_Disk_Space = Total_Disk_Space + 120;
If, you only want to update certain ones, then you'd need filters, for example:
Update Computers
Set Total_Disk_Space = Total_Disk_Space + 120
Where PurchaseDate BETWEEN '1/1/2008' AND GETDATE();
A:
Yeah:
update computers set total_disk_space = total_disk_space + 120;
A:
In your example, if total_disk_space is an INT you can use:
UPDATE computers
SET total_disk_space = total_disk_space + 120;
If you're storing character data, then it will be far more interesting.
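For instance, if the column were a VARCHAR you could cast in and out (SQL Server syntax shown; adjust for your RDBMS):
UPDATE computers
SET total_disk_space = CAST(CAST(total_disk_space AS INT) + 120 AS VARCHAR(10));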
|
Can any database do math?
|
Can databases (MySQL in particular, any SQL--MS, Oracle, Postgres--in general) do mass updates, and figure out on their own what the new value should be? Say for example I've got a database with information about a bunch of computers, and all of these computers have drives of various sizes--anywhere from 20 to 250 GB. Then one day we upgrade every single computer by adding a 120 GB hard drive. Is there a way to say something like
update computers set total_disk_space = (whatever that row's current total_disk_space is plus 120)
|
[
"For the entire Table then:\nUpdate Computers \nSet Total_Disk_Space = Total_Disk_Space + 120;\n\nIf, you only want to update certain ones, then you'd need filters, for example:\nUpdate Computers\nSet Total_Disk_Space = Total_Disk_Space + 120\nWhere PurchaseDate BETWEEN '1/1/2008' AND GETDATE();\n\n",
"Yeah:\nupdate computers set total_disk_space = total_disk_space + 120;\n\n",
"In your example, if total_disk_space is an INT you can use:\nUPDATE computers\nSET total_disk_space = total_disk_space + 120;\n\nI you're storing character data, then it will be far more interesting.\n"
] |
[
10,
2,
2
] |
[] |
[] |
[
"database",
"math",
"sql"
] |
stackoverflow_0000079292_database_math_sql.txt
|
Q:
How to retrieve a changed value of databound textbox within datagrid
ASP.NET 1.1 - I have a DataGrid on an ASPX page that is databound and displays a value within a textbox. The user is able to change this value, then click on a button where the code behind basically iterates through each DataGridItem in the grid, does a FindControl for the ID of the textbox then assigns the .Text value to a variable which is then used to update the database. The DataGrid is rebound with the new values.
The issue I'm having is that when assigning the .Text value to the variable, the value being retrieved is the original databound value and not the newly entered user value. Any ideas as to what may be causing this behaviour?
Code sample:
foreach(DataGridItem dgi in exGrid.Items)
{
TextBox Text1 = (TextBox)dgi.FindControl("TextID");
string exValue = Text1.Text; //This is retrieving the original bound value not the newly entered value
// do stuff with the new value
}
A:
So the code sample is from your button click event?
Are you sure you are not rebinding your datasource on postback?
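If that's the cause, the usual fix is to guard the binding with IsPostBack (C#; GetData() is a placeholder for whatever loads your data):
private void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        exGrid.DataSource = GetData();
        exGrid.DataBind(); // rebinding on every postback overwrites the user's edits
    }
}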
A:
When are you attempting to retrieve the value from the TextBox? i.e. when is the code sample you provided being executed?
If you aren't already, you'll want to set up a handler method for the ItemCommand event of the DataGrid. You should be looking for the new TextBox value within that method. You should also make sure your DataGrid is not being re-databound on postback.
I would also highly recommend reading through Scott Mitchell's excellent article series on using the DataGrid control and all of it's functions:
https://web.archive.org/web/20210608183626/https://aspnet.4guysfromrolla.com/articles/040502-1.aspx
|
How to retrieve a changed value of databound textbox within datagrid
|
ASP.NET 1.1 - I have a DataGrid on an ASPX page that is databound and displays a value within a textbox. The user is able to change this value, then click on a button where the code behind basically iterates through each DataGridItem in the grid, does a FindControl for the ID of the textbox then assigns the .Text value to a variable which is then used to update the database. The DataGrid is rebound with the new values.
The issue I'm having is that when assigning the .Text value to the variable, the value being retrieved is the original databound value and not the newly entered user value. Any ideas as to what may be causing this behaviour?
Code sample:
foreach(DataGridItem dgi in exGrid.Items)
{
TextBox Text1 = (TextBox)dgi.FindControl("TextID");
string exValue = Text1.Text; //This is retrieving the original bound value not the newly entered value
// do stuff with the new value
}
|
[
"So the code sample is from your button click event?\nAre you sure you are not rebinding your datasource on postback?\n",
"When are you attempting to retrieve the value from the TextBox? i.e. when is the code sample you provided being executed?\nIf you aren't already, you'll want to set up a handler method for the ItemCommand event of the DataGrid. You should be looking for the new TextBox value within that method. You should also make sure your DataGrid is not being re-databound on postback.\nI would also highly recommend reading through Scott Mitchell's excellent article series on using the DataGrid control and all of it's functions:\nhttps://web.archive.org/web/20210608183626/https://aspnet.4guysfromrolla.com/articles/040502-1.aspx\n"
] |
[
0,
0
] |
[] |
[] |
[
"asp.net",
"c#",
"datagrid",
"textbox"
] |
stackoverflow_0000078847_asp.net_c#_datagrid_textbox.txt
|
Q:
How to skip fields using javascript?
I have a form like this:
<form name="mine">
<input type=text name=one>
<input type=text name=two>
<input type=text name=three>
</form>
When the user types a value in 'one', I sometimes want to skip the field 'two', depending on what he typed. For example, if the user types '123' and uses Tab to move to the next field, I want to skip it and go to field three.
I tried to use OnBlur and OnEnter, without success.
Try 1:
<form name="mine">
<input type=text name=one onBlur="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=two>
<input type=text name=three>
</form>
Try 2:
<form name="mine">
<input type=text name=one>
<input type=text name=two onEnter="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=three>
</form>
but none of these works. Looks like the browser doesn't allow you to mess with focus while the focus is changing.
BTW, all this tried with Firefox on Linux.
A:
Try attaching a tabindex attribute to your elements and then changing it programmatically (in JavaScript):
<INPUT tabindex="3" type="submit" name="mySubmit">
A:
You could use the onfocus event on field two, which will be called when it receives focus. At that point, field 1's value should be updated and you can perform your check then.
A:
If you used the method you describe, and they worked, the focus would also change when the user clicks on the field, instead of tabbing to it. I can guarantee you that this would result in a frustrated user. (Why exactly it doesn't work is beyond me.)
Instead, as said before, change the tabindex of the appropriate fields as soon as the content of field one changes.
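Something along these lines, assuming the fields start with tabindex values 1, 2 and 3:
document.mine.one.onchange = function () {
    if (this.value == '123') {
        document.mine.two.tabIndex = 3;   // push 'two' after 'three'
        document.mine.three.tabIndex = 2;
    } else {
        document.mine.two.tabIndex = 2;   // restore the normal order
        document.mine.three.tabIndex = 3;
    }
};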
A:
<form name="mine">
<input type="text" name="one" onkeypress="if (mine.one.value == '123') mine.three.focus();" />
<input type="text" name="two">
<input type="text" name="three">
</form>
A:
Try onkeypress instead of onblur. Also, the onfocus of field two is where you should be sending to three. I'm assuming you don't want them typing in two if one is 123, so you can just check that on two's onfocus and send on to three.
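In code, that suggestion comes out roughly as:
document.mine.two.onfocus = function () {
    if (document.mine.one.value == '123') {
        document.mine.three.focus();
    }
};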
|
How to skip fields using javascript?
|
I have a form like this:
<form name="mine">
<input type=text name=one>
<input type=text name=two>
<input type=text name=three>
</form>
When the user types a value in 'one', I sometimes want to skip the field 'two', depending on what he typed. For example, if the user types '123' and uses Tab to move to the next field, I want to skip it and go to field three.
I tried to use OnBlur and OnEnter, without success.
Try 1:
<form name="mine">
<input type=text name=one onBlur="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=two>
<input type=text name=three>
</form>
Try 2:
<form name="mine">
<input type=text name=one>
<input type=text name=two onEnter="if (document.mine.one.value='123') document.three.focus();>
<input type=text name=three>
</form>
but none of these works. Looks like the browser doesn't allow you to mess with focus while the focus is changing.
BTW, all this tried with Firefox on Linux.
|
[
"Try to attach tabindex attribute to your elements and then programmaticaly (in javaScript change it):\n<INPUT tabindex=\"3\" type=\"submit\" name=\"mySubmit\">\n\n",
"You could use the onfocus event on field two, which will be called when it receives focus. At that point, field 1's value should be updated and you can perform your check then.\n",
"If you used the method you describe, and they worked, the focus would also change when the user clicks on the field, instead of tabbing to it. I can guarantee you that this would result in a frustrated user. (Why exactly it doesn't work is beyond me.)\nInstead, as said before, change the tabindex of the appropriate fields as soon as the content of field one changes.\n",
"<form name=\"mine\">\n <input type=\"text\" name=\"one\" onkeypress=\"if (mine.one.value == '123') mine.three.focus();\" />\n <input type=\"text\" name=\"two\">\n <input type=\"text\" name=\"three\">\n</form>\n\n",
"Try onkeypress instead of onblur. Also, on the onfocus of field two is where you should be sending to three. I'm assuming you don't want them typing in two if one is 123 so you can just check that on two's onfocus and send on to three.\n"
] |
[
3,
1,
1,
0,
0
] |
[] |
[] |
[
"firefox",
"focus",
"javascript",
"skip"
] |
stackoverflow_0000079275_firefox_focus_javascript_skip.txt
|
Q:
How to play WMV in a WindowMediaPlayer activeX controlled by a flash component?
How to play WMV in a WindowMediaPlayer activeX controlled by a flash component?
I've seen it done here: http://sport5.co.il/
Does anyone know how?
A:
The only way I see the two interacting would be to place both a WMP and a Flash component on a web page, then have JavaScript manipulate the two. For example, you could have Flash buttons for play/stop/etc., and when those are clicked, JavaScript handles the click event and tells the WMP control to start/stop playing.
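A rough sketch of the glue (the element id is hypothetical; the calls are from the WMP 9+ object model):
function playVideo(url) {
    var wmp = document.getElementById("player");
    wmp.URL = url;        // hand the WMV to the ActiveX control
    wmp.controls.play();
}

function stopVideo() {
    document.getElementById("player").controls.stop();
}

The Flash side would call these via ExternalInterface (or fscommand in older players).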
Does this help?
|
How to play WMV in a WindowMediaPlayer activeX controlled by a flash component?
|
How to play WMV in a WindowMediaPlayer activeX controlled by a flash component?
I've seen it done here: http://sport5.co.il/
Does anyone know how?
|
[
"The only way I see the two interacting would be to place both a WMP and a Flash component on a web page, then have JavaScript to manipulate the two. For example, you could have Flash buttons for play/stop/etc, and when those are clicked, javascript handles the click event and tells the WMP control to start/stop playing.\nDoes this help?\n"
] |
[
1
] |
[] |
[] |
[
"flash",
"video",
"wmp"
] |
stackoverflow_0000076069_flash_video_wmp.txt
|
Q:
Outlook + Perl + Win32::Ole: How do you select calendar entries sorted by date?
Current code opens up an Outlook Calendar database as follows:
my $outlook = Win32::OLE->GetActiveObject('Outlook.Application') || Win32::OLE->new('Outlook.Application', 'Quit');
my $namespace = $outlook->GetNamespace("MAPI");
## only fetch entries from Jan 1, 2007 onwards
my $restrictDates = "[Start] >= '01/01/2007'";
A:
Since you don't show the code that gets the date of your object, this question is impossible to answer without some knowledge of the Outlook object you are trying to access.
If you have an array of objects you can sort them by date and filter out the ones prior to a certain date.
my $sub = sub {
my $ad = $a->date_string_accessor;
my $bd = $b->date_string_accessor;
    $ad =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
    $bd =~ s:(\d+)/(\d+)/(\d+):$3 . sprintf('%02d', $1) . sprintf('%02d', $2):e;
return $ad cmp $bd;
};
my @sorted = sort $sub @unsorted;
print join("\n", @sorted);
But it would seem to me that you should use the application itself to do this -- presumably Outlook has some sort of query/sort functionality.
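It does; here is an untested sketch using the object model's own Sort and Restrict (9 is the olFolderCalendar constant), reusing the $restrictDates filter from the question:
my $folder = $namespace->GetDefaultFolder(9); # olFolderCalendar
my $items = $folder->Items;
$items->Sort("[Start]"); # sort by start date
my $restricted = $items->Restrict($restrictDates);

for (my $i = 1; $i <= $restricted->Count; $i++) {
    my $appt = $restricted->Item($i);
    print $appt->{Subject}, "\t", $appt->{Start}, "\n";
}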
|
Outlook + Perl + Win32::Ole: How do you select calendar entries sorted by date?
|
Current code opens up an Outlook Calendar database as follows:
my $outlook = Win32::OLE->GetActiveObject('Outlook.Application') || Win32::OLE->new('Outlook.Application', 'Quit');
my $namespace = $outlook->GetNamespace("MAPI");
## only fetch entries from Jan 1, 2007 onwards
my $restrictDates = "[Start] >= '01/01/2007'";
|
[
"Since you don't show the code that gets the date of your object, this question is impossible to answer without some knowledge of the Outlook object you are trying to access.\nIf you have an array of objects you can sort them by date and filter ones prior to a certain one.\nmy $sub = sub {\n my $ad = $a->date_string_accessor;\n my $bd = $b->date_string_accessor;\n $ad =~ s:(\\d+)/(\\d+)/(\\d+):$3 . sprintf('%0d', $1) . sprintf('%0d', $2):e;\n $bd =~ s:(\\d+)/(\\d+)/(\\d+):$3 . sprintf('%0d', $1) . sprintf('%0d', $2):e;\n return $ad cmp $bd;\n};\n\nmy @sorted = sort $sub @unsorted;\n\nprint join(\"\\n\", @sorted);\n\nBut it would seem to me that you should use the application itself to do this -- presumably Outlook has some sort of query/sort functionality.\n"
] |
[
1
] |
[] |
[] |
[
"outlook",
"perl",
"winapi"
] |
stackoverflow_0000078916_outlook_perl_winapi.txt
|
Q:
Rules about disabling or hiding menu items
Have you ever been in a situation where a menu function you really, really want to use but can't because it's disabled or, worse, gone altogether?
There is an argument for always leaving menus enabled and then displaying a message to the user explaining why a menu function cannot be activated when they click on it. I think there is merit in this but maybe there is a cleverer way of addressing this issue.
I would be interested to hear what others think.
A:
If you're referring to Joel's post Don't hide or disable menu items, he clarified in the StackOverflow podcast that he intended that there be information - not a dialog - telling you why a menu item wouldn't do anything:
So, the use-case I was thinking of was, you had mentioned that in the Windows Media Player, you can play things faster when you're listening to podcasts and so forth, and it'll speed them up. And when I looked in there, that was disabled. And I couldn't figure out how to enable it. And obviously the help file is no help--not that anybody reads help files, but even if you did you couldn't find the answer to that. And that was kind of frustrating, and I'd rather have that menu item be enabled and have it just tell me "I'm not going to do this right now because of the following reason. I refuse to do this."
A:
As with most questions about usability, the answer is "it depends". It depends on the problem domain, the type of user, how critical the function is and so on. There is no single answer to your question.
I think the general consensus is, never ever totally remove items from a menu. Menus allow the user to freely discover what functions are available, but if those items are hidden or move around it does nothing to help the user. Also, moving them around makes it impossible to become proficient with the application since you have to constantly scan the menus for the item you want to select.
As for disabling versus enabling an item and displaying a dialog or message explaining why it's not something you can do, I generally prefer the former. However, if there's a function that a user can't reasonably be expected to intuit from the display, leaving it enabled is a good choice.
For example, if "Paste" is disabled it's reasonably obvious to most computer users that there's nothing to paste. However, if you have a "Frizzle the Bonfraz" menu item and the user may not know what a Bonfraz is or why they might want to enable it but can't, it's a good idea to leave it enabled at least for a while.
So again, it depends. If at all possible, do what you think is best and then ask your users.
A:
To generalize it a bit (perhaps incorrectly...), which of these situations would you prefer:
To find yourself on an island with no boat or bridge in sight. Of course, you could talk to the villager in town and he would tell you the magical word to make a bridge appear...but you had no idea that magic existed.
You see that there is a bridge; however, when you get to it, there is a sign telling you that the bridge is not open to use.
You see that there is a bridge and celebrate! When you get to the end of the bridge, it tells you that the exit is not open. You must go back.
Maybe I am biased, but I don't believe that leaving the menu options enabled and allowing the user to click on it is the best of ideas. That's just wasting someone's time. There is no way for them to distinguish whether the item is available or not until they click on the item. (Scenario #3)
Hiding the item altogether has its pros and cons. Completely hidden and you run the risk of the user never discovering all these features; however, at the same time, you are presented with the opportunity of making your application 'fun' and 'discoverable.' I've always thought the visibility of actions is more suited to items like toolbars. A good example of that is when, in some applications, the picture toolbar pops up when you click on an image...and disappears when you click on text. In general, I would say that something like this is best if the overall experience of your application lends towards a "discovering" and "exploring" attitude from the user. (Scenario #1)
I would generally recommend disabling the items and providing a tooltip to the user informing them how to enable it (or even a link to Help?); however, this cannot be overdone. This must be done in moderation. (Scenario #2)
In general, when it's a context-related action (i.e. picture toolbar) that the user can easily discover, hide the items. If the user won't easily find it, have it disabled.
A:
Make it disabled but have the tooltip explain why it's disabled
A:
I've always believed that you should hide as much as you can. (Your application shouldn't be any more complex than what the user can/should do.)
If you display a menu option that a user shouldn't be using, they may click on it, but think your application is broken because nothing happens.
That's what I think at least...
A:
It depends on the situation. If the menu item still applies in the current context but isn't available because of state, it should be disabled. If the context has changed so it no longer applies, it should be removed.
A:
I've never really understood this myself (I don't program GUIs). Why even have menu items hidden or disabled in the first place? It is non-intuitive for most users who are looking for a particular menu option to find it disabled, or not even present.
Tooltips are also non-intuitive. If I'm moving my mouse across menu items, I'm not going to pause long enough to get a tooltip explanation. I'm more likely to become frustrated that something I expected to be accessible through the menu isn't there, or is disabled.
That said, I actually don't use GUI menus very often. I find the options available are often not useful, or are accessible in some more intuitive way, such as common keyboard shortcuts.
A:
You can display the 'reason' in the status bar. Or even better, use text that describes the action and contains information about when such an action is possible. For example, for the 'Copy' menu item, the text in the status bar would be: Copy the selected text. Note the 'selected' part, which tells the user that he needs to select the text to enable the menu item.
Another example: in a tool I'm making, we have a 'Drop database' menu item, but this action is only possible when you're connected to it. So, the text in the status bar goes something like: 'Drop the database (only when connected)'.
|
Rules about disabling or hiding menu items
|
Have you ever been in a situation where a menu function you really, really want to use but can't because it's disabled or, worse, gone altogether?
There is an argument for always leaving menus enabled and then displaying a message to the user explaining why a menu function cannot be activated when they click on it. I think there is merit in this but maybe there is a cleverer way of addressing this issue.
I would be interested to hear what others think.
|
[
"If you're refering to Joel's post Don't hide or disable menu items, he clarified in the StackOverflow podcast that he intended that there be information - not a dialog - telling you why a menu item wouldn't do anything:\n\nSo, the use-case I was thinking of was, you had mentioned that in the Windows Media Player, you can play things faster when you're listening to podcasts and so forth, and it'll speed them up. And when I looked in there, that was disabled. And I couldn't figure out how to enable it. And obviously the help file is no help--not that anybody reads help files, but even if you did you couldn't find the answer to that. And that was kind of frustrating, and I'd rather have that menu item be enabled and have it just tell me \"I'm not going to do this right now because of the following reason. I refuse to do this.\" \n\n",
"As with most questions about usability, the answer is \"it depends\". It depends on the problem domain, the type of user, how critical the function is and so on. There is no single answer to your question.\nI think the general consensus is, never ever totally remove items from a menu. Menus allow the user to freely discover what functions are available, but if those items are hidden or move around it does nothing to help the user. Also, moving them around makes it impossible to become proficient with the application since you have to constantly scan the menus for the item you want to select.\nAs for disabling versus enabling an item and displaying a dialog or message explaining why it's not something you can do, I generally prefer the former. However, if there's a function that a user can't reasonably be expected to intuit from the display, leaving it enabled is a good choice. \nFor example, if \"Paste\" is disabled it's reasonably obvious to most computer users that there's nothing to paste. However, if you have a \"Frizzle the Bonfraz\" menu item and the user may not know what a Bonfraz is or why they might want to enable it but can't, it's a good idea to leave it enabled at least for a while.\nSo again, it depends. If at all possible, do what you think is best and then ask your users. \n",
"To generalize it a bit (perhaps incorrectly...), which of these situations would you prefer:\n\nTo find yourself on an island with no boat or bridge in site. Of course, you could talk to the villager in town and he would tell you the magical word to make a bridge appear...but you had no idea that magic existed.\nYou see that there is a bridge; however, when you get to it, there is a sign telling you that the bridge is not open to use.\nYou see that there is a bridge and celebrate! When you get to the end of the bridge, it tells you that the exit is not open. They must go back.\n\nMaybe I am biased, but I don't believe that leaving the menu options enabled and allowing the user to click on it is the best of idea. That's just wasting someone's time. There is no way for them to distinguish that the item is available or not until they click on the item. (Scenario #3)\nHiding the item all together has its pros and cons. Completely hidden and you run the risk of the user never discovering all these features; however, at the same time, you are presented with the opportunity of making your application 'fun' and 'discoverable.' I've always thought the visibility of actions is more suited to items like toolbars. A good example of that is in when in some applications the picture toolbar pops up when you click on an image...and disappears when you click on text. In general, I would say that something like this is best if the overall experience of your application lends towards a \"discovering\" and \"exploring\" attitude from the user. (Scenario #1)\nI would generally recommend disabling the items and providing a tooltip to the user informing them how to enable it (or even a link to Help?); however, this cannot be overdone. This must be done in moderation. (Scenario #2)\nIn general, when it's a context-related action (i.e. picture toolbar) that the user can easily discover, hide the items. If the user won't easily find it, have it disabled. \n",
"Make it disabled but have the tooltip explain why it's disabled\n",
"I've always believed that you should hide as much as you can. (Your application shouldn't be any more complex than what the user can/should do.)\nIf you display a menu option that a user shouldn't be using, they may click on it, but think your application is broken because nothing happens.\nThat's what I think at least...\n",
"It depends on the situation. If the menu item has applies in the current context but isn't available because of state, it should be disabled. If the context has changed so it no longer applies, it should be removed.\n",
"I've never really understood this myself (I don't program GUIs). Why even have menu items hidden or disabled in the first place? It is non-intuitive for most users who are looking for a particular menu option to find it disabled, or not even present.\nTooltips are also non-intuitive. If I'm moving my mouse across menu items, I'm not going to pause long enough to get a tooltip explanation. I'm more likely to become frustrated that something I expected to be accessible through the menu isn't there, or is disabled. \nThat said, I actually don't use GUI menus very often. I find the options available are often not useful, or are accessible in some more intuitive way, such as common keyboard shortcuts.\n",
"You can display the 'reason' in the status bar. Or even better, use a text that describes the action and contains information when such action is possible. For example, for 'Copy' menu item, the text in status bar would be: Copy the selected text. Note the 'selected' part, which tells the user that he needs to select the text to enable the menu item.\nAnother example in a tool I'm making, we have 'Drop database' menu item, but this action is only possible when you're connected to it. So, the text in status bar goes something like: 'Drop the database (only when connected)'.\n"
] |
[
10,
4,
4,
3,
0,
0,
0,
0
] |
[] |
[] |
[
"standards",
"user_interface"
] |
stackoverflow_0000079033_standards_user_interface.txt
|
Q:
How can I prevent deformation when rotating about the line-of-sight in OpenGL?
I've drawn an ellipse in the XZ plane, and set my perspective slightly up on the Y-axis and back on the Z, looking at the center of the ellipse from a 45-degree angle, using gluPerspective() to set my viewing frustum.
Unrotated, the major axis of the ellipse spans the width of my viewport. When I rotate 90-degrees about my line-of-sight, the major axis of the ellipse now spans the height of my viewport, thus deforming the ellipse (in this case, making it appear less eccentric).
What do I need to do to prevent this deformation (or at least account for it), so rotation about the line-of-sight preserves the perceived major axis of the ellipse (in this case, causing it to go beyond the viewport)?
A:
It looks like you're using 1.0 as the aspect when you call gluPerspective(). You should use width/height. For example, if your viewport is 640x480, you would use 1.33333 as the aspect argument.
A:
According to the OpenGL Spec:
void gluPerspective( GLdouble fovy,
GLdouble aspect,
GLdouble zNear,
GLdouble zFar )
Aspect should be a function of your window width and height. Specifically width divided by height (but watch out for division by zero).
Perhaps you are using 1 as the aspect which is not accurate unless your window is a square.
A:
It looks like the aspect parameter on your gluPerspective call needs tweaking. See The Man Page. If your window were physically square, the aspect ratio would be 1 and your problem would go away. However, your window is rectangular, so the viewing frustum needs to be non-square.
Set the aspect ratio to window_width / window_height, and your ellipse should look correct. Note that you'll need to update this whenever the window resizes; if you're using GLUT set a glutReshapeFunc and recalculate the projection matrix in there.
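A reshape callback along those lines (the field of view and clip planes here are placeholders):
void reshape(int width, int height)
{
    if (height == 0) height = 1; /* avoid a divide by zero */
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (GLdouble) width / (GLdouble) height, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

/* registered once at startup with: glutReshapeFunc(reshape); */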
|
How can I prevent deformation when rotating about the line-of-sight in OpenGL?
|
I've drawn an ellipse in the XZ plane, and set my perspective slightly up on the Y-axis and back on the Z, looking at the center of the ellipse from a 45-degree angle, using gluPerspective() to set my viewing frustum.
Unrotated, the major axis of the ellipse spans the width of my viewport. When I rotate 90-degrees about my line-of-sight, the major axis of the ellipse now spans the height of my viewport, thus deforming the ellipse (in this case, making it appear less eccentric).
What do I need to do to prevent this deformation (or at least account for it), so rotation about the line-of-sight preserves the perceived major axis of the ellipse (in this case, causing it to go beyond the viewport)?
|
[
"It looks like you're using 1.0 as the aspect when you call gluPerspective(). You should use width/height. For example, if your viewport is 640x480, you would use 1.33333 as the aspect argument.\n",
"According to the OpenGL Spec:\nvoid gluPerspective( GLdouble fovy,\n GLdouble aspect,\n GLdouble zNear,\n GLdouble zFar )\n\nAspect should be a function of your window width and height. Specifically width divided by height (but watch out for division by zero).\nPerhaps you are using 1 as the aspect which is not accurate unless your window is a square.\n",
"It looks like the aspect parameter on your gluPerspective call need tweaking. See The Man Page. If your window were physically square, the aspect ratio would be 1 and your problem would go away. However, your window is rectangular, so the viewing frustum needs to be non-square.\nSet the aspect ratio to window_width / window_height, and your ellipse should look correct. Note that you'll need to update this whenever the window resizes; if you're using GLUT set a glutReshapeFunc and recalculate the projection matrix in there.\n"
] |
[
5,
3,
1
] |
[] |
[] |
[
"c",
"opengl"
] |
stackoverflow_0000079360_c_opengl.txt
|
Q:
Publishing multiple sites on a single instance of umbraco
I am looking to set up a parallel site to one that already uses Umbraco for its content management system. The new site would share admins, templates, macros, and media resources, but not any content. If I set up multiple host headers pointing to the same directory with an Umbraco install, how can I switch the top node (home vs home2) of the site based on which URL is being accessed?
A:
I believe you first have to change a setting in umbracosettings.config:
<useDomainPrefixes>true</useDomainPrefixes>
Then I think you also have to right click on each top node and click 'Manage Hostnames', then add the appropriate host name for that top node.
It already sounds like you have IIS configured correctly, so you should be good to go on that front.
It's been a while since I've worked with Umbraco, but I think I'm mostly right ;-)
|
Publishing multiple sites on a single instance of umbraco
|
I am looking to set up a parallel site to one that already uses Umbraco for its content management system. The new site would share admins, templates, macros, and media resources, but not any content. If I set up multiple host headers pointing to the same directory with an Umbraco install, how can I switch the top node (home vs home2) of the site based on which URL is being accessed?
|
[
"I believe you first have to change a setting in umbracosettings.config:\n<useDomainPrefixes>true</useDomainPrefixes>\n\nThen I think you also have to right click on each top node and click 'Manage Hostnames', then add the appropriate host name for that top node.\nIt already sounds like you have IIS configured correctly, so you should be good to go on that front.\nIt's been a while since I've worked with Umbraco, but I think I'm mostly right ;-)\n"
] |
[
16
] |
[] |
[] |
[
"content_management_system",
"hostheaders",
"sites",
"umbraco"
] |
stackoverflow_0000078049_content_management_system_hostheaders_sites_umbraco.txt
|
Q:
.NET 3.5 SP1 changes for ASP.NET
I would like to test out the new SP1 on my development server and then install it on my production server. But I wonder what it enhances in the ASP.NET portion specifically, as that is where my concerns are.
I read the docs found on the SP1 download page, but they seem a bit too general to me, without much on the ASP.NET portion. Anyone have any clues on this?
A:
http://weblogs.asp.net/scottgu/archive/2008/05/12/visual-studio-2008-and-net-framework-3-5-service-pack-1-beta.aspx
There is a section in there on the improvements for web development.. it can be vague as well but has links to videos and further information. I suggest checking it out.
A:
Short list:
ASP.NET: Dynamic Data now included in .Net 3.5 and all necessary project templates for VS also available
ASP.NET: History support added. Now we can control AJAX pages behavior on Back/Forward buttons pressed in very simple manner that was shown previously on MS demos
ASP.NET: Script Combining feature added to reduce the number of requests and improving page load time. Before this moment we used custom approach for client scripts combining
VS2008: Added richer support of JavaScript code formatting and Intellisense, especially for separated .js files
more on blog: http://dimarzionist.wordpress.com/2008/05/13/full-list-of-changes-in-sp1-beta/
A:
Please check out this article: Hidden Gems - Not the same old 3.5 SP1 post
It details some of the minor improvements in SP1. It also links to Scott Guthrie's article on SP1.
In my experience, the upgrade went well. I had one issue with a site with poorly coded AJAX - nothing much at all.
Ron
A:
To be honest, the only real improvement I've seen from SP1 (and this is because I haven't been looking for specific improvements) is that it will now read your TODO and HACK tasks from all files in a project instead of just the file that is open. That particular feature is useful, though.
|
.NET 3.5 SP1 changes for ASP.NET
|
I would like to test out the new SP1 on my development server and then install it on my production server. But I wonder what it enhances in the ASP.NET portion specifically, as that is where my concerns are.
I read the docs found on the SP1 download page, but they seem a bit too general to me, without much on the ASP.NET portion. Anyone have any clues on this?
|
[
"http://weblogs.asp.net/scottgu/archive/2008/05/12/visual-studio-2008-and-net-framework-3-5-service-pack-1-beta.aspx\nThere is a section in there on the improvements for web development.. it can be vague as well but has links to videos and further information. I suggest checking it out.\n",
"Short list:\nASP.NET: Dynamic Data now included in .Net 3.5 and all necessary project templates for VS also available\nASP.NET: History support added. Now we can control AJAX pages behavior on Back/Forward buttons pressed in very simple manner that was shown previously on MS demos\nASP.NET: Script Combining feature added to reduce the number of requests and improving page load time. Before this moment we used custom approach for client scripts combining\nVS2008: Added richer support of JavaScript code formatting and Intellisense, especially for separated .js files\nmore on blog: http://dimarzionist.wordpress.com/2008/05/13/full-list-of-changes-in-sp1-beta/\n",
"Please check out this article: Hidden Gems - Not the same old 3.5 SP1 post\nIt details some of the minor improvements in SP1. It also links to Scott Guthrie's article on SP1.\nIn my experience, the upgrade went well. I had one issue with a site with poorly coded AJAX - nothing much at all.\n\nRon\n\n",
"To be honest the only real improvement i've seen from SP1 and this is because I haven't been looking for specific improvements is that it will now read your TODO and HACK tasks from all files in a project instead of just the file open. That particular feature is useful though.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
".net_3.5"
] |
stackoverflow_0000079451_.net_3.5.txt
|
Q:
How do I invoke an exe that is an embedded resource in a .Net assembly?
I have a non-.Net executable file that is included in my .net assembly as an embedded resource. Is there a way to run this executable that does not involve writing it out to disk and launching it?
This is .Net 2.0.
A:
You might try injecting your exe into a suspended process and then awakening the hijacked process, but this seems like a recipe for disaster.
A:
You can load a .NET assembly from a byte array using an overload of Assembly.Load.
However, there are implications for the security model that need to be considered which make things more complex. See the discussion here, and also this thread.
If your embedded executable is not .NET then I think you will have to write it out to disk first.
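If the embedded executable is managed, here is a minimal sketch of the byte-array approach (the resource name "MyApp.Tool.exe" is hypothetical, and this assumes the embedded Main takes a string[] argument):
using System.IO;
using System.Reflection;

class Loader
{
    static void Main()
    {
        Assembly host = Assembly.GetExecutingAssembly();
        // Resource names follow "DefaultNamespace.FileName"; adjust to yours.
        using (Stream s = host.GetManifestResourceStream("MyApp.Tool.exe"))
        {
            byte[] raw = new byte[s.Length];
            s.Read(raw, 0, raw.Length);
            // Assembly.Load(byte[]) only works for managed assemblies;
            // a native exe still has to be written to disk and launched.
            Assembly tool = Assembly.Load(raw);
            tool.EntryPoint.Invoke(null, new object[] { new string[0] });
        }
    }
}

(If the embedded Main is parameterless, pass null instead of the arguments array.)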
|
How do I invoke an exe that is an embedded resource in a .Net assembly?
|
I have a non-.Net executable file that is included in my .net assembly as an embedded resource. Is there a way to run this executable that does not involve writing it out to disk and launching it?
This is .Net 2.0.
|
[
"You might try injecting your exe into a suspended process and then awakening the hijacked process, but this seems like a recipe for disaster.\n",
"You can load a .NET assembly from a byte array using an overload of Assembly.Load.\nHowever, there are implications for the security model that need to be considered which make things more complex. See the discussion here, and also this thread.\nIf your embedded executable is not .NET then I think you will have to write it out to disk first.\n"
] |
[
4,
3
] |
[] |
[] |
[
".net"
] |
stackoverflow_0000072615_.net.txt
|
Q:
What's the correct term for "number of std deviations" away from a mean
I've computed the mean & variance of a set of values, and I want to pass along the value that represents the # of std deviations away from mean for each number in the set. Is there a better term for this, or should I just call it num_of_std_devs_from_mean ...
A:
Some suggestions here:
Standard score (z-value, z-score, normal score)
but "sigma" or "stdev_distance" would probably be clearer
A:
The standard deviation is usually denoted with the letter σ (sigma). Personally, I think more people will understand what you mean if you do say number of standard deviations.
As for a variable name, as long as you comment the declaration you could shorten it to std_devs.
A:
sigma is what you want, I think.
A:
That is normalizing your values. You could just refer to it as the normalized value. Maybe norm_val would be more appropriate.
A:
I've always heard it as number of standard deviations
A:
Deviation may be what you're after. Deviation is the distance between a data point and the mean.
|
What's the correct term for "number of std deviations" away from a mean
|
I've computed the mean & variance of a set of values, and I want to pass along the value that represents the # of std deviations away from mean for each number in the set. Is there a better term for this, or should I just call it num_of_std_devs_from_mean ...
|
[
"Some suggestions here:\nStandard score (z-value, z-score, normal score)\nbut \"sigma\" or \"stdev_distance\" would probably be clearer\n",
"The standard deviation is usually denoted with the letter σ (sigma). Personally, I think more people will understand what you mean if you do say number of standard deviations.\nAs for a variable name, as long as you comment the declaration you could shorten it to std_devs.\n",
"sigma is what you want, I think.\n",
"That is normalizing your values. You could just refer to it as the normalized value. Maybe norm_val would be more appropriate.\n",
"I've always heard it as number of standard deviations\n",
"Deviation may be what you're after. Deviation is the distance between a data point and the mean.\n"
] |
[
9,
4,
4,
1,
0,
0
] |
[] |
[] |
[
"statistics"
] |
stackoverflow_0000079476_statistics.txt
|
Q:
How to save a public html page with all media and preserve structure
Looking for a Linux application (or Firefox extension) that will allow me to scrape an HTML mockup and keep the page's integrity.
Firefox does an almost perfect job but doesn't grab images referenced in the CSS.
The Scrapbook extension for Firefox gets everything, but flattens the directory structure.
I wouldn't terribly mind if all folders became children of the index page.
A:
See Website Mirroring With wget
wget --mirror -w 2 -p --html-extension --convert-links http://www.yourdomain.com
A:
Have you tried wget?
A:
wget -r does what you want, and if not, there are plenty of flags to configure it. See man wget.
Another option is curl, which is even more powerful. See http://curl.haxx.se/.
A:
Teleport Pro is great for this sort of thing. You can point it at complete websites and it will download a copy locally maintaining directory structure, and replacing absolute links with relative ones as necessary. You can also specify whether you want content from other third-party websites linked to from the original site.
|
How to save a public html page with all media and preserve structure
|
Looking for a Linux application (or Firefox extension) that will allow me to scrape an HTML mockup and keep the page's integrity.
Firefox does an almost perfect job but doesn't grab images referenced in the CSS.
The Scrapbook extension for Firefox gets everything, but flattens the directory structure.
I wouldn't terribly mind if all folders became children of the index page.
|
[
"See Website Mirroring With wget\nwget --mirror –w 2 –p --HTML-extension –-convert-links http://www.yourdomain.com\n\n",
"Have you tried wget?\n",
"wget -r does what you want, and if not, there are plenty of flags to configure it. See man wget.\nAnother option is curl, which is even more powerful. See http://curl.haxx.se/.\n",
"Teleport Pro is great for this sort of thing. You can point it at complete websites and it will download a copy locally maintaining directory structure, and replacing absolute links with relative ones as necessary. You can also specify whether you want content from other third-party websites linked to from the original site.\n"
] |
[
6,
3,
1,
0
] |
[] |
[] |
[
"css",
"directory_structure",
"screen",
"screen_scraping"
] |
stackoverflow_0000079612_css_directory_structure_screen_screen_scraping.txt
|
Q:
Databinding ApplicationSettings to Custom Components
I have a custom component where I have implemented
INotifyPropertyChanged and IBindableComponent.
However, when I try to databind a property, the designer adds this
line:
this.component11.TestString =
global::WindowsFormsApplication2.Properties.Settings.Default.Setting;
instead of creating a binding as it does with a TextBox:
this.textBox2.DataBindings.Add(new System.Windows.Forms.Binding(
"Text",
global::WindowsFormsApplication2.Properties.Settings.Default,
"Setting2",
true,
System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged));
I would have thought the designer would simply look to see if
IBindableComponent is implemented and if it is, generate the binding
code instead of the assignment code.
Any ideas why this works with a textbox and not my custom component?
Here is my custom component:
public partial class Component1 : Component, INotifyPropertyChanged, IBindableComponent
{
public Component1()
{
InitializeComponent();
}
public Component1(IContainer container)
{
container.Add(this);
InitializeComponent();
}
private string teststring;
[Bindable(true)]
public string TestString
{
get
{
return teststring;
}
set
{
if (teststring != value)
{
teststring = value;
FirePropertyChanged("TestString");
}
}
}
#region INotifyPropertyChanged Members
public event PropertyChangedEventHandler PropertyChanged;
void FirePropertyChanged(string propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
}
#endregion
#region IBindableComponent Members
private BindingContext bindingContext = null;
public BindingContext BindingContext
{
get
{
if (null == bindingContext)
{
bindingContext = new BindingContext();
}
return bindingContext;
}
set { bindingContext = value; }
}
private ControlBindingsCollection databindings;
public ControlBindingsCollection DataBindings
{
get
{
if (null == databindings)
{
databindings = new ControlBindingsCollection(this);
}
return databindings;
}
set { databindings = value; }
}
#endregion
}
print("code sample");
A:
Try:
[ DesignerSerializationVisibility( DesignerSerializationVisibility.Hidden ),
EditorBrowsable( EditorBrowsableState.Advanced ),
Browsable( false ) ]
public BindingContext BindingContext {
...
}
[ ParenthesizePropertyName( true ),
RefreshProperties( RefreshProperties.All ),
DesignerSerializationVisibility( DesignerSerializationVisibility.Content ),
Category( "Data" ) ]
public ControlBindingsCollection DataBindings {
...
}
|
Databinding ApplicationSettings to Custom Components
|
I have a custom component where I have implemented
INotifyPropertyChanged and IBindableComponent.
However, when I try to databind a property, the designer adds this
line:
this.component11.TestString =
global::WindowsFormsApplication2.Properties.Settings.Default.Setting;
instead of creating a binding as it does with a TextBox:
this.textBox2.DataBindings.Add(new System.Windows.Forms.Binding(
"Text",
global::WindowsFormsApplication2.Properties.Settings.Default,
"Setting2",
true,
System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged));
I would have thought the designer would simply look to see if
IBindableComponent is implemented and if it is, generate the binding
code instead of the assignment code.
Any ideas why this works with a textbox and not my custom component?
Here is my custom component:
public partial class Component1 : Component, INotifyPropertyChanged, IBindableComponent
{
public Component1()
{
InitializeComponent();
}
public Component1(IContainer container)
{
container.Add(this);
InitializeComponent();
}
private string teststring;
[Bindable(true)]
public string TestString
{
get
{
return teststring;
}
set
{
if (teststring != value)
{
teststring = value;
FirePropertyChanged("TestString");
}
}
}
#region INotifyPropertyChanged Members
public event PropertyChangedEventHandler PropertyChanged;
void FirePropertyChanged(string propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
}
#endregion
#region IBindableComponent Members
private BindingContext bindingContext = null;
public BindingContext BindingContext
{
get
{
if (null == bindingContext)
{
bindingContext = new BindingContext();
}
return bindingContext;
}
set { bindingContext = value; }
}
private ControlBindingsCollection databindings;
public ControlBindingsCollection DataBindings
{
get
{
if (null == databindings)
{
databindings = new ControlBindingsCollection(this);
}
return databindings;
}
set { databindings = value; }
}
#endregion
}
print("code sample");
|
[
"Try:\n[ DesignerSerializationVisibility( DesignerSerializationVisibility.Hidden ),\n EditorBrowsable( EditorBrowsableState.Advanced ),\n Browsable( false ) ]\npublic BindingContext BindingContext {\n ...\n}\n\n[ ParenthesizePropertyName( true ),\n RefreshProperties( RefreshProperties.All ),\n DesignerSerializationVisibility( DesignerSerializationVisibility.Content ),\n Category( \"Data\" ) ]\npublic ControlBindingsCollection DataBindings {\n ...\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"c#",
"custom_component",
"data_binding"
] |
stackoverflow_0000075577_c#_custom_component_data_binding.txt
|
Q:
Firebird database replication
I have reached the point where I've decided to replace my custom-built replication system with a system that has been built by someone else, mainly for reliability purposes. Can anyone recommend any replication system that is worth it? Is FiBRE any good?
What I need might be a little different from a generic system, though. I have five departments, each having its own copy of the database, and the master in a remote location. The departments all have sporadic internet connections; the master is always online. The data has to flow back and forth from the master, meaning that all departments need to be equal to the master (when an internet connection is available) and upload changes made during network outages that are later distributed to the other departments by the master.
A:
I have used CopyCat to create a replication project. It allows you to create your own replication client/server configuration using CodeGear Delphi. This gives you complete flexibility as to how you want your replication to work.
If you don't use Delphi, or need a prefabricated solution, CopyTiger does the same thing already configured.
A:
I find IBReplicator by IBPhoenix to be the most complete, but there are many more listed here (with short descriptions):
http://www.firebirdfaq.org/faq249/
A:
The IBPhoenix site lists replication tools:
IBPhoenix Replication Tools
|
Firebird database replication
|
I have reached the point where I've decided to replace my custom-built replication system with a system that has been built by someone else, mainly for reliability purposes. Can anyone recommend any replication system that is worth it? Is FiBRE any good?
What I need might be a little different from a generic system, though. I have five departments, each having its own copy of the database, and the master in a remote location. The departments all have sporadic internet connections; the master is always online. The data has to flow back and forth from the master, meaning that all departments need to be equal to the master (when an internet connection is available) and upload changes made during network outages that are later distributed to the other departments by the master.
|
[
"I have used CopyCat to create a replication project. It allows you create your own replication client/server configuration using CodeGear Delphi. This allows you complete flexibilty as to how you want your replication to work.\nIf you don't use Delphi, or need a prefabricated solution, CopyTiger does the same thing already configured. \n",
"I find IBReplicator by IBPhoenix to be the most complete, but there are many more listed here (with short descriptions):\nhttp://www.firebirdfaq.org/faq249/\n",
"The Ibphoenix site list replication tools\nIbPhoenix Replication Tools\n"
] |
[
3,
3,
1
] |
[] |
[] |
[
"database",
"firebird",
"replication"
] |
stackoverflow_0000061929_database_firebird_replication.txt
|
Q:
Looking up document library items in a SharePoint workflow
I am using SharePoint Designer to create a workflow. I'm trying to get at a sub-folder in a document library in the "Define Workflow Lookup" dialog. There are two issues with this:
I can't look up items by URL Path. If I look up by Title, I can
output the URL Path, but selecting by path doesn't work. What
fields can/can't I use?
I can't get at any sub-folders. I can get at the top-level folder,
but the sub-folders don't seem to be available. I noticed the same
thing is true when looking at the data for a document library in
the "Data Source Library" in Designer.
To clarify, the workflow is on a different list, not on the document library.
UPD: Also, I know how to do this through the object model, the question is how to do it in SharePoint Designer without deploying code to the server.
A:
I really don't have much experience with Sharepoint, but I thought I could at least provide some answer - even if it's the wrong one.
From another dev I've spoken to it sounds like it's tough to get into any subfolders, so you might need to look at making your own custom workflow.
Maybe something like LINQ to Sharepoint might be able to help you with actually getting in and enumerating the subfolders and getting to the data that you need? LINQ to Sharepoint
A:
The issue is that "folders" are not really folders as they are accessed by querystring, not a "/" as with real folders.
|
Looking up document library items in a SharePoint workflow
|
I am using SharePoint Designer to create a workflow. I'm trying to get at a sub-folder in a document library in the "Define Workflow Lookup" dialog. There are two issues with this:
I can't look up items by URL Path. If I look up by Title, I can
output the URL Path, but selecting by path doesn't work. What
fields can/can't I use?
I can't get at any sub-folders. I can get at the top-level folder,
but the sub-folders don't seem to be available. I noticed the same
thing is true when looking at the data for a document library in
the "Data Source Library" in Designer.
To clarify, the workflow is on a different list, not on the document library.
UPD: Also, I know how to do this through the object model, the question is how to do it in SharePoint Designer without deploying code to the server.
|
[
"I really don't have much experience with Sharepoint, but I thought I could at least provide some answer - even if it's the wrong one.\nFrom another dev I've spoken to it sounds like it's tough to get into any subfolders, so you might need to look at making your own custom workflow.\nMaybe something like LINQ to Sharepoint might be able to help you with actually getting in and enumerating the subfolders and getting to the data that you need? LINQ to Sharepoint\n",
"The issue is that \"folders\" are not really folders as they are accessed by querystring, not a \"/\" as with real folders.\n"
] |
[
1,
0
] |
[] |
[] |
[
"moss",
"sharepoint",
"sharepoint_designer",
"workflow"
] |
stackoverflow_0000033349_moss_sharepoint_sharepoint_designer_workflow.txt
|
Q:
Make a Query in MS Access default to landscape when printed
How can I programmatically make a query in MS Access default to landscape when printed, specifically when viewing it as a PivotChart? I'm currently attempting this in MS Access 2003, but would like to see a solution for any version.
A:
The following function should do the trick:
Function SetLandscape()
Application.Printer.Orientation = acPRORLandscape
End Function
Should be able to call this from the autoexec function to ensure it always runs.
A:
Yes, ahockley's call sets the application's printer orientation to landscape. I tried an experiment and it worked well. I know this doesn't produce a pivot table, but I didn't set one up to use, so it opens and prints a regular query.
Private Sub PrintQueryLandscape()  ' a Sub name is required; this one is arbitrary
    Application.Printer.Orientation = acPRORLandscape
    DoCmd.OpenQuery "qry1", acViewNormal, acReadOnly
    DoCmd.PrintOut acPrintAll
End Sub
If you want to close the query after printing it, add:
DoCmd.Close acQuery, "qry1", acSaveNo
|
Make a Query in MS Access default to landscape when printed
|
How can I programmatically make a query in MS Access default to landscape when printed, specifically when viewing it as a PivotChart? I'm currently attempting this in MS Access 2003, but would like to see a solution for any version.
|
[
"The following function should do the trick:\nFunction SetLandscape()\n Application.Printer.Orientation = acPRORLandscape\nEnd Function\n\nShould be able to call this from the autoexec function to ensure it always runs.\n",
"Yes ahockley's call sets the application's printer orientation to landscape. I tried an experiment and it worked well. I know this doesn't produce a pivot table, but I didn't setup one to use, so it opens and prints a regular query. \nPrivate sub\n Application.Printer.Orientation = acPRORLandscape\n DoCmd.OpenQuery \"qry1\", acViewNormal, acReadOnly\n DoCmd.PrintOut acPrintAll\nEnd Sub\n\nIf you want to close the query after printing it, add:\ndocmd.Close acQuery, \"qry1\", acSaveNo\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"database",
"ms_access",
"printing",
"vba"
] |
stackoverflow_0000078757_database_ms_access_printing_vba.txt
|
Q:
How do I get started processing email related to website activity?
I am writing a web application that requires user interaction via email. I'm curious if there is a best practice or recommended source for learning about processing email. I am writing my application in Python, but I'm not sure what mail server to use or how to format the message or subject line to account for automated processing. I'm also looking for guidance on processing bouncebacks.
A:
There are some pretty serious concerns here for how to send email automatically, and here are a few:
Use an email library. Python includes one called 'email'. This is your friend, it will stop you from doing anything tragically wrong. Read an example from the Python Manual.
Some points that will stop you from getting blocked by spam filters:
Always send from a valid email address. You must be able to send email to this address and have it received (it can go into /dev/null after it's received, but it must be possible to /deliver/ there). This will stop spam filters that do Sender Address Verification from blocking your mail.
The email address you send from on the server.sendmail(fromaddr, [toaddr]) line will be where bounces go. The From: line in the email is a totally different address, and that's where mail will go when the user hits 'Reply:'. Use this to your advantage, bounces can go to one place, while reply goes to another.
Send email to a local mail server, I recommend postfix. This local server will receive your mail and be responsible for sending it to your upstream server. Once it has been delivered to the local server, treat it as 'sent' from a programmatic point of view.
If you have a site that is on a static ip in a datacenter of good reputation, don't be afraid to simply relay the mail directly to the internet. If you're in a datacenter full of script kiddies and spammers, you will need to relay this mail via a public MTA of good reputation, hopefully you will be able to work this out without a hassle.
Don't send an email in only HTML. Always send it in Plain and HTML, or just Plain. Be nice, I use a text only email client, and you don't want to annoy me.
Verify that you're not running SPF on your email domain, or get it configured to allow your server to send the mail. Do this by doing a TXT lookup on your domain.
$ dig google.com txt
...snip...
;; ANSWER SECTION:
google.com. 300 IN TXT "v=spf1 include:_netblocks.google.com ~all"
As you can see from that result, there's an SPF record there. If you don't have SPF, there won't be a TXT record. Read more about SPF on wikipedia.
Hope that helps.
A:
Some general information with regards to automated mail processing...
First, the mail server "brand" itself isn't that important for broadcasting or receiving emails. All of them support the standard SMTP/POP3 communication protocols. Most even have IMAP support and some level of spam filtering. That said, try to use a current-generation email server.
Second, be aware that in an effort to reduce spam a lot of the receiving mail servers out there will simply throw a message away instead of responding back that a mail account doesn't exist. Which means you may not receive those.
Bear in mind that getting past spam filters is an art. A number of ISPs watch for duplicate messages, messages that look like spam based on keywords or other content, etc. This is sometimes independent of the quantity of messages sent; I've seen messages with as few as 50 copies get blocked by AOL even though they were legitimate emails. So, testing is your friend, and look into this article on Wikipedia on anti-spam techniques. Then make sure you're not doing that crap.
As far as processing the messages, just remember it's a queued system. Connect to the server via POP3 to retrieve messages, open each one, take some action, delete or archive the message, and move on.
With regards to bouncebacks, let the mail server do most of the work. You should be able to configure it to notify a certain email account on the server in the event that it is unable to deliver a message. You can check that account periodically and process the Non Delivery Reports as necessary.
|
How do I get started processing email related to website activity?
|
I am writing a web application that requires user interaction via email. I'm curious if there is a best practice or recommended source for learning about processing email. I am writing my application in Python, but I'm not sure what mail server to use or how to format the message or subject line to account for automated processing. I'm also looking for guidance on processing bouncebacks.
|
[
"There are some pretty serious concerns here for how to send email automatically, and here are a few:\nUse an email library. Python includes one called 'email'. This is your friend, it will stop you from doing anything tragically wrong. Read an example from the Python Manual.\nSome points that will stop you from getting blocked by spam filters:\nAlways send from a valid email address. You must be able to send email to this address and have it received (it can go into /dev/null after it's received, but it must be possible to /deliver/ there). This will stop spam filters that do Sender Address Verification from blocking your mail.\nThe email address you send from on the server.sendmail(fromaddr, [toaddr]) line will be where bounces go. The From: line in the email is a totally different address, and that's where mail will go when the user hits 'Reply:'. Use this to your advantage, bounces can go to one place, while reply goes to another.\nSend email to a local mail server, I recommend postfix. This local server will receive your mail and be responsible for sending it to your upstream server. Once it has been delivered to the local server, treat it as 'sent' from a programmatic point of view.\nIf you have a site that is on a static ip in a datacenter of good reputation, don't be afraid to simply relay the mail directly to the internet. If you're in a datacenter full of script kiddies and spammers, you will need to relay this mail via a public MTA of good reputation, hopefully you will be able to work this out without a hassle.\nDon't send an email in only HTML. Always send it in Plain and HTML, or just Plain. Be nice, I use a text only email client, and you don't want to annoy me.\nVerify that you're not running SPF on your email domain, or get it configured to allow your server to send the mail. Do this by doing a TXT lookup on your domain.\n$ dig google.com txt\n...snip...\n;; ANSWER SECTION:\ngoogle.com. 300 IN TXT \"v=spf1 include:_netblocks.google.com ~all\"\n\nAs you can see from that result, there's an SPF record there. If you don't have SPF, there won't be a TXT record. Read more about SPF on wikipedia.\nHope that helps.\n",
"Some general information with regards to automated mail processing...\nFirst, the mail server \"brand\" itself isn't that important for broadcasting or receiving emails. All of them support the standard smtp / pop3 communications protocol. Most even have IMAP support and have some level of spam filtering. That said, try to use a current generation email server.\nSecond, be aware that in an effort to reduce spam a lot of the receiving mail servers out there will simply throw a message away instead of responding back that a mail account doesn't exist. Which means you may not receive those.\nBear in mind that getting past spam filters is an art. A number of isp's watch for duplicate messages, messages that look like spam based on keywords or other content, etc. This is sometimes independent of the quantity of messages sent; I've seen messages with as few as 50 copies get blocked by AOL even though they were legitimate emails. So, testing is your friend and look into this article on wikipedia on anti-spam techniques. Then make sure your not doing that crap.\n**\nAs far as processing the messages, just remember it's a queued system. Connect to the server via POP3 to retrieve messages, open it, do some action, delete the message or archive it, and move on.\nWith regards to bouncebacks, let the mail server do most of the work. You should be able to configure it to notify a certain email account on the server in the event that it is unable to deliver a message. You can check that account periodically and process the Non Delivery Reports as necessary.\n"
] |
[
4,
2
] |
[] |
[] |
[
"email",
"python"
] |
stackoverflow_0000079602_email_python.txt
|
Q:
Mapping internal data elements to external vendors' XML schema
I'm considering Altova MapForce (or something similar) to produce either XSLT and/or a Java or C# class to do the translation. Today, we pull data right out of the database and manually build an XML string that we post to a webservice.
Should it be db -> (internal)XML -> XSLT -> (External)XML? What do you folks do out there in the wide world?
A:
I would use one of the out-of-the-box XML serialization classes to do your internal XML generation, and then use XSLT to transform to the external XML. You might generate a schema as well to enforce that the translation code (whatever will drive your XSLT translation) continues to get the XML it is expecting for translation, in case a change to the object breaks things.
There are a number of XSLT editors on the market that will help you do the mappings, but I prefer to just use a regular XML editor.
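A minimal sketch of that two-step pipeline (the Order type, its values, and the file names, including the "vendor.xslt" mapping stylesheet, are all assumptions for illustration):
using System.IO;
using System.Xml.Serialization;
using System.Xml.Xsl;

public class Order                  // stand-in for your internal data shape
{
    public int Id;
    public string Customer;
}

class Pipeline
{
    static void Main()
    {
        Order order = new Order();
        order.Id = 42;
        order.Customer = "ACME";

        // Step 1: db object -> internal XML via the built-in serializer.
        using (StreamWriter w = new StreamWriter("internal.xml"))
            new XmlSerializer(typeof(Order)).Serialize(w, order);

        // Step 2: internal XML -> external (vendor) XML via the XSLT mapping.
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("vendor.xslt");
        xslt.Transform("internal.xml", "external.xml");
    }
}

Keeping the vendor mapping in the stylesheet means a schema change on their side touches only the XSLT, not your serialization code.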
A:
ya, I think you're heading down the right path with MapForce. If you don't want to write code to perform the actual transformation, MapForce can do that for you also. This may be better long term b/c it's less code to maintain.
Steer clear of more expensive options (e.g. BizTalk) unless you really need B2B integration and orchestration.
A:
What database are you using? Oracle has some nice XML mapping tools. There are some Java binding tools (one is http://java.sun.com/developer/technicalArticles/WebServices/jaxb). However, if you have the luxury, consider using Ruby, which has nice built-in "to_xml" methods.
A:
Tip #1: Avoid all use of XSLT.
The tool support sucks. The resulting solution will be unmaintainable.
Tip #2: Eliminate all unnecessary steps.
Just translate your resultset (assuming you're using JDBC or equiv) to the outbound XML.
Tip #3: Assume all use of a schema-based tool to be incorrect and plan accordingly.
In other words, just fake it. If you have to squirt out some mutant SOAP (redundant, I know) payload just mock up a working SOAP message and then turn it into a template. Velocity doesn't suck.
That said, the best/correct answer, is to use an "XML Writer" style solution. There's a few.
The best is the one I wrote, LOX (Lightweight Objects for XML).
The public API uses a Builder design pattern. Due to some magic under the hood, it's impossible to create malformed XML.
Please note: If XML is the answer, you've asked the wrong question. Sometimes, we're forced against our will to use it in some way. When that happens, it's crucial to use tools which minimize developer effort and improve code maintainability.
|
Mapping internal data elements to external vendors' XML schema
|
I'm considering Altova MapForce (or something similar) to produce either XSLT and/or a Java or C# class to do the translation. Today, we pull data right out of the database and manually build an XML string that we post to a webservice.
Should it be db -> (internal)XML -> XSLT -> (External)XML? What do you folks do out there in the wide world?
|
[
"I would use one of the out-of-the-box XML serialization classes to do your internal XML generation, and then use XSLT to transform to the external XML. You might generate a schema as well to enforce that the translation code (whatever will drive your XSLT translation) continues to get the XML it is expecting for translation in case of changes to the object breaks things.\nThere are a number of XSLT editors on the market that will help you do the mappings, but I prefer to just use a regular XML editor.\n",
"ya, I think you're heading down the right path with MapForce. If you don't want to write code to preform the actual transformation, MapForce can do that for you also. THis may be better long term b/c it's less code to maintain.\nSteer clear of more expensive options (e.g. BizTalk) unless you really need to B2B integration and orchestration.\n",
"What database are you using? Oracle has some nice XML mapping tools. There are some Java binding tools (one is http://java.sun.com/developer/technicalArticles/WebServices/jaxb). However, if you have the luxory consider using Ruby which has nice built-in \"to_xml\" methods.\n",
"Tip #1: Avoid all use of XSLT. \nThe tool support sucks. The resulting solution will be unmaintainable.\nTip #2: Eliminate all unnecessary steps. \nJust translate your resultset (assuming you're using JDBC or equiv) to the outbound XML.\nTip #3: Assume all use of a schema-based tool to be incorrect and plan accordingly. \nIn other words, just fake it. If you have to squirt out some mutant SOAP (redundant, I know) payload just mock up a working SOAP message and then turn it into a template. Velocity doesn't suck. \nThat said, the best/correct answer, is to use an \"XML Writer\" style solution. There's a few. \nThe best is the one I wrote, LOX (Lightweight Objects for XML).\nThe public API uses a Builder design pattern. Due to some magic under the hood, it's impossible to create malformed XML.\nPlease note: If XML is the answer, you've asked the wrong question. Sometimes, we're forced against our will to use it in some way. When that happens, it's crucial to use tools which minimize developer effort and improve code maintainability.\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
"c#",
"coldfusion",
"mapping",
"xml",
"xslt"
] |
stackoverflow_0000079594_c#_coldfusion_mapping_xml_xslt.txt
|
Q:
Assistance porting commctrl commands to C#
In a C++ app, I have an hWnd pointing to a window running in a third party process. This window contains controls which extend the COM TreeView control. I am interested in obtaining the CheckState of this control.
I use the hWnd to get an HTREEITEM using TreeView_GetRoot(hwnd) from commctrl.h
hwnd points to the window and hItem is the return value from TreeView_GetRoot(hwnd). They are used as follows:
int iCheckState = TreeView_GetCheckState(hwnd, hItem);
switch (iCheckState)
{
case 0:
// (unchecked)
case 1:
// checked
...
}
I'm looking to port this code into a C# app which does the same thing (switches off the CheckState of the TreeView control). I have never used COM and am quite unfamiliar.
I have tried using the .NET mscomctl but can't find equivalent methods to TreeView_GetRoot or TreeView_GetCheckState. I'm totally stuck and don't know how to recreate this code in C# :(
Suggestions?
A:
We have these definitions from CommCtrl.h:
#define TreeView_SetItemState(hwndTV, hti, data, _mask) \
{ TVITEM _ms_TVi;\
_ms_TVi.mask = TVIF_STATE; \
_ms_TVi.hItem = (hti); \
_ms_TVi.stateMask = (_mask);\
_ms_TVi.state = (data);\
SNDMSG((hwndTV), TVM_SETITEM, 0, (LPARAM)(TV_ITEM *)&_ms_TVi);\
}
#define TreeView_SetCheckState(hwndTV, hti, fCheck) \
TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck)?2:1), TVIS_STATEIMAGEMASK)
We can translate this to C# using PInvoke. First, we implement these macros as functions, and then add whatever
other support is needed to make those functions work. Here is my first shot at it. You should double check my
code especially when it comes to the marshalling of the struct. Further, you may want to Post the message cross-thread
instead of calling SendMessage.
Lastly, I am not sure if this will work at all since I believe that the common
controls use WM_USER+ messages. When these messages are sent cross-process, the data parameter's addresses
are unmodified and the resulting process may cause an Access Violation. This would be a problem in whatever
language you use (C++ or C#), so perhaps I am wrong here (you say you have a working C++ program).
static class Interop {
public static IntPtr TreeView_SetCheckState(HandleRef hwndTV, IntPtr hti, bool fCheck) {
return TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck) ? 2 : 1), (uint)TVIS.TVIS_STATEIMAGEMASK);
}
public static IntPtr TreeView_SetItemState(HandleRef hwndTV, IntPtr hti, uint data, uint _mask) {
TVITEM _ms_TVi = new TVITEM();
_ms_TVi.mask = (uint)TVIF.TVIF_STATE;
_ms_TVi.hItem = (hti);
_ms_TVi.stateMask = (_mask);
_ms_TVi.state = (data);
IntPtr p = Marshal.AllocCoTaskMem(Marshal.SizeOf(_ms_TVi));
Marshal.StructureToPtr(_ms_TVi, p, false);
IntPtr r = SendMessage(hwndTV, (int)TVM.TVM_SETITEMW, IntPtr.Zero, p);
Marshal.FreeCoTaskMem(p);
return r;
}
private static uint INDEXTOSTATEIMAGEMASK(int i) { return ((uint)(i) << 12); }
[DllImport("user32.dll", CharSet = CharSet.Auto)]
private static extern IntPtr SendMessage(HandleRef hWnd, int msg, IntPtr wParam, IntPtr lParam);
private enum TVIF : uint {
TVIF_STATE = 0x0008
}
private enum TVIS : uint {
TVIS_STATEIMAGEMASK = 0xF000
}
private enum TVM : int {
TV_FIRST = 0x1100,
TVM_SETITEMA = (TV_FIRST + 13),
TVM_SETITEMW = (TV_FIRST + 63)
}
private struct TVITEM {
public uint mask;
public IntPtr hItem;
public uint state;
public uint stateMask;
public IntPtr pszText;
public int cchTextMax;
public int iImage;
public int iSelectedImage;
public int cChildren;
public IntPtr lParam;
}
}
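For the reading side the question asks about, here is a hedged sketch of TreeView_GetRoot built the same way; it assumes it lives inside the Interop class above so it can reuse the SendMessage declaration. TVM_GETNEXTITEM with TVGN_ROOT passes no struct, so the cross-process address caveat does not apply to this particular message:
// Mirrors the TreeView_GetRoot macro from CommCtrl.h: returns the
// HTREEITEM of the first root item, or IntPtr.Zero if the tree is empty.
private const int TVM_GETNEXTITEM = 0x1100 + 10;   // TV_FIRST + 10
private const int TVGN_ROOT = 0x0000;

public static IntPtr TreeView_GetRoot(HandleRef hwndTV) {
    return SendMessage(hwndTV, TVM_GETNEXTITEM, (IntPtr)TVGN_ROOT, IntPtr.Zero);
}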
A:
Why are you not using a Windows Forms TreeView control? If you are using this control, set the control's CheckBoxes property to true to enable check boxes, and set the Checked property on the nodes you want to display checked.
To get the collection of root nodes, use the TreeView's Nodes property. This returns a TreeNodeCollection which you can then index or add items to.
|
Assistance porting commctrl commands to C#
|
In a C++ app, I have an hWnd pointing to a window running in a third party process. This window contains controls which extend the COM TreeView control. I am interested in obtaining the CheckState of this control.
I use the hWnd to get an HTREEITEM using TreeView_GetRoot(hwnd) from commctrl.h
hwnd points to the window and hItem is the return value from TreeView_GetRoot(hwnd). They are used as follows:
int iCheckState = TreeView_GetCheckState(hwnd, hItem);
switch (iCheckState)
{
case 0:
// (unchecked)
case 1:
// checked
...
}
I'm looking to port this code into a C# app which does the same thing (switches off the CheckState of the TreeView control). I have never used COM and am quite unfamiliar.
I have tried using the .NET mscomctl but can't find equivalent methods to TreeView_GetRoot or TreeView_GetCheckState. I'm totally stuck and don't know how to recreate this code in C# :(
Suggestions?
|
[
"We have these definitions from CommCtrl.h:\n#define TreeView_SetItemState(hwndTV, hti, data, _mask) \\\n{ TVITEM _ms_TVi;\\\n _ms_TVi.mask = TVIF_STATE; \\\n _ms_TVi.hItem = (hti); \\\n _ms_TVi.stateMask = (_mask);\\\n _ms_TVi.state = (data);\\\n SNDMSG((hwndTV), TVM_SETITEM, 0, (LPARAM)(TV_ITEM *)&_ms_TVi);\\\n}\n\n#define TreeView_SetCheckState(hwndTV, hti, fCheck) \\\n TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck)?2:1), TVIS_STATEIMAGEMASK)\n\nWe can translate this to C# using PInvoke. First, we implement these macros as functions, and then add whatever\nother support is needed to make those functions work. Here is my first shot at it. You should double check my\ncode especially when it comes to the marshalling of the struct. Further, you may want to Post the message cross-thread\ninstead of calling SendMessage.\nLastly, I am not sure if this will work at all since I believe that the common\ncontrols use WM_USER+ messages. When these messages are sent cross-process, the data parameter's addresses\nare unmodified and the resulting process may cause an Access Violation. This would be a problem in whatever\nlanguage you use (C++ or C#), so perhaps I am wrong here (you say you have a working C++ program).\nstatic class Interop {\n\npublic static IntPtr TreeView_SetCheckState(HandleRef hwndTV, IntPtr hti, bool fCheck) {\n return TreeView_SetItemState(hwndTV, hti, INDEXTOSTATEIMAGEMASK((fCheck) ? 2 : 1), (uint)TVIS.TVIS_STATEIMAGEMASK);\n}\n\npublic static IntPtr TreeView_SetItemState(HandleRef hwndTV, IntPtr hti, uint data, uint _mask) {\n TVITEM _ms_TVi = new TVITEM();\n _ms_TVi.mask = (uint)TVIF.TVIF_STATE;\n _ms_TVi.hItem = (hti);\n _ms_TVi.stateMask = (_mask);\n _ms_TVi.state = (data);\n IntPtr p = Marshal.AllocCoTaskMem(Marshal.SizeOf(_ms_TVi));\n Marshal.StructureToPtr(_ms_TVi, p, false);\n IntPtr r = SendMessage(hwndTV, (int)TVM.TVM_SETITEMW, IntPtr.Zero, p);\n Marshal.FreeCoTaskMem(p);\n return r;\n}\n\nprivate static uint INDEXTOSTATEIMAGEMASK(int i) { return ((uint)(i) << 12); }\n\n[DllImport(\"user32.dll\", CharSet = CharSet.Auto)]\nprivate static extern IntPtr SendMessage(HandleRef hWnd, int msg, IntPtr wParam, IntPtr lParam);\n\nprivate enum TVIF : uint {\n TVIF_STATE = 0x0008\n}\n\nprivate enum TVIS : uint {\n TVIS_STATEIMAGEMASK = 0xF000\n}\n\nprivate enum TVM : int {\n TV_FIRST = 0x1100,\n TVM_SETITEMA = (TV_FIRST + 13),\n TVM_SETITEMW = (TV_FIRST + 63)\n}\n\nprivate struct TVITEM {\n public uint mask;\n public IntPtr hItem;\n public uint state;\n public uint stateMask;\n public IntPtr pszText;\n public int cchTextMax;\n public int iImage;\n public int iSelectedImage;\n public int cChildren;\n public IntPtr lParam;\n}\n}\n\n",
"Why are you not using a Windows Forms TreeView control? If you are using this control, set the control's CheckBoxes property to true to enable check boxes, and set the Checked property on the nodes you want to display checked.\nTo get the collection of root nodes, use the TreeView's Nodes property. This returns a TreeNodeCollection which you can then index or add items to.\n"
] |
[
2,
1
] |
[] |
[] |
[
"c#",
"com",
"treeview"
] |
stackoverflow_0000078161_c#_com_treeview.txt
|
Q:
Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
This is re-posted from something I posted on the DDD Yahoo! group.
All things being equal, do you write phone.dial(phoneNumber) or phoneNumber.dialOn(phone)? Keep in mind possible future requirements (account numbers in addition to phone numbers, calculators in addition to phones).
The choice tends to illustrate how the idioms of Information Expert, Single Responsibility Principle, and Tell Don't Ask are at odds with each other.
phoneNumber.dialOn(phone) favors Information Expert and Tell Don't Ask, while phone.dial(phoneNumber) favors Single Responsibility Principle.
If you are familiar with Ken Pugh's work in Prefactoring, this is the Spreadsheet Conundrum; do you add rows or columns?
A:
phone.dial(), because it's the phone that does the dialing.
Actor.Verb( inputs ) -> outputs.
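A minimal sketch of that shape (Phone and PhoneNumber here are illustrative types, not an existing API):
public class PhoneNumber
{
    public readonly string Digits;
    public PhoneNumber(string digits) { Digits = digits; }
}

public class Phone
{
    // The actor (Phone) owns the verb (Dial); the number is just input data.
    public void Dial(PhoneNumber number)
    {
        foreach (char digit in number.Digits)
            Press(digit);
    }

    private void Press(char digit)
    {
        // send a DTMF tone, update the display, etc.
    }
}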
A:
Meh - User.Dial(number). The phone is meaningless in the given context. SOL (speak out loud) is a nice way to think this through (idioms and principles aside):
Phones have a dial. They can't dial themselves.
Phone numbers are digits.
Users dial PhoneNumbers on a Phone Dial.
A:
The question assumes the context of the answer, and thus creates a false dilemma.
The 'spreadsheet conundrum' is a false dichotomy in this example: rows and columns are the presentation layer, not necessarily the data layer. The comments below tell me I misunderstood the analogy, but I don't think so - saying 'should this be a row or a column, which one is more likely to change' is forcing an unnecessary choice on the problem space - they are both equally likely to change. And in this specific example, this leads to choosing the wrong [yes wrong] paradigm for the solution. Dialing a phone is how old mechanical devices initiated a connection to another old mechanical device; this is hardly an apt analogy for modern telephony. And assuming that there is a 'user' to initiate the call simply moves the problem - although it moves it in the correct direction, i.e. away from the rotary-phone model ;-)
If you look at how the TAPI [sorry about the typo earlier, it's TAPI not ATAPI!] protocol works, there is a call controller - equivalent to the 'user' I suppose in some sense - that manages the connections between devices. One device does not call another; the call controller connects devices. So the example below is still essentially correct. It might be more correct to use a CallController object instead of a generic Connection, but the analogy should be clear enough as is.
In this example, a phone is a device with an address aka a 'phone number'. The 'dial' operator establishes a connection between the two devices. So the answer is:
Phone p1 = new Phone(phoneNumber1);
Phone p2 = new Phone(phoneNumber2);
Connection conn = new Connection(p1,p2);
conn.Open();
//...talk
conn.Close();
this will support multi-party calls as well, by overloading Connection to include a list of devices or other connections, e.g.
Connection confCall = new Connection(p1,p2,p3,p4,p5,p6);
confCall.Open();
Connection joinCall = new Connection(confCall,p7,p8,conn);
joinCall.Open();
look at the TAPI protocol for more examples
A:
If you're writing OO, then you start with the basic object, which is not the number; the number is going INTO the phone. So phone.dial(), and that way you can also have phone.answer(), phone.disconnect(), phone.powerOff(), etc. Another way to look at it: does the phone dial the number, or does the number dial the phone?
A:
Clearly, phone.Dial(number)
A:
Clearly the PhoneUserInterface interface, which you can get an implementation of from the PhoneUserFactory.CreatePhoneUser() method, has a method dial(Phone, Number) that you can use to dial the phone.
EDIT: Answering the comment. Neither. The phone should have a buttonPressed() or something like that. The user enters the digits/characters of the phone number via that interface.
A:
Neither. The User dials a Phone Number on a Phone.
A:
A: phone.dial(phone_number)
The PhoneNumber is dumb and is only a dataset. When the "dialling" happens, should the PhoneNumber object know how to dial? There are many states to keep track of, like:
Is the phone already on another call? (if yes/no, what to do?)
What happens if the method of dialling changes? (global roaming, different carrier, etc.)
Also, what about scope? When a call is made, the phone number needs to be added to the list of recent outgoing calls.
If your PhoneNumber object needs to know all this, it's not DRY and your code will be less
portable and more likely to break.
I would say that Steven A. Lowe has it down. This should be done by a Controller type object to handle the different states, etc. Keep your PhoneNumber object dumb and give the smarts to the middle-man who needs to worry about keeping the phone humming along.
A:
Choosing whether to give the column objects or the row objects the dial method doesn't change how the program will scale.
The dial method is just going to be itself a sequence of row and column methods. You have to ask what those methods depend on.
If the sequence of row methods doesn't depend on knowing exactly which column object is involved (but does depend on which particular row object is involved) and vice versa for the sequence of column methods, then the problem scales as m + n (m = num. rows, n = num. cols). When you create a new row it doesn't actually save you any work had the column method been assigned the 'dial' method. You still have to specify a unique sequence of row methods for use in 'dial' somewhere!
If, however, say the sequence of column methods inside 'dial' doesn't even depend on which column object is involved (they use one 'generic' sequence of column methods), then the problem just scales as m. It doesn't actually matter if you've assigned the 'dial' method to the column objects, the program still scales as m; essentially no work is required to make a new dial method when adding 1 more column object and you clearly have the option of abstracting all those dial methods themselves into one generic dial method.
A:
Not to be the negative one here, but these kinds of questions are very academic. It completely depends on the application. I can think of very good reasons for doing it either way, and I've seen too many good programmers get bogged down in this kind of moot design detail.
A:
I'm not sure how that relates to the spreadsheet conundrum. Do you expect, in the future, to use phones to dial account numbers? To use phone numbers on calculators? Your example of "future requirements preparedness" is not very good...
Plus, you use the verb "dial". Sure, I could imagine "dialing" an account number on a phone. (It's a big stretch, though.) But if this phone number is to be used on a calculator, would you call the action "dialing"? If the name of the function changes depending on the type of parameter it gets passed, you have a design error.
In a typical OO design, objects get sent messages carrying data, not the other way around.
A:
phone.dial() +1.
What is the variant state or behavior of a PhoneNumber? The only thing that comes to mind are "dialing rules" (dial country code if outside, dial "9" to get outside line, etc.). That context seems well suited to the Phone.
If your object model doesn't require variance -- a number is just a sequence of digits, "dial" is just foreach(digit in phonenumber) { press(digit); } I'm with Rob Conery: meh.
A:
I wouldn't have phone number as a class at all as it does not have any behavior, it's just a data element.
|
Design question: does the Phone dial the PhoneNumber, or does the PhoneNumber dial itself on the Phone?
|
This is re-posted from something I posted on the DDD Yahoo! group.
All things being equal, do you write phone.dial(phoneNumber) or phoneNumber.dialOn(phone)? Keep in mind possible future requirements (account numbers in addition to phone numbers, calculators in addition to phones).
The choice tends to illustrate how the idioms of Information Expert, Single Responsibility Principle, and Tell Don't Ask are at odds with each other.
phoneNumber.dialOn(phone) favors Information Expert and Tell Don't Ask, while phone.dial(phoneNumber) favors Single Responsibility Principle.
If you are familiar with Ken Pugh's work in Prefactoring, this is the Spreadsheet Conundrum; do you add rows or columns?
|
[
"phone.dial(), because it's the phone that does the dialing.\nActor.Verb( inputs ) -> outputs.\n",
"Meh - User.Dial(number). The phone is meaningless in the given context. SOL (speak out loud) is a nice way to think this through (idioms and principles aside):\nPhones have a dial. They can't dial themselves.\nPhone numbers are digits.\nUsers dial PhoneNumbers on a Phone Dial.\n",
"the question assumes the context of the answer, and thus creates a false dilemma\nthe 'spreadsheet conundrum' is a false dichotomy in this example: rows and columns are the presentation layer, not necessarily the data layer. The comments below tell me i misunderstood the analogy, but i don't think so - saying 'should this be a row or a column, which one is more likely to change' is forcing an unnecessary choice on the problem space - they are both equally likely to change. And in this specific example, this leads to choosing the wrong [yes wrong] paradigm for the solution. Dialing a phone is how old mechanical devices initiated a connection to another old mechanical device; this is hardly an apt analogy for modern telephony. And assuming that there is a 'user' to initiate the call simply moves the problem - although it moves it in the correct direction, i.e. away from the rotary-phone model ;-)\nIf you look at how the TAPI [sorry about the typo earlier, it's TAPI not ATAPI!] protocol works, there is a call controller - equivalent to the 'user' i suppose in some sense - that manages the connections between devices. One device does not call another, the call controller connects devices. So the example below is still essentially correct. It might be more correct to use a CallController object instead of a generic Connection, but the analogy should be clear enough as is.\nIn this example, a phone is a device with an address aka a 'phone number'. The 'dial' operator establishes a connection between the two devices. So the answer is:\nPhone p1 = new Phone(phoneNumber1);\nPhone p2 = new Phone(phoneNumber2);\nConnection conn = new Connection(p1,p2);\nconn.Open();\n//...talk\nconn.Close();\n\nthis will support multi-party calls as well, by overloading Connection to include a list of devices or other connections, e.g.\nConnection confCall = new Connection(p1,p2,p3,p4,p5,p6);\nconfCall.Open();\n\nConnection joinCall = new Connection(confCall,p7,p8,conn);\njoinCall.Open();\n\nlook at the TAPI protocol for more examples\n",
"If your writing OO then you start with the basic object, which is not the number, the number is going INTO the phone, so phone.dial() that way you can also phone.answer() phone.disconnect() phone.powerOFF, ect.Another way to look at it is does the phone dial the number or does the number dial the phone?\n",
"Clearly, phone.Dial(number)\n",
"Clearly the PhoneUserInterface interface, which you can get an implementation of from the PhoneUserFactory.CreatePhoneUser() method, has a method dial(Phone, Number) that you can use to dial the phone.\nEDIT: Answering the comment. Neither. The phone should have a buttonPressed() or something like that. The user enters the digits/characters of the phone number via that interface.\n",
"Neither. The User dials a Phone Number on a Phone.\n",
"A: phone.dial(phone_number)\nThe PhoneNumber is dumb and is only a dataset. When the \"dialling\" happens, should the the PhoneNumber object know how to dial? There are many states to keep track of, like:\n\nIs the phone already on another call? (if yes/no, what to do?)\nWhat happens if the method of dialling changes? (global roaming, different carrier, etc.)\nAlso, what about scope? When a call is made, the phone number needs to be added to the list of recent outgoing calls.\n\nIf your PhoneNumber object needs to know all this, it's not DRY and your code will be less\nportable and more likely to break.\nI would say that Steven A. Lowe has it down. This should be done by a Controller type object to handle the different states, etc. Keep your PhoneNumber object dumb and give the smarts to the middle-man who needs to worry about keeping the phone humming along.\n",
"Choosing whether to give the column objects or the row objects the dial method doesn't change how the program will scale.\nThe dial method is just going to be itself a sequence of row and column methods. You have to ask what those methods depend on.\nIf the sequence of row methods doesn't depend on knowing exactly which column object is involved (but does depend on which particular row object is involved) and vice versa for the sequence of column methods, then the problem scales as m + n (m = num. rows, n = num. cols). When you create a new row it doesn't actually save you any work had the column method been assigned the 'dial' method. You still have to specify a unique sequence of row methods for use in 'dial' somewhere! \nIf, however, say the sequence of column methods inside 'dial' doesn't even depend on which column object is involved (they use one 'generic' sequence of column methods), then the problem just scales as m. It doesn't actually matter if you've assigned the 'dial' method to the column objects, the program still scales as m; essentially no work is required to make a new dial method when adding 1 more column object and you clearly have the option of abstracting all those dial methods themselves into one generic dial method.\n",
"Not to be the negative one here, but these kinds of questions are very academic. It completely depends on the application. I can think of very good reasons for doing it either way, and I've seen too many good programmers get bogged down in this kind of moot design details.\n",
"I'm not sure how that relates to the spreadsheet conundrum. Do you expect, in the future, to use phones to dial account numbers? To use phone numbers on calculators? Your example of \"future requirements preparedness\" is not very good...\nPlus, you use the verb \"dial\". Sure, I could imagine \"dialing\" an account number on a phone. (It's a big stretch, though.) But if this phone number is to be used on a calculator, would you call the action \"dialing\"? If the name of the function changes depending on the type of parameter it gets passed, you have a design error.\nIn a typical OO design, objects get sent messages carrying data, not the other way around. \n",
"phone.dial() +1. \nWhat is the variant state or behavior of a PhoneNumber? The only thing that comes to mind are \"dialing rules\" (dial country code if outside, dial \"9\" to get outside line, etc.). That context seems well suited to the Phone. \nIf your object model doesn't require variance -- a number is just a sequence of digits, \"dial\" is just foreach(digit in phonenumber) { press(digit); } I'm with Rob Conery: meh. \n",
"I wouldn't have phone number as a class at all as it does not have any behavior, it's just a data element. \n"
] |
[
20,
10,
3,
1,
1,
1,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"oop"
] |
stackoverflow_0000068617_oop.txt
|
Q:
Screen scraping a command window using .net managed code
I am writing a program in dot net that will execute scripts and command line programs using the framework 2.0's Process object. I want to be able to access the screen buffers of the process in my program. I've investigated this and it appears that I need to access console stdout and stderr buffers. Anyone know how this is accomplished using managed code?
I think I need to use the AttachConsole and the ReadConsoleOutput of the Windows console attached to the task in order to read a block of character and attribute data from the console screen. I need to do this in managed code.
See http://msdn.microsoft.com/en-us/library/ms684965(VS.85).aspx
A:
You can accomplish this using the StandardError, StandardOutput, and StandardInput properties on the System.Diagnostics.Process class.
MSDN has a nice example of redirecting standard in and out of a process.
Note that you can only redirect the output of processes that you started. External processes that you didn't launch can't have their stdout redirected after the fact.
Also note that to use StandardInput, you must set ProcessStartInfo.UseShellExecute to false, and you must set ProcessStartInfo.RedirectStandardInput to true. Otherwise, writing to the StandardInput stream throws an exception.
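For illustration, a minimal C# sketch of that setup - "cmd.exe /c dir" is just a placeholder for whatever script or program you launch:
using System;
using System.Diagnostics;

class CaptureOutput
{
    static void Main()
    {
        ProcessStartInfo psi = new ProcessStartInfo("cmd.exe", "/c dir");
        psi.UseShellExecute = false;          // required before redirecting any stream
        psi.RedirectStandardOutput = true;

        Process p = Process.Start(psi);
        // Read before WaitForExit so a full output buffer can't deadlock the child.
        string output = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
        Console.WriteLine(output);
    }
}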
|
Screen scraping a command window using .net managed code
|
I am writing a program in dot net that will execute scripts and command line programs using the framework 2.0's Process object. I want to be able to access the screen buffers of the process in my program. I've investigated this and it appears that I need to access console stdout and stderr buffers. Anyone know how this is accomplished using managed code?
I think I need to use the AttachConsole and the ReadConsoleOutput of the Windows console attached to the task in order to read a block of character and attribute data from the console screen. I need to do this in managed code.
See http://msdn.microsoft.com/en-us/library/ms684965(VS.85).aspx
|
[
"You can accomplish this using the StandardError, StandardOutput, and StandardInput properties on the System.Diagnostics.Process class.\nMSDN has a nice example of redirecting standard in and out of a process.\nNote that you can only redirect the output of processes that you started. External processes that you didn't launch can't have their stdout redirected after the fact.\nAlso note that to use StandardInput, you must set ProcessStartInfo.UseShellExecute to false, and you must set ProcessStartInfo.RedirectStandardInput to true. Otherwise, writing to the StandardInput stream throws an exception. \n"
] |
[
2
] |
[] |
[] |
[
".net",
"buffer",
"console",
"console_scraping",
"screen_scraping"
] |
stackoverflow_0000079787_.net_buffer_console_console_scraping_screen_scraping.txt
|
Q:
What is the difference between file modification time and file changed time?
I am confused about the terms file modification time and file changed time. Can anyone help make this clearer?
A:
mtime is modification time - contents have changed.
ctime is status change time - perms and ownership as well as contents.
Wikipedia says:
* mtime: time of last modification (ls -l),
* ctime: time of last status change (ls -lc) and
* atime: time of last access (ls -lu).
Note that ctime is not the time of file creation. Writing to a file changes its mtime, ctime, and atime. A change in file permissions or file ownership changes its ctime and atime. Reading a file changes its atime. File systems mounted with the noatime option do not update the atime on reads, and the relatime option provides for updates only if the previous atime is older than the mtime or ctime. Unlike atime and mtime, ctime cannot be set with utime() (as used e.g. by touch); the only way to set it to an arbitrary value is by changing the system clock.
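If it helps to see the three timestamps side by side, here is a quick sketch using Python's os.stat ("example.txt" is a placeholder for any existing file):
import os

st = os.stat("example.txt")
print("mtime:", st.st_mtime)  # last modification of the contents
print("ctime:", st.st_ctime)  # last status change on Unix (creation time on Windows)
print("atime:", st.st_atime)  # last access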
|
What is the difference between file modification time and file changed time?
|
I am confused about the terms file modification time and file changed time. Can anyone help make this clearer?
|
[
"mtime is modification time - contents have changed.\nctime is status change time - perms and ownership as well as contents.\nWikipedia says:\n\n* mtime: time of last modification (ls -l),\n* ctime: time of last status change (ls -lc) and\n* atime: time of last access (ls -lu).\n\nNote that ctime is not the time of\nfile creation. Writing to a file\nchanges its mtime, ctime, and atime. A\nchange in file permissions or file\nownership changes its ctime and atime.\nReading a file changes its atime. File\nsystems mounted with the noatime\noption do not update the atime on\nreads, and the relatime option\nprovides for updates only if the\nprevious atime is older than the mtime\nor ctime. Unlike atime and mtime,\nctime cannot be set with utime() (as\nused e.g. by touch); the only way to\nset it to an arbitrary value is by\nchanging the system clock.\n\n"
] |
[
31
] |
[] |
[] |
[
"filesystems",
"operating_system"
] |
stackoverflow_0000079809_filesystems_operating_system.txt
|
Q:
Creating SharePoint 2007 list items via the Web Dav interface
SharePoint 2007 (both Moss and Wss) exposes document libraries via web dav, allowing you to create documents via essentially file system level activities (e.g. saving documents to a location).
SharePoint also seems to expose lists via the same web dav interface, as directories but they are usually empty. Is it possible to create or manipulate a list item somehow via this exposure?
A:
In short: No.
Longer answer: Kinda. Any item stored in SharePoint is in a list, including files. But not all lists have files. A document library is a list with each element being a file+metadata. Other lists (like announcements) are just metadata. Only lists that contain files are exposed via WebDAV, and even then you are limited to mucking around with the file - there is no way to use WebDAV (afaik) to edit the metadata.
Hope this helps.
Oisin.
A:
Agreed. The only thing exposed to WebDAV is a list item's attachment (or a library's documents). Even if you bring up a file's properties in Explorer, there are no options for list data.
If you're working with Office 2007 documents, you can create a document information panel that can be tied into SharePoint.
A:
No, but in my experience most things looking to speak WebDAV to something are pretty much expecting to work with files or documents of some sort. Since non-library lists in SharePoint don't really have an associated file (yeah, they can have attachments, but that's not the same), then effectively the primary construct WebDAV is built around (document) is missing. What would you be Authoring and Versioning?
If you are writing your own client, there are robust web services for interacting with lists (both the library and non-library varieties)
|
Creating SharePoint 2007 list items via the Web Dav interface
|
SharePoint 2007 (both Moss and Wss) exposes document libraries via web dav, allowing you to create documents via essentially file system level activities (e.g. saving documents to a location).
SharePoint also seems to expose lists via the same web dav interface, as directories but they are usually empty. Is it possible to create or manipulate a list item somehow via this exposure?
|
[
"In short: No. \nLonger answer: Kinda. Any item stored in sharepoint is in a list, including files. But not all lists have files. A document library is a list with each element being a file+metadata. Other lists (like announcments) are just metadata. Only lists that contain files are exposed via webdav, and even then you are limited to mucking around with the file - there is no way to use webdav (afaik) to edit the metadata.\nHope this helps.\nOisin.\n",
"Agreed. The only thing exposed to webdav is a list item's attachment (or a library's documents). Even if you bring up a file's properties in explorer, there's no options for list data.\nIf you're working with Office 2007 documents, you can create a document information panel that can be tied into sharepoint.\n",
"No, but in my experience most things looking to speak WebDAV to something are pretty much expecting to work with files or documents of some sort. Since non-library lists in SharePoint don't really have an associated file (yeah, they can have attachments, but that's not the same), then effectively the primary construct WebDAV is built around (document) is missing. What would you be Authoring and Versioning?\nIf you are writing your own client, there are robust web services for interacting with lists (both the library and non-library varieties)\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"sharepoint"
] |
stackoverflow_0000072773_sharepoint.txt
|
Q:
Best win32 compiled scripting language?
What is the best compilable scripting language for Win32? I prefer .EXE's because I don't want to install the runtime on the servers first (my company administers many remotely), but I need to be able to do things like set NTFS permissions and (if possible) call APIs over the network.
There was a small Perl which appeared to be able to do most of this, but it does not seem to have been updated/developed in quite a while. I have wondered about Lua, but I don't know if it has everything I need yet (and don't want to hunt through fifty library sites trying to find out). Any thoughts?
A:
Have you considered using an EXE maker? For example, you can code in Python and use py2exe to create a standalone EXE that runs anywhere (it actually packages Python into the exe, so you don't have to install the runtime).
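For reference, a minimal py2exe setup.py sketch ("myscript.py" is a placeholder for your own script); run "python setup.py py2exe" and the standalone .exe lands in the dist folder:
# setup.py - minimal py2exe sketch
from distutils.core import setup
import py2exe  # registers the "py2exe" distutils command

setup(console=['myscript.py'])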
A:
Ruby is my scripting language of choice.
Try RubyScript2Exe.
A:
A scripting language is, almost by definition, not compiled into a standalone executable. So maybe you need to restate your intentions or give some indication about what kind of program you want to create.
C# is a powerful language that compiles to .EXE and allows you to interface with pretty much anything (through native p/invoke calls, if necessary). A basic but very usable Visual Studio for C# can be downloaded for free from the Microsoft website. The .NET runtime is installed on most systems nowadays.
A:
Did you consider AutoIt?
It is a scripting language, and you can quickly transform a script into an exe...
A:
At OSCON 2005, I heard Damian Conway say "the only thing better than Perl is something that works well, even if it's not written in Perl."
It's good advice. Instead of hunting for the best language that can be compiled to an .EXE, worry a lot more about how well you write it. Use whatever works. Just remember that the quality of your programming matters infinitely more than what language you use.
That said, I like py2exe. YMMV. Good luck!
|
Best win32 compiled scripting language?
|
What is the best compilable scripting language for Win32? I prefer .EXE's because I don't want to install the runtime on the servers first (my company administers many remotely), but I need to be able to do things like set NTFS permissions and (if possible) call APIs over the network.
There was a small Perl which appeared to be able to do most of this, but it does not seem to have been updated/developed in quite a while. I have wondered about Lua, but I don't know if it has everything I need yet (and don't want to hunt through fifty library sites trying to find out). Any thoughts?
|
[
"Have you considered using an EXE maker? For example, you can code in Python and use py2exe to create a standalone EXE that runs anywhere (it actually packages Python into the exe, so you don't have to install the runtime).\n",
"Ruby is my scripting language of choice.\nTry RubyScript2Exe.\n",
"A scripting language is, almost by definition, not compiled into a standalone executable. So maybe you need to restate your intentions or give some indication about what kind of program you want to create.\nC# is a powerful language that compiles to .EXE and allows you to interface with pretty much anything (through native p/invoke calls, if necessary). A basic but very usable Visual Studio for C# can be downloaded for free from the Microsoft website. The .NET runtime is installed on most systems nowadays.\n",
"Did you consider AutoIt ?\nIt is a scripting language, and you can quickly transform a script into an exe...\n",
"At OSCON 2005, I heard Damien Conway say \"the only thing better than Perl is something that works well, even if it's not written in Perl.\"\nIt's good advice. Instead of looking for the best language that can be compiled to an .EXE, worry a lot more about writing it in a language that can be compiled to an .EXE. Use whatever works. Just remember that the quality of your programming matters infinitely more than what language you use.\nThat said, I like py2exe. YMMV. Good luck!\n"
] |
[
5,
4,
2,
1,
1
] |
[] |
[] |
[
"compiler_construction",
"scripting",
"windows"
] |
stackoverflow_0000079582_compiler_construction_scripting_windows.txt
|
Q:
How to fix a corrupt Delphi 2009 Install
I installed both the Delphi 2009 trial and actual release via the web installer when I received them and experienced the same errors when installing both.
Both times it appears that the core web installer failed when it went to spawn the additional install packages for boost, documentation and dbtools. (It brought up a findfile dialog asking for a setup.msi that didn't exist on my machine). When cancelling out of this, the installer reported a fatal error.
The uninstaller did not appear in my program list, and would not launch from the installation folder.
Future attempts to bring up the installer had it in a state where it thought Delphi 2009 was already installed and it wouldn't correct or repair or uninstall it.
A:
Step 1
Clean out the registry of all things Delphi 2009.
You're looking for HKLM\Software\Codegear\BDS\6.0 and everything under it. Purge the HKCU equivalent while you're at it.
Search under HKEY_CLASSES_ROOT for anything that contains "CodeGear\RAD Studio\6.0" - assuming you installed into the default folder. Purge all those items from the CLSID level. (A scripted version of this step follows after Step 4.)
Step 2
Clean up Windows Installer using the Microsoft Windows Installer Cleanup utility.
Step 3
I suggest a reboot at this stage.
Step 4
Try to install again.
Good Luck!
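If you'd rather script the Step 1 purge than click through regedit, something like this should cover the HKLM/HKCU keys (assuming a default install; export the keys first as a backup):
reg delete "HKLM\Software\CodeGear\BDS\6.0" /f
reg delete "HKCU\Software\CodeGear\BDS\6.0" /f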
A:
The problems seem to originate with the web installer not having all the files needed.
Download the 2009 ISO: http://cc.codegear.com/item/26049
Mount it using this free tool from Microsoft: http://download.microsoft.com/download/7/b/6/7b6abd84-7841-4978-96f5-bd58df02efa2/winxpvirtualcdcontrolpanel_21.exe (You can burn it to a DVD too.)
Then rerun the installer. At this point, both the repair and uninstall worked.
|
How to fix a corrupt Delphi 2009 Install
|
I installed both the Delphi 2009 trial and actual release via the web installer when I received them and experienced the same errors when installing both.
Both times it appears that the core web installer failed when it went to spawn the additional install packages for boost, documentation and dbtools. (It brought up a findfile dialog asking for a setup.msi that didn't exist on my machine). When cancelling out of this, the installer reported a fatal error.
The uninstaller did not appear in my program list, and would not launch from the installation folder.
Future attempts to bring up the installer had it in a state where it thought Delphi 2009 was already installed and it wouldn't correct or repair or uninstall it.
|
[
"Step 1\nClean out the registry of all things Delphi 2009.\nYou're looking for HKLM\\Software\\Codegear\\BDS\\6.0 and everything under it. Purge the HKCU equivalent while you're at it.\nSearch under HKEY.CLASSES.ROOT for anything that contains \"CodeGear\\RAD Studio\\6.0\" - assuming you installed into the default folder. Purge all those items from the CLSID level.\nStep 2\nClean up Windows Installer using the Microsoft Windows Installer Cleanup utility. \nStep 3\nI suggest a reboot at this stage.\nStep 4\nTry to install again.\nGood Luck!\n",
"The problems seem to originate with the web installer not having all the files needed.\nDownload the 2009 ISO: http://cc.codegear.com/item/26049\nMounted it using this free tool from Microsoft: http://download.microsoft.com/download/7/b/6/7b6abd84-7841-4978-96f5-bd58df02efa2/winxpvirtualcdcontrolpanel_21.exe (You can burn it to a DVD too)\nThen reran the installer. At this point, both the repair and uninstall worked.\n"
] |
[
4,
0
] |
[] |
[] |
[
"delphi",
"delphi_2009"
] |
stackoverflow_0000078263_delphi_delphi_2009.txt
|
Q:
Getting user photo from SPUser using WSS Object model
I am trying to retrieve a SharePoint user's photo through the WSS 3.0 object model. I have been browsing the web for solutions, but so far I've been unable to find a way to do it. Is it possible, and if so how?
A:
Here is a code snippet that should help get the job done for you. You may need to do some additional validation to avoid any exceptions (ensuring the profile actually exists, ensuring the image URL actually exists, etc...):
//get current profile manager
UserProfileManager objUserProfileManager = new UserProfileManager(PortalContext.Current);
//get current users profile
UserProfile profile = objUserProfileManager.GetUserProfile(true);
//get user image URL
string imageUrl = (string)profile[PropertyConstants.PictureUrl];
//do something here with imageUrl
A:
If you are strictly talking about WSS 3.0 (and not MOSS), then you really don't have global user profiles per se, but a hidden User Information List in each site collection. That means none of the stuff in the Microsoft.Office.Server namespaces is available to you.
However, you can update the User Information List programmatically as long as you know the URL to a user's picture. As long as you're running with some kind of elevated privileges, you should be able to manipulate this list just like you can with any other SharePoint list. Keep in mind that this list is only good for the scope of a site collection, so users would have to make this same update all over the place to actually have a photo URL. Plus users don't get into the User Information List until someone assigns some kind of permission to them, so not every user in your domain will be in there.
The clean way to handle this is definitely the User Profile mechanism in MOSS, but if that's an option the question should really be updated to ask about MOSS vs WSS.
A:
Ah, you have to use the UserProfileManager class.
More information here: http://msdn.microsoft.com/en-us/library/microsoft.office.server.userprofiles.userprofilemanager.aspx
Example use:
public override void ItemAdded(SPItemEventProperties properties)
{
    // Get list item on which the event occurred.
    SPListItem item = properties.ListItem;

    // Set the Author Image field to the user's PictureURL if it exists.
    using (SPWeb web = properties.OpenWeb())
    {
        // Author: {C32DB804-FF2D-4656-A38A-B0394BA5C931}
        SPFieldUserValue authorValue = new SPFieldUserValue(web, item[new Guid("{C32DB804-FF2D-4656-A38A-B0394BA5C931}")].ToString());

        UserProfileManager profileManager = new UserProfileManager(ServerContext.GetContext(web.Site));
        UserProfile profile = profileManager.GetUserProfile(authorValue.LookupId);
        UserProfileValueCollection values = profile[PropertyConstants.PictureUrl];

        if (values.Count > 0)
        {
            // Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}
            SPFieldUrlValue urlValue = new SPFieldUrlValue(values.Value.ToString());
            item[new Guid("{37A5CA4C-7621-44d7-BF3B-583F742CE52F}")] = urlValue.Url;
        }
    }

    item.Update();

    // Other field GUIDs for reference:
    // News Text: {7F55A8F0-4555-46BC-B24C-222240B862AF}
    // Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}
    // Publish Date: {45E84B8B-E161-46C6-AD51-27A42E4992B5}
}
|
Getting user photo from SPUser using WSS Object model
|
I am trying to retrieve a SharePoint user's photo through the WSS 3.0 object model. I have been browsing the web for solutions, but so far I've been unable to find a way to do it. Is it possible, and if so how?
|
[
"Here is a code snippet that should help get the job done for you. You may need to do some additional validation to avoid any exceptions (ensuring the profile actually exists, ensuring the image URL actually exists, etc...):\n //get current profile manager\n UserProfileManager objUserProfileManager = new UserProfileManager(PortalContext.Current);\n //get current users profile\n UserProfile profile = objUserProfileManager.GetUserProfile(true);\n //get user image URL\n string imageUrl = (string)profile[PropertyConstants.PictureUrl];\n\n //do something here with imageUrl\n\n",
"If you are strictly talking about WSS 3.0 (and not MOSS), then you really don't have global user profiles per se, but a hiddenh User Information List in each site collection. That mean none of the stuff in the Microsoft.Office.Server namespaces is available to you.\nHowever, you can update the User Information List programatically as long as you know the URL to a user's picture. As long as you're running with some kind of elevated privileges, you should be able to manipulate this list just like you can with any other SharePoint list. Keep in mind that this list is only good for the scope of a site collection, so users would have to make this same update all over the place to actually have a photo URL. Plus users don't get into the User Information List until someone assigns some kind of permission to them, so not every user in your domain will be in there.\nThe clean way to handle this is definitely the User Profile mechanism is MOSS, but if that's an option the question should really be updated to ask about MOSS vs WSS.\n",
"Ah, You have to use the UserProfileManager class.\nMore information here: http://msdn.microsoft.com/en-us/library/microsoft.office.server.userprofiles.userprofilemanager.aspx\nExample use:\npublic override void ItemAdded(SPItemEventProperties properties)\n{\n // Get list item on which the event occurred.\n SPListItem item = properties.ListItem;\n\n // Set the Author Image field to the user's PictureURL if it exists.\n using (SPWeb web = properties.OpenWeb())\n {\n // Author: {C32DB804-FF2D-4656-A38A-B0394BA5C931}\n SPFieldUserValue authorValue = new SPFieldUserValue(properties.OpenWeb(), item[new Guid(\"{C32DB804-FF2D-4656-A38A-B0394BA5C931}\")].ToString());\n\n UserProfileManager profileManager = new UserProfileManager(ServerContext.GetContext(web.Site));\n UserProfile profile = profileManager.GetUserProfile(authorValue.LookupId);\n UserProfileValueCollection values = profile[PropertyConstants.PictureUrl];\n\n if (values.Count > 0)\n {\n // Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}\n SPFieldUrlValue urlValue = new SPFieldUrlValue(values.Value.ToString());\n item[new Guid(\"{37A5CA4C-7621-44d7-BF3B-583F742CE52F}\")] = urlValue.Url;\n }\n }\n\n item.Update();\n\n // News Text: {7F55A8F0-4555-46BC-B24C-222240B862AF}\n //\n\n // Author Image: {37A5CA4C-7621-44d7-BF3B-583F742CE52F}\n // \n\n // Publish Date: {45E84B8B-E161-46C6-AD51-27A42E4992B5}\n //\n}\n\n"
] |
[
5,
3,
2
] |
[] |
[] |
[
"photo",
"sharepoint",
"wss"
] |
stackoverflow_0000061339_photo_sharepoint_wss.txt
|
Q:
SharePoint Permissions
I would like to create a folder that users who do not have privileges to view the rest of the site can see. This user group would be granted access to the site, but I only want them to be able to view one particular page.
Is this possible to do without going to every single page and removing the new user group's access?
A:
Yeah, you should be able to create a new group and grant it access to just that list/subweb/whatever. This is assuming that you didn't grant access to all users somewhere. If you did, then hopefully the default access was granted to a default user group (like SharePoint Visitors), and you can alter that group to exclude the users who should only have access to the limited part of the site.
If created correctly the new group shouldn't have access to the rest of the site.
A:
If you are getting thrown off by the fact that the user/group is listed as having "Limited Access" in the ACLs on, say, the parent site/web: that's just a placeholder SharePoint uses to make sure people have access to at least the bare minimum set of objects (e.g. the theme and other UI files and the parent web itself) needed to get to the list or item you actually want them to have access to.
As long as the group only has access on a single list, you shouldn't have to worry about them having access to anything else.
|
SharePoint Permissions
|
I would like to create a folder that users who do not have privileges to view the rest of the site can see. This user group would be granted access to the site, but I only want them to be able to view one particular page.
Is this possible to do without going to every single page and removing the new user group's access?
|
[
"yeah, you should be able to create a new group and add the users to that list/subweb/whatever and just that. This is assuming that you didn't grant access to all users somewhere. If you did, then hopefully the default access is granted to a default user group (like sharepoint visitors) and you can alter that group to exclude the users you only want to access the limited part of the site. \nIf created correctly the new group shouldn't have access to the rest of the site. \n",
"If you are getting thrown off by the fact that the user/group is listed as having \"Limited Access\" on the ACLs on, say, the parent site/web. That's just a placeholder SharePoint uses to make sure people have access to at least the bare minimum set of objects (e.g. theme and other UI files and the parent web itself) to get to the list or item you actually want them to have access to. \nAs long as the group only has access on a single list, you should have to worry about them having access to anything else.\n"
] |
[
2,
1
] |
[] |
[] |
[
"permissions",
"sharepoint"
] |
stackoverflow_0000055669_permissions_sharepoint.txt
|
Q:
Multicore Text File Parsing
I have a quad core machine and would like to write some code to parse a text file that takes advantage of all four cores. The text file basically contains one record per line.
Multithreading isn't my forte so I'm wondering if anyone could give me some patterns that I might be able to use to parse the file in an optimal manner.
My first thoughts are to read all the lines into some sort of queue and then spin up threads to pull the lines off the queue and process them, but that means the queue would have to exist in memory and these are fairly large files so I'm not so keen on that idea.
My next thoughts are to have some sort of controller that will read in a line and assign it a thread to parse, but I'm not sure if the controller will end up being a bottleneck if the threads are processing the lines faster than it can read and assign them.
I know there's probably another simpler solution than both of these but at the moment I'm just not seeing it.
A:
I'd go with your original idea. If you are concerned that the queue might get too large, implement a buffer-zone for it (i.e. if it gets above 100 lines, stop reading the file, and if it gets below 20, start reading again; you'd need to do some testing to find the optimal barriers). Make it so that any of the threads can potentially be the "reader thread"; since it has to lock the queue to pull an item out anyway, it can also check whether the "low buffer region" has been hit and start reading again. While it's doing this the other threads can read out the rest of the queue.
Or if you prefer, have one reader thread assign the lines to three other processor threads (via their own queues) and implement a work-stealing strategy. I've never done this so I don't know how hard it is.
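For what it's worth, here is a sketch of that bounded-buffer idea in C#. BlockingCollection only arrived in .NET 4, so on 2.0 you would hand-roll the same thing with a Queue plus Monitor.Wait/Pulse; "records.txt" and Process are placeholders:
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class BoundedQueueParser
{
    static void Process(string line)
    {
        // Parse one record here.
    }

    static void Main()
    {
        // Capacity 100 gives the "buffer zone": Add blocks while the queue is full.
        BlockingCollection<string> queue = new BlockingCollection<string>(100);

        Thread[] workers = new Thread[3];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(delegate()
            {
                // Consume until CompleteAdding is called and the queue drains.
                foreach (string line in queue.GetConsumingEnumerable())
                    Process(line);
            });
            workers[i].Start();
        }

        // This thread is the reader; it blocks automatically when workers fall behind.
        foreach (string line in File.ReadLines("records.txt"))
            queue.Add(line);
        queue.CompleteAdding();

        foreach (Thread t in workers)
            t.Join();
    }
}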
A:
Mark's answer is the simpler, more elegant solution. Why build a complex program with inter-thread communication if it's not necessary? Spawn 4 threads. Each thread calculates size-of-file/4 to determine its start point (and stop point). Each thread can then work entirely independently.
The only reason to add a special thread to handle reading is if you expect some lines to take a very long time to process and you expect that these lines are clustered in a single part of the file. Adding inter-thread communication when you don't need it is a very bad idea. You greatly increase the chance of introducing an unexpected bottleneck and/or synchronization bugs.
A:
This will eliminate the bottleneck of having a single thread do the reading:
open file
for each thread n=0,1,2,3:
  seek to file offset (n * filesize) / 4
  scan to next complete line
  process all lines in your part of the file
A:
My experience is with Java, not C#, so apologies if these solutions don't apply.
The immediate solution I can think up off the top of my head would be to have an executor that runs 3 threads (using Executors.newFixedThreadPool, say). For each line/record read from the input file, fire off a job at the executor (using ExecutorService.submit). The executor will queue requests for you, and allocate between the 3 threads.
Probably better solutions exist, but hopefully that will do the job. :-)
ETA: Sounds a lot like Wolfbyte's second solution. :-)
ETA2: System.Threading.ThreadPool sounds like a very similar idea in .NET. I've never used it, but it may be worth your while!
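A minimal Java sketch of that executor approach ("records.txt" and parse are placeholders; note the executor's internal queue is unbounded, so very large files may need a bounded handoff):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LineParser {

    public static void main(String[] args) throws IOException, InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        BufferedReader in = new BufferedReader(new FileReader("records.txt"));
        String line;
        while ((line = in.readLine()) != null) {
            final String rec = line;
            pool.submit(new Runnable() {
                public void run() {
                    parse(rec); // per-record work goes here
                }
            });
        }
        in.close();
        pool.shutdown();                          // no new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for queued work to finish
    }

    static void parse(String record) {
        // parse one record
    }
}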
A:
Since the bottleneck will generally be in the processing and not the reading when dealing with files I'd go with the producer-consumer pattern. To avoid locking I'd look at lock free lists. Since you are using C# you can take a look at Julian Bucknall's Lock-Free List code.
A:
@lomaxx
@Derek & Mark: I wish there was a way to accept 2 answers. I'm going to have to end up going with Wolfbyte's solution because if I split the file into n sections there is the potential for a thread to come across a batch of "slow" transactions, however if I was processing a file where each process was guaranteed to require an equal amount of processing then I really like your solution of just splitting the file into chunks and assigning each chunk to a thread and being done with it.
No worries. If clustered "slow" transactions are an issue, then the queuing solution is the way to go. Depending on how fast or slow the average transaction is, you might also want to look at assigning multiple lines at a time to each worker. This will cut down on synchronization overhead. Likewise, you might need to optimize your buffer size. Of course, both of these are optimizations that you should probably only do after profiling. (No point in worrying about synchronization if it's not a bottleneck.)
A:
If the text that you are parsing is made up of repeated strings and tokens, break the file into chunks and for each chunk you could have one thread pre-parse it into tokens consisting of keywords, "punctuation", ID strings, and values. String compares and lookups can be quite expensive and passing this off to several worker threads can speed up the purely logical / semantic part of the code if it doesn't have to do the string lookups and comparisons.
The pre-parsed data chunks (where you have already done all the string comparisons and "tokenized" it) can then be passed to the part of the code that would actually look at the semantics and ordering of the tokenized data.
Also, you mention you are concerned with the size of your file occupying a large amount of memory. There are a couple things you could do to cut back on your memory budget.
Split the file into chunks and parse it. Read in only as many chunks as you are working on at a time plus a few for "read ahead" so you do not stall on disk when you finish processing a chunk before you go to the next chunk.
Alternatively, large files can be memory mapped and "demand" loaded. If you have more threads working on processing the file than CPUs (usually threads = 1.5-2X CPU's is a good number for demand paging apps), the threads that are stalling on IO for the memory mapped file will halt automatically from the OS until their memory is ready and the other threads will continue to process.
|
Multicore Text File Parsing
|
I have a quad core machine and would like to write some code to parse a text file that takes advantage of all four cores. The text file basically contains one record per line.
Multithreading isn't my forte so I'm wondering if anyone could give me some patterns that I might be able to use to parse the file in an optimal manner.
My first thoughts are to read all the lines into some sort of queue and then spin up threads to pull the lines off the queue and process them, but that means the queue would have to exist in memory and these are fairly large files so I'm not so keen on that idea.
My next thoughts are to have some sort of controller that will read in a line and assign it a thread to parse, but I'm not sure if the controller will end up being a bottleneck if the threads are processing the lines faster than it can read and assign them.
I know there's probably another simpler solution than both of these but at the moment I'm just not seeing it.
|
[
"I'd go with your original idea. If you are concerned that the queue might get too large implement a buffer-zone for it (i.e. If is gets above 100 lines the stop reading the file and if it gets below 20 then start reading again. You'd need to do some testing to find the optimal barriers). Make it so that any of the threads can potentially be the \"reader thread\" as it has to lock the queue to pull an item out anyway it can also check to see if the \"low buffer region\" has been hit and start reading again. While it's doing this the other threads can read out the rest of the queue.\nOr if you prefer, have one reader thread assign the lines to three other processor threads (via their own queues) and implement a work-stealing strategy. I've never done this so I don't know how hard it is.\n",
"Mark's answer is the simpler, more elegant solution. Why build a complex program with inter-thread communication if it's not necessary? Spawn 4 threads. Each thread calculates size-of-file/4 to determine it's start point (and stop point). Each thread can then work entirely independently.\nThe only reason to add a special thread to handle reading is if you expect some lines to take a very long time to process and you expect that these lines are clustered in a single part of the file. Adding inter-thread communication when you don't need it is a very bad idea. You greatly increase the chance of introducing an unexpected bottleneck and/or synchronization bugs.\n",
"This will eliminate bottlenecks of having a single thread do the reading:\nopen file\nfor each thread n=0,1,2,3:\n seek to file offset 1/n*filesize\n scan to next complete line\n process all lines in your part of the file\n\n",
"My experience is with Java, not C#, so apologies if these solutions don't apply.\nThe immediate solution I can think up off the top of my head would be to have an executor that runs 3 threads (using Executors.newFixedThreadPool, say). For each line/record read from the input file, fire off a job at the executor (using ExecutorService.submit). The executor will queue requests for you, and allocate between the 3 threads.\nProbably better solutions exist, but hopefully that will do the job. :-)\nETA: Sounds a lot like Wolfbyte's second solution. :-)\nETA2: System.Threading.ThreadPool sounds like a very similar idea in .NET. I've never used it, but it may be worth your while!\n",
"Since the bottleneck will generally be in the processing and not the reading when dealing with files I'd go with the producer-consumer pattern. To avoid locking I'd look at lock free lists. Since you are using C# you can take a look at Julian Bucknall's Lock-Free List code.\n",
"@lomaxx\n\n@Derek & Mark: I wish there was a way to accept 2 answers. I'm going to have to end up going with Wolfbyte's solution because if I split the file into n sections there is the potential for a thread to come across a batch of \"slow\" transactions, however if I was processing a file where each process was guaranteed to require an equal amount of processing then I really like your solution of just splitting the file into chunks and assigning each chunk to a thread and being done with it.\n\nNo worries. If clustered \"slow\" transactions is a issue, then the queuing solution is the way to go. Depending on how fast or slow the average transaction is, you might also want to look at assigning multiple lines at a time to each worker. This will cut down on synchronization overhead. Likewise, you might need to optimize your buffer size. Of course, both of these are optimizations that you should probably only do after profiling. (No point in worrying about synchronization if it's not a bottleneck.)\n",
"If the text that you are parsing is made up of repeated strings and tokens, break the file into chunks and for each chunk you could have one thread pre-parse it into tokens consisting of keywords, \"punctuation\", ID strings, and values. String compares and lookups can be quite expensive and passing this off to several worker threads can speed up the purely logical / semantic part of the code if it doesn't have to do the string lookups and comparisons.\nThe pre-parsed data chunks (where you have already done all the string comparisons and \"tokenized\" it) can then be passed to the part of the code that would actually look at the semantics and ordering of the tokenized data.\nAlso, you mention you are concerned with the size of your file occupying a large amount of memory. There are a couple things you could do to cut back on your memory budget.\nSplit the file into chunks and parse it. Read in only as many chunks as you are working on at a time plus a few for \"read ahead\" so you do not stall on disk when you finish processing a chunk before you go to the next chunk.\nAlternatively, large files can be memory mapped and \"demand\" loaded. If you have more threads working on processing the file than CPUs (usually threads = 1.5-2X CPU's is a good number for demand paging apps), the threads that are stalling on IO for the memory mapped file will halt automatically from the OS until their memory is ready and the other threads will continue to process.\n"
] |
[
9,
9,
3,
1,
1,
0,
0
] |
[] |
[] |
[
"c#",
"multithreading"
] |
stackoverflow_0000007015_c#_multithreading.txt
|
Q:
What metrics for GUI usability do you know?
Of course the best metric would be the happiness of your users.
But what metrics do you know for GUI usability measurements?
For example, one of the common metrics is the average click count to perform an action.
What other metrics do you know?
A:
Jakob Nielsen has several articles regarding usability metrics, including one that is entitled, well, Usability Metrics:
The most basic measures are based on the definition of usability as a quality metric:
success rate (whether users can perform the task at all),
the time a task requires,
the error rate, and
users' subjective satisfaction.
A:
I just look at where I want users to go and where (physically) they are going on screen; I do this with data from Google Analytics.
A:
Not strictly usability, but we sometimes measure the ratio of the GUI and the backend code. This is for the managers, to remind them that while functionality is important, the GUI should get a proportional budget for user testing and study too.
A:
check:
http://www.iqcontent.com/blog/2007/05/a-really-simple-metric-for-measuring-user-interfaces/
Here is a simple pre-launch check you should do on all your web applications. It only takes about 5 seconds and one screenshot.

Q: “What percentage of your interface contains stuff that your customers want to see?”

a. 10%
b. 25%
c. 100%

If you answer a or b then you might do well, but you’ll probably get blown out of the water once someone decides to enter the market with option c.
|
What metrics for GUI usability do you know?
|
Of course the best metric would be the happiness of your users.
But what metrics do you know for GUI usability measurements?
For example, one of the common metrics is the average click count to perform an action.
What other metrics do you know?
|
[
"Jakob Nielsen has several articles regarding usability metrics, including one that is entitled, well, Usability Metrics:\n\nThe most basic measures are based on the definition of usability as a quality metric:\n\nsuccess rate (whether users can perform the task at all),\nthe time a task requires,\nthe error rate, and\nusers' subjective satisfaction.\n\n\n",
"I just look at where I want users to go and where (physically) they are going on screen, I do this with data from Google Analytics.\n",
"Not strictly usability, but we sometimes measure the ratio of the GUI and the backend code. This is for the managers, to remind them, that while functionality is importaint, the GUI should get a proportional budget for user testing and study too.\n",
"check: \nhttp://www.iqcontent.com/blog/2007/05/a-really-simple-metric-for-measuring-user-interfaces/\n\nHere is a simple pre-launch check you\n should do on all your web\n applications. It only takes about 5\n seconds and one screeshot\nQ: “What percentage of your interface contains stuff that your customers\n want to see?”\n\n\n10%\n25%\n100%\n\nIf you answer a, or b then you might\n do well, but you’ll probably get blown\n out of the water once someone decides\n to enter the market with option c.\n\n"
] |
[
6,
0,
0,
0
] |
[] |
[] |
[
"metrics",
"usability",
"user_interface"
] |
stackoverflow_0000042863_metrics_usability_user_interface.txt
|
Q:
What can cause Web.sitemap to not be found?
I have an asp:menu object which I set up to use a SiteMapDataSource, but every time I try to run the site I get a yellow screen from Firefox saying it cannot find the web.sitemap. Here's the code for the SiteMapDataSource and the menu. The Web.sitemap file is sitting in the root directory of the website.
<div>
<asp:Menu ID="MainMenu" CssClass="wTheme" Orientation="Horizontal" runat="server" DataSourceID="SiteMapDataSource1">
</asp:Menu>
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" SiteMapProvider="Web.sitemap" />
</div>
And the Web.sitemap looks like so:
<?xml version="1.0" encoding="utf-8" ?>
A:
I had a similar problem where I was specifying the path to the SiteMap from within my DataSource control. I tried removing it and it worked.
Try removing the path from the SiteMapDataSource and ensure that web.sitemap is in the root directory and see if that fixes it.
A:
You need to specify in web.config to use the XmlSiteMapProvider and provide it with the correct path to the .sitemap file.
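Something along these lines should work - a sketch of the web.config registration (the provider name is your choice; siteMapFile points at the root Web.sitemap):
<system.web>
  <siteMap defaultProvider="XmlSiteMapProvider" enabled="true">
    <providers>
      <add name="XmlSiteMapProvider"
           type="System.Web.XmlSiteMapProvider"
           siteMapFile="Web.sitemap" />
    </providers>
  </siteMap>
</system.web>
The SiteMapDataSource then references the provider by name (e.g. SiteMapProvider="XmlSiteMapProvider"), not by file name.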
|
What can cause Web.sitemap to not be found?
|
I have an asp:menu object which I set up to use a SiteMapDataSource, but every time I try to run the site I get a yellow screen from Firefox saying it cannot find the web.sitemap. Here's the code for the SiteMapDataSource and the menu. The Web.sitemap file is sitting in the root directory of the website.
<div>
<asp:Menu ID="MainMenu" CssClass="wTheme" Orientation="Horizontal" runat="server" DataSourceID="SiteMapDataSource1">
</asp:Menu>
<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" SiteMapProvider="Web.sitemap" />
</div>
And the Web.sitemap looks like so:
<?xml version="1.0" encoding="utf-8" ?>
|
[
"I had a similar problem where I was specifying the path to the SiteMap from within my DataSource control. I tried removing it and it worked.\nTry removing the path from the SiteMapDataSource and ensure that web.sitemap is in the root directory and see if that fixes it.\n",
"You need to specify in web.config to use XmlSiteMapProvider and provide it with correct path to .sitemap file.\n"
] |
[
2,
0
] |
[] |
[] |
[
"asp.net"
] |
stackoverflow_0000080031_asp.net.txt
|
Q:
What is the easiest way using T-SQL / MS-SQL to append a string to existing table cells?
I have a table with a 'filename' column.
I recently performed an insert into this column but in my haste forgot to append the file extension to all the filenames entered. Fortunately they are all '.jpg' images.
How can I easily update the 'filename' column of these inserted fields (assuming I can select the recent rows based on known id values) to include the '.jpg' extension?
A:
The solution is:
UPDATE tablename SET [filename] = RTRIM([filename]) + '.jpg' WHERE id > 50
RTRIM is required because otherwise the [filename] column in its entirety will be selected for the string concatenation i.e. if it is a varchar(20) column and filename is only 10 letters long then it will still select those 10 letters and then 10 spaces. This will in turn result in an error as you try to fit 20 + 3 characters into a 20 character long field.
A:
MattMitchell's answer is correct if the column is a CHAR(20), but is not true if it was a VARCHAR(20) and the spaces hadn't been explicitly entered.
If you do try it on a CHAR field without the RTRIM function you will get a "String or binary data would be truncated" error.
A:
Nice easy one I think.
update MyTable
set filename = filename + '.jpg'
where ...
Edit: Ooh +1 to @MattMitchell's answer for the rtrim suggestion.
A:
If the original data came from a char column or variable (before being inserted into this table), then the original data had the spaces appended before becoming a varchar.
DECLARE @Name char(10), @Name2 varchar(10)
SELECT
@Name = 'Bob',
@Name2 = 'Bob'
SELECT
CASE WHEN @Name2 = @Name THEN 1 ELSE 0 END as Equal,
  CASE WHEN @Name2 like @Name THEN 1 ELSE 0 END as Similar
Life Lesson : never use char.
A:
The answer to the mystery of the trailing spaces can be found in the ANSI_PADDING setting.
For more information visit: SET ANSI_PADDING (Transact-SQL)
The default is ANSI_PADDING ON. This affects a column only when it is created; it does not change existing columns.
Before you run the update query, verify your data. It could have been compromised.
Run the following query to find compromised rows:
SELECT *
FROM tablename
WHERE LEN(RTRIM([filename])) > 46
-- The column size varchar(50) minus 4 chars
-- for the needed file extension '.jpg' is 46.
These rows either have lost some characters or there is not enough space for adding the file extension.
A:
I wanted to adjust David B's "Life Lesson". I think it should be "never use char for variable length string values" -> There are valid uses for the char data type, just not as many as some people think :)
|
What is the easiest way using T-SQL / MS-SQL to append a string to existing table cells?
|
I have a table with a 'filename' column.
I recently performed an insert into this column but in my haste forgot to append the file extension to all the filenames entered. Fortunately they are all '.jpg' images.
How can I easily update the 'filename' column of these inserted fields (assuming I can select the recent rows based on known id values) to include the '.jpg' extension?
|
[
"The solution is:\nUPDATE tablename SET [filename] = RTRIM([filename]) + '.jpg' WHERE id > 50\n\nRTRIM is required because otherwise the [filename] column in its entirety will be selected for the string concatenation i.e. if it is a varchar(20) column and filename is only 10 letters long then it will still select those 10 letters and then 10 spaces. This will in turn result in an error as you try to fit 20 + 3 characters into a 20 character long field.\n",
"MattMitchell's answer is correct if the column is a CHAR(20), but is not true if it was a VARCHAR(20) and the spaces hadn't been explicitly entered.\nIf you do try it on a CHAR field without the RTRIM function you will get a \"String or binary data would be truncated\" error.\n",
"Nice easy one I think.\nupdate MyTable\nset filename = filename + '.jpg'\nwhere ...\n\nEdit: Ooh +1 to @MattMitchell's answer for the rtrim suggestion.\n",
"If the original data came from a char column or variable (before being inserted into this table), then the original data had the spaces appended before becoming a varchar.\nDECLARE @Name char(10), @Name2 varchar(10)\nSELECT\n @Name = 'Bob',\n @Name2 = 'Bob'\n\nSELECT\n CASE WHEN @Name2 = @Name THEN 1 ELSE 0 END as Equal,\n CASE WHEN @Name2 like @Name THEN 1 ELSE 0 END as Similiar\n\nLife Lesson : never use char.\n",
"The answer to the mystery of the trailing spaces can be found in the ANSI_PADDING\nFor more information visit: SET ANSI_PADDING (Transact-SQL)\nThe default is ANSI_PADDIN ON. This will affect the column only when it is created but not to existing columns.\nBefore you run the update query, verify your data. It could have been compromised.\nRun the following query to find compromised rows:\nSELECT *\nFROM tablename \nWHERE LEN(RTRIM([filename])) > 46 \n-- The column size varchar(50) minus 4 chars \n-- for the needed file extension '.jpg' is 46.\n\nThese rows either have lost some characters or there is not enough space for adding the file extension. \n",
"I wanted to adjust David B's \"Life Lesson\". I think it should be \"never use char for variable length string values\" -> There are valid uses for the char data type, just not as many as some people think :)\n"
] |
[
27,
6,
4,
1,
1,
1
] |
[] |
[] |
[
"sql",
"sql_server",
"tsql"
] |
stackoverflow_0000017624_sql_sql_server_tsql.txt
|
Q:
Byte buffer transfer via UDP
Can you provide an example of a byte buffer transferred between two java classes via UDP datagram?
A:
How's this?
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Server {

    public static void main(String[] args) throws IOException {
        // Bind to port 5000 and block until a datagram arrives.
        DatagramSocket socket = new DatagramSocket(new InetSocketAddress(5000));
        byte[] message = new byte[512];
        DatagramPacket packet = new DatagramPacket(message, message.length);
        socket.receive(packet);
        // Only the received portion of the buffer holds real data.
        System.out.println(new String(packet.getData(), packet.getOffset(), packet.getLength()));
    }
}
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetSocketAddress;
public class Client {

    public static void main(String[] args) throws IOException {
        DatagramSocket socket = new DatagramSocket();
        // Send to the server listening on localhost:5000.
        socket.connect(new InetSocketAddress("localhost", 5000));
        byte[] message = "Oh Hai!".getBytes();
        DatagramPacket packet = new DatagramPacket(message, message.length);
        socket.send(packet);
    }
}
A:
@none
The DatagramSocket classes sure need a polish up; DatagramChannel is slightly better for clients, but confusing for server programming. For example:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
public class Client {

    public static void main(String[] args) throws IOException {
        DatagramChannel channel = DatagramChannel.open();
        ByteBuffer buffer = ByteBuffer.wrap("Oh Hai!".getBytes());
        channel.send(buffer, new InetSocketAddress("localhost", 5000));
    }
}
Bring on JSR-203 I say
|
Byte buffer transfer via UDP
|
Can you provide an example of a byte buffer transferred between two java classes via UDP datagram?
|
[
"Hows' this ?\n\nimport java.io.IOException;\nimport java.net.DatagramPacket;\nimport java.net.DatagramSocket;\nimport java.net.InetSocketAddress;\n\n\npublic class Server {\n\n public static void main(String[] args) throws IOException {\n DatagramSocket socket = new DatagramSocket(new InetSocketAddress(5000));\n byte[] message = new byte[512];\n DatagramPacket packet = new DatagramPacket(message, message.length);\n socket.receive(packet);\n System.out.println(new String(packet.getData(), packet.getOffset(), packet.getLength()));\n }\n}\n\n\nimport java.io.IOException;\nimport java.net.DatagramPacket;\nimport java.net.DatagramSocket;\nimport java.net.InetSocketAddress;\n\n\npublic class Client {\n\n public static void main(String[] args) throws IOException {\n DatagramSocket socket = new DatagramSocket();\n socket.connect(new InetSocketAddress(5000));\n byte[] message = \"Oh Hai!\".getBytes();\n DatagramPacket packet = new DatagramPacket(message, message.length);\n socket.send(packet);\n }\n}\n\n",
"@none\nThe DatagramSocket classes sure need a polish up, DatagramChannel is slightly better for clients, but confusing for server programming. For example:\n\nimport java.io.IOException;\nimport java.net.InetSocketAddress;\nimport java.nio.ByteBuffer;\nimport java.nio.channels.DatagramChannel;\n\n\npublic class Client {\n\n public static void main(String[] args) throws IOException {\n DatagramChannel channel = DatagramChannel.open();\n ByteBuffer buffer = ByteBuffer.wrap(\"Oh Hai!\".getBytes());\n channel.send(buffer, new InetSocketAddress(\"localhost\", 5000));\n }\n}\n\nBring on JSR-203 I say\n"
] |
[
4,
0
] |
[] |
[] |
[
"datagram",
"java",
"udp"
] |
stackoverflow_0000080042_datagram_java_udp.txt
|
Q:
What does COINIT_SPEED_OVER_MEMORY do?
When calling CoInitializeEx, you can specify the following values for dwCoInit:
typedef enum tagCOINIT {
COINIT_MULTITHREADED = 0x0,
COINIT_APARTMENTTHREADED = 0x2,
COINIT_DISABLE_OLE1DDE = 0x4,
COINIT_SPEED_OVER_MEMORY = 0x8,
} COINIT;
What does the suggestively titled "speed over memory" value do? Is it ignored these days in COM?
A:
No idea if it's still used but it was meant to change the balance used by the COM algorithms.
If you had tons of memory and wanted speed at all costs, you would set that flag.
In low-memory environments, leaving that flag off would favor reduced memory usage.
As it turns out, the marvellous Raymond Chen (of "The Old New Thing" fame) has now weighed in on the subject and, despite what that flag was meant to do, it apparently does nothing at all.
See What does the COINIT_SPEED_OVER_MEMORY flag to CoInitializeEx do? for more details:
When should you enable this mode? It doesn't matter, because as far as I can tell, there is no code anywhere in COM that changes its behavior based on whether the process has been placed into this mode! It looks like the flag was added when DCOM was introduced, but it never got hooked up to anything. (Or whatever code that had been hooked up to it never shipped.)
Also http://archives.neohapsis.com/archives/microsoft/various/dcom/2001-q1/0160.html from Steve Swartz, one of the original COM+ architects:
COINIT_SPEED_OVER_MEMORY is ignored by COM.
|
What does COINIT_SPEED_OVER_MEMORY do?
|
When calling CoInitializeEx, you can specify the following values for dwCoInit:
typedef enum tagCOINIT {
COINIT_MULTITHREADED = 0x0,
COINIT_APARTMENTTHREADED = 0x2,
COINIT_DISABLE_OLE1DDE = 0x4,
COINIT_SPEED_OVER_MEMORY = 0x8,
} COINIT;
What does the suggestively titled "speed over memory" value do? Is it ignored these days in COM?
|
[
"No idea if it's still used but it was meant to change the balance used by the COM algorithms.\nIf you had tons of memory and wanted speed at all costs, you would set that flag.\nIn low-memory environments, leaving that flag off would favor reduced memory usage.\n\nAs it turns out, the marvellous Raymond Chen (of \"The Old New Thing\" fame) has now weighed in on the subject and, despite what that flag was meant to do, it apparently does nothing at all.\nSee What does the COINIT_SPEED_OVER_MEMORY flag to CoInitializeEx do? for more details:\n\nWhen should you enable this mode? It doesn't matter, because as far as I can tell, there is no code anywhere in COM that changes its behavior based on whether the process has been placed into this mode! It looks like the flag was added when DCOM was introduced, but it never got hooked up to anything. (Or whatever code that had been hooked up to it never shipped.)\n\nAlso http://archives.neohapsis.com/archives/microsoft/various/dcom/2001-q1/0160.html from Steve Swartz, one of the original COM+ architects:\n\nCOINIT_SPEED_OVER_MEMORY is ignored by COM.\n\n"
] |
[
15
] |
[] |
[] |
[
"com"
] |
stackoverflow_0000080160_com.txt
|
Q:
Converting a .rptproj from VS2005 to VS2008
I've got my brand new VS2008 and decided to convert my main solution from VS2005. One of the projects is a SQL2005 Reporting Services project. Now that I've converted, I cannot load it in VS2008. Is there any way around this?
My problem is that my solution is a hybrid and has websites libraries and reports in there.
Separating it out breaks the logic the solution entity.
A:
Visual Studio 2008 does not support the 2005 Business Intelligence projects, so if you haven't already, don't uninstall the 2005 Business Intelligence tools! You can continue to maintain those projects independently in VS2005.
SQL Server 2008 Business Intelligence will integrate with VS2008 so you will require that and an upgrade to your existing reporting project to use in VS2008.
A:
To obtain BI2008 you must install MSSQL2008. When you have done so, you may find the project will load. If it doesn't, create a new report project and add existing RDL files to it.
|
Converting a .rptproj from VS2005 to VS2008
|
I've got my brand new VS2008 and decided to convert my main solution from VS2005. One of the projects is a SQL2005 Reporting Services project. Now that I've converted, I cannot load it in VS2008. Is there any way around this?
My problem is that my solution is a hybrid and has websites libraries and reports in there.
Separating it out breaks the logic the solution entity.
|
[
"Visual Studio 2008 does not support the 2005 business intelligence projects, so if you have not done so already don't uninstall 2005 Business Intelligence! You can continue to maintain those projects independently in VS2005.\nSQL Server 2008 Business Intelligence will integrate with VS2008 so you will require that and an upgrade to your existing reporting project to use in VS2008.\n",
"To obtain BI2008 you must install MSSQL2008. When you have done so, you may find the project will load. If it doesn't, create a new report project and add existing RDL files to it.\n"
] |
[
1,
1
] |
[] |
[] |
[
"reporting_services",
"visual_studio_2008"
] |
stackoverflow_0000056256_reporting_services_visual_studio_2008.txt
|
Q:
Comparing MySQL Cross and Inner Joins
What are the potential pros and cons of each of these queries given different databases, configurations, etc.? Is there ever a time when one would be more efficient than the other? Vice versa? Is there an even better way to do it? Can you explain why?
Query 1:
SELECT
*
FROM
table_a, table_b, table_c
WHERE
table_a.id = table_b.id AND
table_a.id = table_c.id AND
table_a.create_date > DATE('1998-01-01');
Query 2:
SELECT
*
FROM
table_a
INNER JOIN table_b ON
table_a.id = table_b.id
INNER JOIN table_c ON
table_a.id = table_c.id
WHERE
table_a.create_date > DATE('1998-01-01');
A:
Same query, different revision of the SQL spec. The query optimizer should come up with the same query plan for both.
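If you want to verify this against your own schema, MySQL's EXPLAIN shows the plan each form produces (using the question's hypothetical tables):
EXPLAIN SELECT *
FROM table_a, table_b, table_c
WHERE table_a.id = table_b.id
  AND table_a.id = table_c.id
  AND table_a.create_date > DATE('1998-01-01');

EXPLAIN SELECT *
FROM table_a
INNER JOIN table_b ON table_a.id = table_b.id
INNER JOIN table_c ON table_a.id = table_c.id
WHERE table_a.create_date > DATE('1998-01-01');
If both statements report identical plan rows, the optimizer is treating them the same.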
A:
Nope. I'm just sharing a large, overwhelmed database with some coworkers and am trying to come up with some ways to get more processor bang for our buck. I've been looking around online but haven't found a good explanation for some questions like this.
Sorry for sounding homework-y. I guess I spent too many years as a TA.
A:
Actually, I think query 2's more readable. Think about when you get to, say, 5, 6, or 7 tables by the time you hit the WHERE clause in query 1. Following the joins could get messy.
As for performance, I have no idea. I bet if you go to the MySQL website, there would be info there - probably examples of joins.
Professionally, I've only worked on one project. But it was a big one, and they always followed query 2's format. This was using Microsoft SQL Server though.
|
Comparing MySQL Cross and Inner Joins
|
What are the potential pros and cons of each of these queries given different databases, configurations, etc.? Is there ever a time when one would be more efficient than the other? Vice versa? Is there an even better way to do it? Can you explain why?
Query 1:
SELECT
*
FROM
table_a, table_b, table_c
WHERE
table_a.id = table_b.id AND
table_a.id = table_c.id AND
table_a.create_date > DATE('1998-01-01');
Query 2:
SELECT
*
FROM
table_a
INNER JOIN table_b ON
table_a.id = table_b.id
INNER JOIN table_c ON
table_a.id = table_c.id
WHERE
table_a.create_date > DATE('1998-01-01');
|
[
"Same query, different revision of SQL spec. The query optimizer should come up with the same query plan for those.\n",
"Nope. I'm just sharing a large, overwhelmed database with some coworkers and am trying to come up with some ways to get more processor bang for our buck. I've been looking around online but haven't found a good explanation for some questions like this.\nSorry for sounding homework-y. I guess I spent too many years as a TA.\n",
"Actually, I think query 2's more readable. Think about when you get to say 5,6, or 7 tables when you hit the where clause in query one. Following the joins could get messy.\nAs for performance, I have no idea. I bet if you go to the MySQL website, there would be info there - probably examples of joins.\nProfessionally, I've only worked on one project. But it was a big one, and they always followed query 2's format. This was using Microsoft SQL Server though.\n"
] |
[
2,
0,
0
] |
[
"I agree, it's sounding a bit too much like Homework!\nIf it isn't homework then I guess the simplest answer is readability.\nAs stated before, both queries will produce the same execution plan. If this is the case then the only thing you need to worry about it maintainability.\n"
] |
[
-1
] |
[
"mysql",
"sql"
] |
stackoverflow_0000080152_mysql_sql.txt
|
Q:
forms and jQuery
I'm creating a simple form for a site I manage. I use jQuery for my JavaScript. I noticed a large number of plugins for jQuery and forms. Does anybody have any favorites that they find especially useful? In particular, plugins to help with validation would be the most useful.
A:
The jQuery Form Plugin is pretty much standard. It handles serializing form fields and AJAX submission.
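For instance, a minimal sketch of how it is typically wired up (the form id and the callback are placeholders):
$(document).ready(function() {
    // ajaxForm() intercepts the submit, serializes the fields,
    // and posts them via AJAX to the form's action URL.
    $('#myForm').ajaxForm(function(responseText) {
        alert('Server said: ' + responseText);
    });
});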
A:
Form Validation is one that comes to mind. I think it is being used here on SO.
|
forms and jQuery
|
I'm creating a simple form for a site I manage. I use jQuery for my JavaScript. I noticed a large number of plugins for jQuery and forms. Does anybody have any favorites that they find especially useful? In particular, plugins to help with validation would be the most useful.
|
[
"The jQuery Form Plugin is pretty much standard. It handles serializing form fields and AJAX submission.\n",
"Form Validation is one that comes to my mind. I think is being used here in SO.\n"
] |
[
5,
2
] |
[] |
[] |
[
"javascript",
"jquery",
"webforms"
] |
stackoverflow_0000079541_javascript_jquery_webforms.txt
|
Q:
how to embed a true type font within a postscript file
I have a cross-platform app, and for my Linux and Mac versions it generates a PostScript file for printing reports and then prints it with CUPS. It works for simple characters and images, but I would like the ability to embed a TrueType font directly into the PostScript file. Does anyone know how to do this?
Also, I can encode simple ASCII characters, but I'm not sure how to encode any characters beyond the usual a-z and 0-9, such as foreign characters with accents.
A:
In order to embed a TrueType font in a PostScript document, you will first need to convert it to a Type 42 font. This conversion turns the font into PostScript code.
There are several small utilities for doing this conversion, or you can read the Type 42 specification and write your own code for it.
Embedding Type 1 fonts is a lot easier. Linux ships with a large set of Type 1 fonts, and so does OS X if you have X11 installed. Generating PDF instead is also an option you may want to look into, since PDF can embed TrueType fonts directly.
A:
PostScript fonts come with widely varying encodings, so if you want to reliably print ISO-8859-1 characters you need to reencode the font in your PostScript program.
PostScript FAQ - How to print accented characters
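As a concrete illustration, the usual reencoding idiom (a sketch; the font name is arbitrary) copies the font dictionary and substitutes ISOLatin1Encoding:
% Re-encode Times-Roman so ISO-8859-1 accented characters print correctly.
/Times-Roman findfont
dup length dict begin
    { 1 index /FID ne { def } { pop pop } ifelse } forall
    /Encoding ISOLatin1Encoding def
    currentdict
end
/Times-Roman-Latin1 exch definefont pop

/Times-Roman-Latin1 findfont 12 scalefont setfont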
|
how to embed a true type font within a postscript file
|
I have a cross-platform app, and for my Linux and Mac versions it generates a PostScript file for printing reports and then prints it with CUPS. It works for simple characters and images, but I would like the ability to embed a TrueType font directly into the PostScript file. Does anyone know how to do this?
Also, I can encode simple ASCII characters, but I'm not sure how to encode any characters beyond the usual a-z and 0-9, such as foreign characters with accents.
|
[
"In order to embed a TrueType font in a Postscript document, you will first need to convert it to a Type 42 font. This conversion turns the font into postscript code.\nThere are several small utilities for doing this conversion, or you can read\nthe Type 42 specification and write\nyour own code for it.\nEmbedding Type 1 fonts is a lot easier. Linux ships with a large set of Type 1 fonts, and so does OS X if you have X11 installed. Generating PDF instead is also an option you may want to look into, since PDF can embed TrueType fonts directly.\n",
"Postscript fonts come with widely varying encodings, so if you want to reliably\nprint iso-8859-1 characters you need to reencode the font in your postscript\nprogram.\nPostScript FAQ - How to print accented characters\n"
] |
[
9,
5
] |
[] |
[] |
[
"cups",
"fonts",
"postscript",
"types"
] |
stackoverflow_0000078874_cups_fonts_postscript_types.txt
|
Q:
Do you use application frameworks?
Application frameworks such as DotNetNuke, Eclipse, and WebSphere are available today, offering customizable frameworks that can be used as dashboard applications. Do you use these, or do you and your peers keep writing amazing, modular, maintainable dashboard frameworks which you support yourselves?
Are there any good web-based, OS-independent frameworks out there that you suggest using to build your own enterprise-class infrastructure around?
A:
The one I use is Oracle Application Development Framework. It's a complete, fully supported framework, and Oracle use it themselves to build their own enterprise applications. It comes with a lot of JSF components that are very easy to bind to the underlying data objects.
I'd recommend this for all Java applications that need database data.
You can find a discussion of it on the Oracle Wiki:
http://wiki.oracle.com/page/ADF+Methodology+-+Work+in+Progressent
A:
There's no one right answer. Look at the business need... if you're doing fairly typical things, then starting from an established framework is a good place to start. If you feel you may need some custom components or widgets, look for a framework that's extensible using the knowledge and skills that you have in-house.
Unless your line of business is to build application frameworks or dashboards, one should look very hard before building a whole new framework or dashboard.
A:
At work, we try to create from scratch as little as possible. We use Frameworks a lot (maybe not always end to end frameworks). We have used Dot Net Nuke a lot. Another framework we use a lot is CSLA.
A:
I personally use DotNetNuke quite extensively for both personal and business ventures. However, DNN does not meet one of your requirements: as a .NET solution, it is Windows-dependent.
I have found that using DotNetNuke has greatly reduced our time to delivery, and we can focus on our core needs rather than the implementation of the common pieces.
A:
Be careful to consider how scalable the framework is. There are several frameworks out there that like to hammer your database because they think it's nothing but a glorified file system... those frameworks don't scale well at all.
|
Do you use application frameworks?
|
Application frameworks such as DotNetNuke, Eclipse, and WebSphere are available today, offering customizable frameworks that can be used as dashboard applications. Do you use these, or do you and your peers keep writing amazing, modular, maintainable dashboard frameworks which you support yourselves?
Are there any good web-based, OS-independent frameworks out there that you suggest using to build your own enterprise-class infrastructure around?
|
[
"The one I use is Oracle Application Development Framework. It's a complete, fully supported framework, and Oracle use it themselves to build their own enterprise applications. It comes with a lot of JSF components that are very easy to bind to the underlying data objects.\nI'd recommend this for all Java applications that need database data. \nYou find a discussion of it on the Oracle Wiki: \nhttp://wiki.oracle.com/page/ADF+Methodology+-+Work+in+Progressent\n",
"There's no one right answer. Look at the business need... if you're doing fairly typical things, then starting from an established framework is a good place to start. If you feel you may need some custom components or widgets, look for a framework that's extensible using the knowledge and skills that you have in-house.\nUnless your line of business is to build application frameworks or dashboards, one should look very hard before building a whole new framework or dashboard.\n",
"At work, we try to create from scratch as little as possible. We use Frameworks a lot (maybe not always end to end frameworks). We have used Dot Net Nuke a lot. Another framework we use a lot is CSLA.\n",
"I personally use DotNetNuke quite extensively for both personal and business related ventures. However DNN does not meet one of your requirements as it is a .NET solution so it is windows dependent.\nI have found that using DotNetNuke has greatly reduced our time to delivery, and we can focus on our core needs rather than the implementation of the common pieces.\n",
"Be careful to consider how scalable the framework is. There are several frameworks out there that like to hammer your database because they think it's nothing but a glorified file system... those frameworks don't scale well at all.\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"frameworks"
] |
stackoverflow_0000080216_frameworks.txt
|
Q:
What's the best way to do fixed-point math?
I need to speed up a program for the Nintendo DS which doesn't have an FPU, so I need to change floating-point math (which is emulated and slow) to fixed-point.
I started by changing floats to ints, and whenever I needed to convert them, I used x>>8 to convert the fixed-point variable x to the actual number and x<<8 to convert to fixed-point. Soon I found out it was impossible to keep track of what needed to be converted, and I also realized it would be difficult to change the precision of the numbers (8 in this case).
My question is, how should I make this easier and still fast? Should I make a FixedPoint class, or just a FixedPoint8 typedef or struct with some functions/macros to convert them, or something else? Should I put something in the variable name to show it's fixed-point?
A:
You can try my fixed point class (Latest available @ https://github.com/eteran/cpp-utilities)
// From: https://github.com/eteran/cpp-utilities/edit/master/Fixed.h
// See also: http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math
/*
* The MIT License (MIT)
*
* Copyright (c) 2015 Evan Teran
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in all
* copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#ifndef FIXED_H_
#define FIXED_H_
#include <ostream>
#include <exception>
#include <cstddef> // for size_t
#include <cstdint>
#include <type_traits>
#include <boost/operators.hpp>
namespace numeric {
template <size_t I, size_t F>
class Fixed;
namespace detail {
// helper templates to make magic with types :)
// these allow us to determine reasonable types from
// a desired size, they also let us infer the next largest type
// from a type which is nice for the division op
template <size_t T>
struct type_from_size {
static const bool is_specialized = false;
typedef void value_type;
};
#if defined(__GNUC__) && defined(__x86_64__)
template <>
struct type_from_size<128> {
static const bool is_specialized = true;
static const size_t size = 128;
typedef __int128 value_type;
typedef unsigned __int128 unsigned_type;
typedef __int128 signed_type;
typedef type_from_size<256> next_size;
};
#endif
template <>
struct type_from_size<64> {
static const bool is_specialized = true;
static const size_t size = 64;
typedef int64_t value_type;
typedef uint64_t unsigned_type;
typedef int64_t signed_type;
typedef type_from_size<128> next_size;
};
template <>
struct type_from_size<32> {
static const bool is_specialized = true;
static const size_t size = 32;
typedef int32_t value_type;
typedef uint32_t unsigned_type;
typedef int32_t signed_type;
typedef type_from_size<64> next_size;
};
template <>
struct type_from_size<16> {
static const bool is_specialized = true;
static const size_t size = 16;
typedef int16_t value_type;
typedef uint16_t unsigned_type;
typedef int16_t signed_type;
typedef type_from_size<32> next_size;
};
template <>
struct type_from_size<8> {
static const bool is_specialized = true;
static const size_t size = 8;
typedef int8_t value_type;
typedef uint8_t unsigned_type;
typedef int8_t signed_type;
typedef type_from_size<16> next_size;
};
// this is to assist in adding support for non-native base
// types (for adding big-int support), this should be fine
// unless your big-int class doesn't nicely support casting
template <class B, class N>
B next_to_base(const N& rhs) {
return static_cast<B>(rhs);
}
struct divide_by_zero : std::exception {
};
template <size_t I, size_t F>
Fixed<I,F> divide(const Fixed<I,F> &numerator, const Fixed<I,F> &denominator, Fixed<I,F> &remainder, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::next_type next_type;
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
next_type t(numerator.to_raw());
t <<= fractional_bits;
Fixed<I,F> quotient;
quotient = Fixed<I,F>::from_base(next_to_base<base_type>(t / denominator.to_raw()));
remainder = Fixed<I,F>::from_base(next_to_base<base_type>(t % denominator.to_raw()));
return quotient;
}
template <size_t I, size_t F>
Fixed<I,F> divide(Fixed<I,F> numerator, Fixed<I,F> denominator, Fixed<I,F> &remainder, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
// NOTE(eteran): division is broken for large types :-(
// especially when dealing with negative quantities
typedef typename Fixed<I,F>::base_type base_type;
typedef typename Fixed<I,F>::unsigned_type unsigned_type;
static const int bits = Fixed<I,F>::total_bits;
if(denominator == 0) {
throw divide_by_zero();
} else {
int sign = 0;
Fixed<I,F> quotient;
if(numerator < 0) {
sign ^= 1;
numerator = -numerator;
}
if(denominator < 0) {
sign ^= 1;
denominator = -denominator;
}
base_type n = numerator.to_raw();
base_type d = denominator.to_raw();
base_type x = 1;
base_type answer = 0;
// egyptian division algorithm
while((n >= d) && (((d >> (bits - 1)) & 1) == 0)) {
x <<= 1;
d <<= 1;
}
while(x != 0) {
if(n >= d) {
n -= d;
answer += x;
}
x >>= 1;
d >>= 1;
}
unsigned_type l1 = n;
unsigned_type l2 = denominator.to_raw();
// calculate the lower bits (needs to be unsigned)
// unfortunately for many fractions this overflows the type still :-/
const unsigned_type lo = (static_cast<unsigned_type>(n) << F) / denominator.to_raw();
quotient = Fixed<I,F>::from_base((answer << F) | lo);
remainder = n;
if(sign) {
quotient = -quotient;
}
return quotient;
}
}
// this is the usual implementation of multiplication
template <size_t I, size_t F>
void multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::next_type next_type;
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
next_type t(static_cast<next_type>(lhs.to_raw()) * static_cast<next_type>(rhs.to_raw()));
t >>= fractional_bits;
result = Fixed<I,F>::from_base(next_to_base<base_type>(t));
}
// this is the fall back version we use when we don't have a next size
// it is slightly slower, but is more robust since it doesn't
// require an upgraded type
template <size_t I, size_t F>
void multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {
typedef typename Fixed<I,F>::base_type base_type;
static const size_t fractional_bits = Fixed<I,F>::fractional_bits;
static const size_t integer_mask = Fixed<I,F>::integer_mask;
static const size_t fractional_mask = Fixed<I,F>::fractional_mask;
// more costly but doesn't need a larger type
const base_type a_hi = (lhs.to_raw() & integer_mask) >> fractional_bits;
const base_type b_hi = (rhs.to_raw() & integer_mask) >> fractional_bits;
const base_type a_lo = (lhs.to_raw() & fractional_mask);
const base_type b_lo = (rhs.to_raw() & fractional_mask);
const base_type x1 = a_hi * b_hi;
const base_type x2 = a_hi * b_lo;
const base_type x3 = a_lo * b_hi;
const base_type x4 = a_lo * b_lo;
result = Fixed<I,F>::from_base((x1 << fractional_bits) + (x3 + x2) + (x4 >> fractional_bits));
}
}
/*
* inheriting from boost::operators enables us to be a drop in replacement for base types
* without having to specify all the different versions of operators manually
*/
template <size_t I, size_t F>
class Fixed : boost::operators<Fixed<I,F>> {
static_assert(detail::type_from_size<I + F>::is_specialized, "invalid combination of sizes");
public:
static const size_t fractional_bits = F;
static const size_t integer_bits = I;
static const size_t total_bits = I + F;
typedef detail::type_from_size<total_bits> base_type_info;
typedef typename base_type_info::value_type base_type;
typedef typename base_type_info::next_size::value_type next_type;
typedef typename base_type_info::unsigned_type unsigned_type;
public:
static const size_t base_size = base_type_info::size;
static const base_type fractional_mask = ~((~base_type(0)) << fractional_bits);
static const base_type integer_mask = ~fractional_mask;
public:
static const base_type one = base_type(1) << fractional_bits;
public: // constructors
Fixed() : data_(0) {
}
Fixed(long n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(unsigned long n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(int n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(unsigned int n) : data_(base_type(n) << fractional_bits) {
// TODO(eteran): assert in range!
}
Fixed(float n) : data_(static_cast<base_type>(n * one)) {
// TODO(eteran): assert in range!
}
Fixed(double n) : data_(static_cast<base_type>(n * one)) {
// TODO(eteran): assert in range!
}
Fixed(const Fixed &o) : data_(o.data_) {
}
Fixed& operator=(const Fixed &o) {
data_ = o.data_;
return *this;
}
private:
// this makes it simpler to create a fixed point object from
// a native type without scaling
// use "Fixed::from_base" in order to perform this.
struct NoScale {};
Fixed(base_type n, const NoScale &) : data_(n) {
}
public:
static Fixed from_base(base_type n) {
return Fixed(n, NoScale());
}
public: // comparison operators
bool operator==(const Fixed &o) const {
return data_ == o.data_;
}
bool operator<(const Fixed &o) const {
return data_ < o.data_;
}
public: // unary operators
bool operator!() const {
return !data_;
}
Fixed operator~() const {
Fixed t(*this);
t.data_ = ~t.data_;
return t;
}
Fixed operator-() const {
Fixed t(*this);
t.data_ = -t.data_;
return t;
}
Fixed operator+() const {
return *this;
}
Fixed& operator++() {
data_ += one;
return *this;
}
Fixed& operator--() {
data_ -= one;
return *this;
}
public: // basic math operators
Fixed& operator+=(const Fixed &n) {
data_ += n.data_;
return *this;
}
Fixed& operator-=(const Fixed &n) {
data_ -= n.data_;
return *this;
}
Fixed& operator&=(const Fixed &n) {
data_ &= n.data_;
return *this;
}
Fixed& operator|=(const Fixed &n) {
data_ |= n.data_;
return *this;
}
Fixed& operator^=(const Fixed &n) {
data_ ^= n.data_;
return *this;
}
Fixed& operator*=(const Fixed &n) {
detail::multiply(*this, n, *this);
return *this;
}
Fixed& operator/=(const Fixed &n) {
Fixed temp;
*this = detail::divide(*this, n, temp);
return *this;
}
Fixed& operator>>=(const Fixed &n) {
data_ >>= n.to_int();
return *this;
}
Fixed& operator<<=(const Fixed &n) {
data_ <<= n.to_int();
return *this;
}
public: // conversion to basic types
int to_int() const {
return (data_ & integer_mask) >> fractional_bits;
}
unsigned int to_uint() const {
return (data_ & integer_mask) >> fractional_bits;
}
float to_float() const {
return static_cast<float>(data_) / Fixed::one;
}
double to_double() const {
return static_cast<double>(data_) / Fixed::one;
}
base_type to_raw() const {
return data_;
}
public:
void swap(Fixed &rhs) {
using std::swap;
swap(data_, rhs.data_);
}
public:
base_type data_;
};
// if we have the same fractional portion, but differing integer portions, we trivially upgrade the smaller type
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator+(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l + r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator-(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l - r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator*(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l * r;
}
template <size_t I1, size_t I2, size_t F>
typename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator/(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {
typedef typename std::conditional<
I1 >= I2,
Fixed<I1,F>,
Fixed<I2,F>
>::type T;
const T l = T::from_base(lhs.to_raw());
const T r = T::from_base(rhs.to_raw());
return l / r;
}
template <size_t I, size_t F>
std::ostream &operator<<(std::ostream &os, const Fixed<I,F> &f) {
os << f.to_double();
return os;
}
template <size_t I, size_t F>
const size_t Fixed<I,F>::fractional_bits;
template <size_t I, size_t F>
const size_t Fixed<I,F>::integer_bits;
template <size_t I, size_t F>
const size_t Fixed<I,F>::total_bits;
}
#endif
It is designed to be a near drop-in replacement for floats/doubles and has a choosable precision. It does make use of Boost to add all the necessary math operator overloads, so you will need that as well (I believe for this it is just a header dependency, not a library dependency).
BTW, common usage could be something like this:
using namespace numeric;
typedef Fixed<16, 16> fixed;
fixed f;
The only real rule is that the numbers have to add up to a native size of your system, such as 8, 16, 32, or 64.
A:
In modern C++ implementations, there will be no performance penalty for using simple and lean abstractions, such as concrete classes. Fixed-point computation is precisely the place where using a properly engineered class will save you from lots of bugs.
Therefore, you should write a FixedPoint8 class. Test and debug it thoroughly. If you have to convince yourself of its performance as compared to using plain integers, measure it.
It will save you from many a trouble by moving the complexity of fixed-point calculation to a single place.
If you like, you can further increase the utility of your class by making it a template and replacing the old FixedPoint8 with, say, typedef FixedPoint<short, 8> FixedPoint8; But on your target architecture this is probably not necessary, so avoid the complexity of templates at first.
There is probably a good fixed-point class somewhere on the internet - I'd start by looking at the Boost libraries.
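For illustration, a minimal sketch of such a class (all names here are made up, with 8 fraction bits as in the question; multiplication and division widen to 64 bits to protect the intermediate result):
#include <cstdint>

class FixedPoint8 {
    int32_t raw_;   // value scaled by 2^8
    FixedPoint8(int32_t raw, bool) : raw_(raw) {}   // private raw constructor
public:
    FixedPoint8(int n) : raw_(n << 8) {}
    static FixedPoint8 fromRaw(int32_t raw) { return FixedPoint8(raw, true); }

    FixedPoint8 operator+(FixedPoint8 o) const { return fromRaw(raw_ + o.raw_); }
    FixedPoint8 operator-(FixedPoint8 o) const { return fromRaw(raw_ - o.raw_); }
    FixedPoint8 operator*(FixedPoint8 o) const {
        // widen so the intermediate product keeps its precision
        return fromRaw((int32_t)(((int64_t)raw_ * o.raw_) >> 8));
    }
    FixedPoint8 operator/(FixedPoint8 o) const {
        return fromRaw((int32_t)(((int64_t)raw_ << 8) / o.raw_));
    }
    int toInt() const { return raw_ >> 8; }
    float toFloat() const { return raw_ / 256.0f; }
};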
A:
Does your floating point code actually make use of the decimal point? If so:
First you have to read Randy Yates's paper on Intro to Fixed Point Math:
http://www.digitalsignallabs.com/fp.pdf
Then you need to do "profiling" on your floating point code to figure out the appropriate range of fixed-point values required at "critical" points in your code, e.g. U(5,3) = 5 bits to the left, 3 bits to the right, unsigned.
At this point, you can apply the arithmetic rules in the paper mentioned above; the rules specify how to interpret the bits which result from arithmetic operations. You can write macros or functions to perform the operations.
It's handy to keep the floating point version around, in order to compare the floating point vs fixed point results.
A:
I wouldn't use floating point at all on a CPU without special hardware for handling it. My advice is to treat ALL numbers as integers scaled to a specific factor. For example, all monetary values can be stored in cents as integers rather than dollars as floats, so 0.72 is represented as the integer 72.
Addition and subtraction are then a very simple integer operation such as (0.72 + 1 becomes 72 + 100 becomes 172 becomes 1.72).
Multiplication is slightly more complex as it needs an integer multiply followed by a scale back such as (0.72 * 2 becomes 72 * 200 becomes 14400 becomes 144 (scaleback) becomes 1.44).
That may require special functions for performing more complex math (sine, cosine, etc.), but even those can be sped up by using lookup tables. Example: since you're using a fixed-2 representation, there are only 100 values in the range [0.0, 1.0) (0-99), and sin/cos repeat outside this range, so you only need a 100-integer lookup table.
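A small sketch of the basic idea in C (values scaled by 100 for two decimal digits; names are illustrative):
#include <stdio.h>

#define SCALE 100   /* fixed-2: two decimal digits of fraction */

int main(void) {
    int a   = 72;          /* represents 0.72 */
    int b   = 1 * SCALE;   /* represents 1.00 */
    int two = 2 * SCALE;   /* represents 2.00 */

    int sum  = a + b;                         /* 172 -> 1.72 */
    int prod = (int)((long)a * two / SCALE);  /* 14400 / 100 = 144 -> 1.44 */

    printf("%d.%02d\n", sum  / SCALE, sum  % SCALE);  /* prints 1.72 */
    printf("%d.%02d\n", prod / SCALE, prod % SCALE);  /* prints 1.44 */
    return 0;
}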
Cheers,
Pax.
A:
When I first encountered fixed point numbers I found Joe Lemieux's article, Fixed-point Math in C, very helpful, and it does suggest one way of representing fixed-point values.
I didn't wind up using his union representation for fixed-point numbers though. I mostly have experience with fixed-point in C, so I haven't had the option to use a class either. For the most part though, I think that defining your number of fraction bits in a macro and using descriptive variable names makes this fairly easy to work with. Also, I've found that it is best to have macros or functions for multiplication and especially division, or you quickly get unreadable code.
For example, with 24.8 values:
#include "stdio.h"
/* Declarations for fixed point stuff */
typedef int int_fixed;
#define FRACT_BITS 8
#define FIXED_POINT_ONE (1 << FRACT_BITS)
#define MAKE_INT_FIXED(x) ((x) << FRACT_BITS)
#define MAKE_FLOAT_FIXED(x) ((int_fixed)((x) * FIXED_POINT_ONE))
#define MAKE_FIXED_INT(x) ((x) >> FRACT_BITS)
#define MAKE_FIXED_FLOAT(x) (((float)(x)) / FIXED_POINT_ONE)
#define FIXED_MULT(x, y) ((x)*(y) >> FRACT_BITS)
#define FIXED_DIV(x, y) (((x)<<FRACT_BITS) / (y))
/* tests */
int main()
{
int_fixed fixed_x = MAKE_FLOAT_FIXED( 4.5f );
int_fixed fixed_y = MAKE_INT_FIXED( 2 );
int_fixed fixed_result = FIXED_MULT( fixed_x, fixed_y );
printf( "%.1f\n", MAKE_FIXED_FLOAT( fixed_result ) );
fixed_result = FIXED_DIV( fixed_result, fixed_y );
printf( "%.1f\n", MAKE_FIXED_FLOAT( fixed_result ) );
return 0;
}
Which writes out
9.0
4.5
Note that there are all kinds of integer overflow issues with those macros; I just wanted to keep them simple. This is just a quick-and-dirty example of how I've done this in C. In C++ you could make something a lot cleaner using operator overloading. Actually, you could easily make that C code a lot prettier too...
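For instance, one hypothetical hedge against the multiplication overflow (assuming your compiler has a 64-bit long long) is to widen the intermediate product:
#define FIXED_MULT_SAFE(x, y) ((int_fixed)(((long long)(x) * (y)) >> FRACT_BITS))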
I guess this is a long-winded way of saying: I think it's OK to use a typedef and macro approach. So long as you're clear about what variables contain fixed point values it isn't too hard to maintain, but it probably won't be as pretty as a C++ class.
If I was in your position, I would try to get some profiling numbers to show where the bottlenecks are. If there are relatively few of them then go with a typedef and macros. If you decide that you need a global replacement of all floats with fixed-point math though, then you'll probably be better off with a class.
A:
Changing fixed point representations is commonly called 'scaling'.
If you can do this with a class with no performance penalty, then that's the way to go. It depends heavily on the compiler and how it inlines. If there is a performance penalty using classes, then you need a more traditional C-style approach. The OOP approach will give you compiler-enforced type safety which the traditional implementation only approximates.
@cibyr has a good OOP implementation. Now for the more traditional one.
To keep track of which variables are scaled, you need to use a consistent convention. Make a notation at the end of each variable name to indicate whether the value is scaled or not, and write macros SCALE() and UNSCALE() that expand to x>>8 and x<<8.
#define SCALE(x) (x>>8)
#define UNSCALE(x) (x<<8)
xPositionUnscaled = UNSCALE(10);
xPositionScaled = SCALE(xPositionUnscaled);
It may seem like extra work to use so much notation, but notice how you can tell at a glance that any line is correct without looking at other lines. For example:
xPositionScaled = SCALE(xPositionScaled);
is obviously wrong, by inspection.
This is a variation of the Apps Hungarian idea that Joel mentions in this post.
A:
The original version of Tricks of the Game Programming Gurus has an entire chapter on implementing fixed-point math.
A:
template <int precision = 8> class FixedPoint {
private:
    int val_;   // raw value, scaled by 2^precision
public:
    // Construct from an integer: shift up into the fixed-point representation.
    inline FixedPoint(int val) : val_(val << precision) {}
    // Convert back to an integer: shift the fraction bits away.
    inline operator int() { return val_ >> precision; }
    // Other operators...
};
A:
Whichever way you decide to go (I'd lean toward a typedef and some CPP macros for converting), you will need to be careful to convert back and forth with some discipline.
You might find that you never need to convert back and forth. Just imagine everything in the whole system is x256.
|
What's the best way to do fixed-point math?
|
I need to speed up a program for the Nintendo DS which doesn't have an FPU, so I need to change floating-point math (which is emulated and slow) to fixed-point.
I started by changing floats to ints, and whenever I needed to convert them, I used x>>8 to convert the fixed-point variable x to the actual number and x<<8 to convert to fixed-point. Soon I found out it was impossible to keep track of what needed to be converted, and I also realized it would be difficult to change the precision of the numbers (8 in this case).
My question is, how should I make this easier and still fast? Should I make a FixedPoint class, or just a FixedPoint8 typedef or struct with some functions/macros to convert them, or something else? Should I put something in the variable name to show it's fixed-point?
|
[
"You can try my fixed point class (Latest available @ https://github.com/eteran/cpp-utilities)\n// From: https://github.com/eteran/cpp-utilities/edit/master/Fixed.h\n// See also: http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math\n/*\n * The MIT License (MIT)\n * \n * Copyright (c) 2015 Evan Teran\n * \n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n * \n * The above copyright notice and this permission notice shall be included in all\n * copies or substantial portions of the Software.\n * \n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n * SOFTWARE.\n */\n\n#ifndef FIXED_H_\n#define FIXED_H_\n\n#include <ostream>\n#include <exception>\n#include <cstddef> // for size_t\n#include <cstdint>\n#include <type_traits>\n\n#include <boost/operators.hpp>\n\nnamespace numeric {\n\ntemplate <size_t I, size_t F>\nclass Fixed;\n\nnamespace detail {\n\n// helper templates to make magic with types :)\n// these allow us to determine resonable types from\n// a desired size, they also let us infer the next largest type\n// from a type which is nice for the division op\ntemplate <size_t T>\nstruct type_from_size {\n static const bool is_specialized = false;\n typedef void value_type;\n};\n\n#if defined(__GNUC__) && defined(__x86_64__)\ntemplate <>\nstruct type_from_size<128> {\n static const bool is_specialized = true;\n static const size_t size = 128;\n typedef __int128 value_type;\n typedef unsigned __int128 unsigned_type;\n typedef __int128 signed_type;\n typedef type_from_size<256> next_size;\n};\n#endif\n\ntemplate <>\nstruct type_from_size<64> {\n static const bool is_specialized = true;\n static const size_t size = 64;\n typedef int64_t value_type;\n typedef uint64_t unsigned_type;\n typedef int64_t signed_type;\n typedef type_from_size<128> next_size;\n};\n\ntemplate <>\nstruct type_from_size<32> {\n static const bool is_specialized = true;\n static const size_t size = 32;\n typedef int32_t value_type;\n typedef uint32_t unsigned_type;\n typedef int32_t signed_type;\n typedef type_from_size<64> next_size;\n};\n\ntemplate <>\nstruct type_from_size<16> {\n static const bool is_specialized = true;\n static const size_t size = 16;\n typedef int16_t value_type;\n typedef uint16_t unsigned_type;\n typedef int16_t signed_type;\n typedef type_from_size<32> next_size;\n};\n\ntemplate <>\nstruct type_from_size<8> {\n static const bool is_specialized = true;\n static const size_t size = 8;\n typedef int8_t value_type;\n typedef uint8_t unsigned_type;\n typedef int8_t signed_type;\n typedef type_from_size<16> next_size;\n};\n\n// this is to assist in adding support for non-native base\n// types (for adding big-int support), this should be fine\n// unless 
your bit-int class doesn't nicely support casting\ntemplate <class B, class N>\nB next_to_base(const N& rhs) {\n return static_cast<B>(rhs);\n}\n\nstruct divide_by_zero : std::exception {\n};\n\ntemplate <size_t I, size_t F>\nFixed<I,F> divide(const Fixed<I,F> &numerator, const Fixed<I,F> &denominator, Fixed<I,F> &remainder, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {\n\n typedef typename Fixed<I,F>::next_type next_type;\n typedef typename Fixed<I,F>::base_type base_type;\n static const size_t fractional_bits = Fixed<I,F>::fractional_bits;\n\n next_type t(numerator.to_raw());\n t <<= fractional_bits;\n\n Fixed<I,F> quotient;\n\n quotient = Fixed<I,F>::from_base(next_to_base<base_type>(t / denominator.to_raw()));\n remainder = Fixed<I,F>::from_base(next_to_base<base_type>(t % denominator.to_raw()));\n\n return quotient;\n}\n\ntemplate <size_t I, size_t F>\nFixed<I,F> divide(Fixed<I,F> numerator, Fixed<I,F> denominator, Fixed<I,F> &remainder, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {\n\n // NOTE(eteran): division is broken for large types :-(\n // especially when dealing with negative quantities\n\n typedef typename Fixed<I,F>::base_type base_type;\n typedef typename Fixed<I,F>::unsigned_type unsigned_type;\n\n static const int bits = Fixed<I,F>::total_bits;\n\n if(denominator == 0) {\n throw divide_by_zero();\n } else {\n\n int sign = 0;\n\n Fixed<I,F> quotient;\n\n if(numerator < 0) {\n sign ^= 1;\n numerator = -numerator;\n }\n\n if(denominator < 0) {\n sign ^= 1;\n denominator = -denominator;\n }\n\n base_type n = numerator.to_raw();\n base_type d = denominator.to_raw();\n base_type x = 1;\n base_type answer = 0;\n\n // egyptian division algorithm\n while((n >= d) && (((d >> (bits - 1)) & 1) == 0)) {\n x <<= 1;\n d <<= 1;\n }\n\n while(x != 0) {\n if(n >= d) {\n n -= d;\n answer += x;\n }\n\n x >>= 1;\n d >>= 1;\n }\n\n unsigned_type l1 = n;\n unsigned_type l2 = denominator.to_raw();\n\n // calculate the lower bits (needs to be unsigned)\n // unfortunately for many fractions this overflows the type still :-/\n const unsigned_type lo = (static_cast<unsigned_type>(n) << F) / denominator.to_raw();\n\n quotient = Fixed<I,F>::from_base((answer << F) | lo);\n remainder = n;\n\n if(sign) {\n quotient = -quotient;\n }\n\n return quotient;\n }\n}\n\n// this is the usual implementation of multiplication\ntemplate <size_t I, size_t F>\nvoid multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<type_from_size<I+F>::next_size::is_specialized>::type* = 0) {\n\n typedef typename Fixed<I,F>::next_type next_type;\n typedef typename Fixed<I,F>::base_type base_type;\n\n static const size_t fractional_bits = Fixed<I,F>::fractional_bits;\n\n next_type t(static_cast<next_type>(lhs.to_raw()) * static_cast<next_type>(rhs.to_raw()));\n t >>= fractional_bits;\n result = Fixed<I,F>::from_base(next_to_base<base_type>(t));\n}\n\n// this is the fall back version we use when we don't have a next size\n// it is slightly slower, but is more robust since it doesn't\n// require and upgraded type\ntemplate <size_t I, size_t F>\nvoid multiply(const Fixed<I,F> &lhs, const Fixed<I,F> &rhs, Fixed<I,F> &result, typename std::enable_if<!type_from_size<I+F>::next_size::is_specialized>::type* = 0) {\n\n typedef typename Fixed<I,F>::base_type base_type;\n\n static const size_t fractional_bits = Fixed<I,F>::fractional_bits;\n static const size_t integer_mask = Fixed<I,F>::integer_mask;\n static const 
size_t fractional_mask = Fixed<I,F>::fractional_mask;\n\n // more costly but doesn't need a larger type\n const base_type a_hi = (lhs.to_raw() & integer_mask) >> fractional_bits;\n const base_type b_hi = (rhs.to_raw() & integer_mask) >> fractional_bits;\n const base_type a_lo = (lhs.to_raw() & fractional_mask);\n const base_type b_lo = (rhs.to_raw() & fractional_mask);\n\n const base_type x1 = a_hi * b_hi;\n const base_type x2 = a_hi * b_lo;\n const base_type x3 = a_lo * b_hi;\n const base_type x4 = a_lo * b_lo;\n\n result = Fixed<I,F>::from_base((x1 << fractional_bits) + (x3 + x2) + (x4 >> fractional_bits));\n\n}\n}\n\n/*\n * inheriting from boost::operators enables us to be a drop in replacement for base types\n * without having to specify all the different versions of operators manually\n */\ntemplate <size_t I, size_t F>\nclass Fixed : boost::operators<Fixed<I,F>> {\n static_assert(detail::type_from_size<I + F>::is_specialized, \"invalid combination of sizes\");\n\npublic:\n static const size_t fractional_bits = F;\n static const size_t integer_bits = I;\n static const size_t total_bits = I + F;\n\n typedef detail::type_from_size<total_bits> base_type_info;\n\n typedef typename base_type_info::value_type base_type;\n typedef typename base_type_info::next_size::value_type next_type;\n typedef typename base_type_info::unsigned_type unsigned_type;\n\npublic:\n static const size_t base_size = base_type_info::size;\n static const base_type fractional_mask = ~((~base_type(0)) << fractional_bits);\n static const base_type integer_mask = ~fractional_mask;\n\npublic:\n static const base_type one = base_type(1) << fractional_bits;\n\npublic: // constructors\n Fixed() : data_(0) {\n }\n\n Fixed(long n) : data_(base_type(n) << fractional_bits) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(unsigned long n) : data_(base_type(n) << fractional_bits) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(int n) : data_(base_type(n) << fractional_bits) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(unsigned int n) : data_(base_type(n) << fractional_bits) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(float n) : data_(static_cast<base_type>(n * one)) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(double n) : data_(static_cast<base_type>(n * one)) {\n // TODO(eteran): assert in range!\n }\n\n Fixed(const Fixed &o) : data_(o.data_) {\n }\n\n Fixed& operator=(const Fixed &o) {\n data_ = o.data_;\n return *this;\n }\n\nprivate:\n // this makes it simpler to create a fixed point object from\n // a native type without scaling\n // use \"Fixed::from_base\" in order to perform this.\n struct NoScale {};\n\n Fixed(base_type n, const NoScale &) : data_(n) {\n }\n\npublic:\n static Fixed from_base(base_type n) {\n return Fixed(n, NoScale());\n }\n\npublic: // comparison operators\n bool operator==(const Fixed &o) const {\n return data_ == o.data_;\n }\n\n bool operator<(const Fixed &o) const {\n return data_ < o.data_;\n }\n\npublic: // unary operators\n bool operator!() const {\n return !data_;\n }\n\n Fixed operator~() const {\n Fixed t(*this);\n t.data_ = ~t.data_;\n return t;\n }\n\n Fixed operator-() const {\n Fixed t(*this);\n t.data_ = -t.data_;\n return t;\n }\n\n Fixed operator+() const {\n return *this;\n }\n\n Fixed& operator++() {\n data_ += one;\n return *this;\n }\n\n Fixed& operator--() {\n data_ -= one;\n return *this;\n }\n\npublic: // basic math operators\n Fixed& operator+=(const Fixed &n) {\n data_ += n.data_;\n return *this;\n }\n\n Fixed& operator-=(const Fixed &n) {\n data_ 
-= n.data_;\n return *this;\n }\n\n Fixed& operator&=(const Fixed &n) {\n data_ &= n.data_;\n return *this;\n }\n\n Fixed& operator|=(const Fixed &n) {\n data_ |= n.data_;\n return *this;\n }\n\n Fixed& operator^=(const Fixed &n) {\n data_ ^= n.data_;\n return *this;\n }\n\n Fixed& operator*=(const Fixed &n) {\n detail::multiply(*this, n, *this);\n return *this;\n }\n\n Fixed& operator/=(const Fixed &n) {\n Fixed temp;\n *this = detail::divide(*this, n, temp);\n return *this;\n }\n\n Fixed& operator>>=(const Fixed &n) {\n data_ >>= n.to_int();\n return *this;\n }\n\n Fixed& operator<<=(const Fixed &n) {\n data_ <<= n.to_int();\n return *this;\n }\n\npublic: // conversion to basic types\n int to_int() const {\n return (data_ & integer_mask) >> fractional_bits;\n }\n\n unsigned int to_uint() const {\n return (data_ & integer_mask) >> fractional_bits;\n }\n\n float to_float() const {\n return static_cast<float>(data_) / Fixed::one;\n }\n\n double to_double() const {\n return static_cast<double>(data_) / Fixed::one;\n }\n\n base_type to_raw() const {\n return data_;\n }\n\npublic:\n void swap(Fixed &rhs) {\n using std::swap;\n swap(data_, rhs.data_);\n }\n\npublic:\n base_type data_;\n};\n\n// if we have the same fractional portion, but differing integer portions, we trivially upgrade the smaller type\ntemplate <size_t I1, size_t I2, size_t F>\ntypename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator+(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {\n\n typedef typename std::conditional<\n I1 >= I2,\n Fixed<I1,F>,\n Fixed<I2,F>\n >::type T;\n\n const T l = T::from_base(lhs.to_raw());\n const T r = T::from_base(rhs.to_raw());\n return l + r;\n}\n\ntemplate <size_t I1, size_t I2, size_t F>\ntypename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator-(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {\n\n typedef typename std::conditional<\n I1 >= I2,\n Fixed<I1,F>,\n Fixed<I2,F>\n >::type T;\n\n const T l = T::from_base(lhs.to_raw());\n const T r = T::from_base(rhs.to_raw());\n return l - r;\n}\n\ntemplate <size_t I1, size_t I2, size_t F>\ntypename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator*(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {\n\n typedef typename std::conditional<\n I1 >= I2,\n Fixed<I1,F>,\n Fixed<I2,F>\n >::type T;\n\n const T l = T::from_base(lhs.to_raw());\n const T r = T::from_base(rhs.to_raw());\n return l * r;\n}\n\ntemplate <size_t I1, size_t I2, size_t F>\ntypename std::conditional<I1 >= I2, Fixed<I1,F>, Fixed<I2,F>>::type operator/(const Fixed<I1,F> &lhs, const Fixed<I2,F> &rhs) {\n\n typedef typename std::conditional<\n I1 >= I2,\n Fixed<I1,F>,\n Fixed<I2,F>\n >::type T;\n\n const T l = T::from_base(lhs.to_raw());\n const T r = T::from_base(rhs.to_raw());\n return l / r;\n}\n\ntemplate <size_t I, size_t F>\nstd::ostream &operator<<(std::ostream &os, const Fixed<I,F> &f) {\n os << f.to_double();\n return os;\n}\n\ntemplate <size_t I, size_t F>\nconst size_t Fixed<I,F>::fractional_bits;\n\ntemplate <size_t I, size_t F>\nconst size_t Fixed<I,F>::integer_bits;\n\ntemplate <size_t I, size_t F>\nconst size_t Fixed<I,F>::total_bits;\n\n}\n\n#endif\n\nIt is designed to be a near drop in replacement for floats/doubles and has a choose-able precision. 
It does make use of boost to add all the necessary math operator overloads, so you will need that as well (I believe for this it is just a header dependency, not a library dependency).\nBTW, common usage could be something like this:\nusing namespace numeric;\ntypedef Fixed<16, 16> fixed;\nfixed f;\n\nThe only real rule is that the number have to add up to a native size of your system such as 8, 16, 32, 64.\n",
"In modern C++ implementations, there will be no performance penalty for using simple and lean abstractions, such as concrete classes. Fixed-point computation is precisely the place where using a properly engineered class will save you from lots of bugs.\nTherefore, you should write a FixedPoint8 class. Test and debug it thoroughly. If you have to convince yourself of its performance as compared to using plain integers, measure it.\nIt will save you from many a trouble by moving the complexity of fixed-point calculation to a single place.\nIf you like, you can further increase the utility of your class by making it a template and replacing the old FixedPoint8 with, say, typedef FixedPoint<short, 8> FixedPoint8; But on your target architecture this is not probably necessary, so avoid the complexity of templates at first.\nThere is probably a good fixed point class somewhere in the internet - I'd start looking from the Boost libraries.\n",
"Does your floating point code actually make use of the decimal point? If so:\nFirst you have to read Randy Yates's paper on Intro to Fixed Point Math:\nhttp://www.digitalsignallabs.com/fp.pdf\nThen you need to do \"profiling\" on your floating point code to figure out the appropriate range of fixed-point values required at \"critical\" points in your code, e.g. U(5,3) = 5 bits to the left, 3 bits to the right, unsigned.\nAt this point, you can apply the arithmetic rules in the paper mentioned above; the rules specify how to interpret the bits which result from arithmetic operations. You can write macros or functions to perform the operations.\nIt's handy to keep the floating point version around, in order to compare the floating point vs fixed point results.\n",
"I wouldn't use floating point at all on a CPU without special hardware for handling it. My advice is to treat ALL numbers as integers scaled to a specific factor. For example, all monetary values are in cents as integers rather than dollars as floats. For example, 0.72 is represented as the integer 72.\nAddition and subtraction are then a very simple integer operation such as (0.72 + 1 becomes 72 + 100 becomes 172 becomes 1.72).\nMultiplication is slightly more complex as it needs an integer multiply followed by a scale back such as (0.72 * 2 becomes 72 * 200 becomes 14400 becomes 144 (scaleback) becomes 1.44).\nThat may require special functions for performing more complex math (sine, cosine, etc) but even those can be sped up by using lookup tables. Example: since you're using fixed-2 representation, there's only 100 values in the range (0.0,1] (0-99) and sin/cos repeat outside this range so you only need a 100-integer lookup table.\nCheers,\nPax.\n",
"When I first encountered fixed point numbers I found Joe Lemieux's article, Fixed-point Math in C, very helpful, and it does suggest one way of representing fixed-point values.\nI didn't wind up using his union representation for fixed-point numbers though. I mostly have experience with fixed-point in C, so I haven't had the option to use a class either. For the most part though, I think that defining your number of fraction bits in a macro and using descriptive variable names makes this fairly easy to work with. Also, I've found that it is best to have macros or functions for multiplication and especially division, or you quickly get unreadable code.\nFor example, with 24.8 values:\n #include \"stdio.h\"\n\n/* Declarations for fixed point stuff */\n\ntypedef int int_fixed;\n\n#define FRACT_BITS 8\n#define FIXED_POINT_ONE (1 << FRACT_BITS)\n#define MAKE_INT_FIXED(x) ((x) << FRACT_BITS)\n#define MAKE_FLOAT_FIXED(x) ((int_fixed)((x) * FIXED_POINT_ONE))\n#define MAKE_FIXED_INT(x) ((x) >> FRACT_BITS)\n#define MAKE_FIXED_FLOAT(x) (((float)(x)) / FIXED_POINT_ONE)\n\n#define FIXED_MULT(x, y) ((x)*(y) >> FRACT_BITS)\n#define FIXED_DIV(x, y) (((x)<<FRACT_BITS) / (y))\n\n/* tests */\nint main()\n{\n int_fixed fixed_x = MAKE_FLOAT_FIXED( 4.5f );\n int_fixed fixed_y = MAKE_INT_FIXED( 2 );\n\n int_fixed fixed_result = FIXED_MULT( fixed_x, fixed_y );\n printf( \"%.1f\\n\", MAKE_FIXED_FLOAT( fixed_result ) );\n\n fixed_result = FIXED_DIV( fixed_result, fixed_y );\n printf( \"%.1f\\n\", MAKE_FIXED_FLOAT( fixed_result ) );\n\n return 0;\n}\n\nWhich writes out \n\n9.0\n4.5\n\nNote that there are all kinds of integer overflow issues with those macros, I just wanted to keep the macros simple. This is just a quick and dirty example of how I've done this in C. In C++ you could make something a lot cleaner using operator overloading. Actually, you could easily make that C code a lot prettier too...\nI guess this is a long-winded way of saying: I think it's OK to use a typedef and macro approach. So long as you're clear about what variables contain fixed point values it isn't too hard to maintain, but it probably won't be as pretty as a C++ class. \nIf I was in your position, I would try to get some profiling numbers to show where the bottlenecks are. If there are relatively few of them then go with a typedef and macros. If you decide that you need a global replacement of all floats with fixed-point math though, then you'll probably be better off with a class.\n",
"Changing fixed point representations is commonly called 'scaling'.\nIf you can do this with a class with no performance penalty, then that's the way to go. It depends heavily on the compiler and how it inlines. If there is a performance penalty using classes, then you need a more traditional C-style approach. The OOP approach will give you compiler-enforced type safety which the traditional implementation only approximates.\n@cibyr has a good OOP implementation. Now for the more traditional one.\nTo keep track of which variables are scaled, you need to use a consistent convention. Make a notation at the end of each variable name to indicate whether the value is scaled or not, and write macros SCALE() and UNSCALE() that expand to x>>8 and x<<8. \n#define SCALE(x) (x>>8)\n#define UNSCALE(x) (x<<8)\n\nxPositionUnscaled = UNSCALE(10);\nxPositionScaled = SCALE(xPositionUnscaled);\n\nIt may seem like extra work to use so much notation, but notice how you can tell at a glance that any line is correct without looking at other lines. For example:\nxPositionScaled = SCALE(xPositionScaled);\n\nis obviously wrong, by inspection.\nThis is a variation of the Apps Hungarian idea that Joel mentions in this post.\n",
"The original version of Tricks of the Game Programming Gurus has an entire chapter on implementing fixed-point math.\n",
"template <int precision = 8> class FixedPoint {\nprivate:\n int val_;\npublic:\n inline FixedPoint(int val) : val_ (val << precision) {};\n inline operator int() { return val_ >> precision; }\n // Other operators...\n};\n\n",
"Whichever way you decide to go (I'd lean toward a typedef and some CPP macros for converting), you will need to be careful to convert back and forth with some discipline.\nYou might find that you never need to convert back and forth. Just imagine everything in the whole system is x256.\n"
] |
[
54,
35,
10,
8,
7,
6,
5,
2,
0
] |
[] |
[] |
[
"c++",
"fixed_point"
] |
stackoverflow_0000079677_c++_fixed_point.txt
|
Q:
Issues with DB after publishing via Database Publishing Wizard from MSFT
I work on quite a few DotNetNuke sites, and occasionally (I haven't figured out the common factor yet), when I use the Database Publishing Wizard from Microsoft to create scripts for the site I've created on my dev server, after running the scripts at the host (usually GoDaddy.com) and uploading the site files, I get an error... I'm 99.9% sure that it's not file related, so I'm not sure where to begin in the DB. Unfortunately, with DotNetNuke you don't get the YSOD, but a generic error, with no real way to find the actual exception that has occurred.
I'm just curious if anyone has had similar deployment issues using the Database Publishing Wizard, and if so, how they overcame them? I own the RedGate toolset, but some hosts like GoDaddy don't allow you to direct connect to their servers...
A:
The Database Publishing Wizard's generated scripts usually need to be tweaked, since it sometimes gets the creation order of tables and procedures wrong when constraints are involved. What I do is first back up the database, then run the script; if I get an error, I move the failing query to the end of the script. Keep restoring the database and re-running the script until it works.
A:
There are two areas that I would look at -
Are you running in the dbo schema and was your scripted database
using dbo?
Are you using an objectqualifier in either your dev or your
production environment? (look at your sqldataprovider configuration
settings)
A:
You should be able to expose the underlying error message by setting the following in the web.config:
<customErrors mode="Off" />
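For context, a minimal sketch of where that element sits in web.config (standard ASP.NET configuration; the surrounding structure is abbreviated):

<configuration>
  <system.web>
    <!-- "Off" sends full error details to all clients; use only while diagnosing -->
    <customErrors mode="Off" />
  </system.web>
</configuration>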
Could you elaborate on "and uploading the site files"? New instance of DNN? updating an existing site? upgrading DNN version? If upgrade or update -- what files are you adding/overwriting?
Also, when using GoDaddy, can you check to verify that the web site's identity (network service or asp.net machine account depending on your IIS version) has sufficient permissions to the website's file system? It should have modify permissions and these may need to be reapplied if you are overwriting files.
IIS6 (XP, Server 2000, 2003) = ASP.Net Machine Account
IIS7 (Vista, Server 2008) = Network Service
A:
Test your generated scripts on a new local database (using the free SQL Express product or the full meal deal). If it runs fine locally, then you can be confident that it will run elsewhere, all things being equal.
If it bombs when you run it locally, use the process of elimination and work your way through the script execution to find the offending code.
My hunch is that the order of scripts could be off. I think I've had that happen before with the database publishing wizard.
A:
Just read your follow up. In every case that I've had your problem, it was always something to do with the connection string in web.config. Even after hours of staring at it, it was always a connection string issue in web.config. Get up, take a walk and then come back.
A:
If you are getting one of DNN's error pages, there is a chance it may have logged the error to the eventlog table.
A:
Depending on exactly what is happening and what DNN is showing you you might be able to manually look inside the EventLog table, pull out the XML data stored there, and parse it to find the stack trace and detailed information regarding the specific error at hand.
I have found, however, that I get MUCH better overall experiences deploying with backups and restores of my database; that way I am 100% sure that all objects moved correctly, and honestly it works better in my experience.
With GoDaddy I know another MAJOR common issue is incorrect file permissions, preventing DNN from modifying the web.config and other files that it needs to do.
|
Issues with DB after publishing via Database Publishing Wizard from MSFT
|
I work on quite a few DotNetNuke sites, and occasionally (I haven't figured out the common factor yet), when I use the Database Publishing Wizard from Microsoft to create scripts for the site I've created on my Dev server, after running the scripts at the host (usually GoDaddy.com) and uploading the site files, I get an error... I'm 99.9% sure that it's not file related, so I'm not sure where to begin in the DB. Unfortunately with DotNetNuke you don't get the YSOD, but a generic error, with no real way to find the actual exception that has occurred.
I'm just curious if anyone has had similar deployment issues using the Database Publishing Wizard, and if so, how they overcame them? I own the RedGate toolset, but some hosts like GoDaddy don't allow you to direct connect to their servers...
|
[
"The Database Publishing Wizard's generated scripts usually need to be tweaked since it sometimes gets the order wrong of table/procedure creation when dealing with constraints. What I do is first backup the database, then run the script, and if I get an error, I move that query to the end of the script. Continue restoring the database and running the script until it works.\n",
"There are two areas that I would look at - \n\nAre you running in the dbo schema and was your scripted database\nusing dbo?\nAre you using an objectqualifier in either your dev or your\nproduction environment? (look at your sqldataprovider configuration\nsettings)\n\n",
"You should be able to expose the underlying error message by setting the following in the web.config:\ncustomErrors mode=\"Off\"\n\nCould you elaborate on \"and uploading the site files\"? New instance of DNN? updating an existing site? upgrading DNN version? If upgrade or update -- what files are you adding/overwriting?\nAlso, when using GoDaddy, can you check to verify that the web site's identity (network service or asp.net machine account depending on your IIS version) has sufficient permissions to the website's file system? It should have modify permissions and these may need to be reapplied if you are overwriting files.\n\nIIS6 (XP, Server 2000, 2003) = ASP.Net Machine Account\nIIS7 (Vista, Server 2008) = Network Service\n\n",
"Test your generated scripts on a new local database (using the free SQL Express product or the full meal deal). If it runs fine locally, then you can be confident that it will run elsewhere, all things being equal.\nIf it bombs when you run it locally, use the process of elimination and work your way through the script execution to find the offending code.\nMy hunch is that the order of scripts could be off. I think I've had that happen before with the database publishing wizard.\n",
"Just read your follow up. In every case that I've had your problem, it was always something to do with the connection string in web.config. Even after hours of staring at it, it was always a connection string issue in web.config. Get up, take a walk and then come back.\n",
"If you are getting one of DNN's error pages, there is a chance it may have logged the error to the eventlog table.\n",
"Depending on exactly what is happening and what DNN is showing you you might be able to manually look inside the EventLog table, pull out the XML data stored there, and parse it to find the stack trace and detailed information regarding the specific error at hand.\nI have found however though that I get MUCH better overall experiences with deployments using backups and restores of my database, that way I am 100% sure that all objects moved correctly, and honestly it works better in my experience.\nWith GoDaddy I know another MAJOR common issue is incorrect file permissions, preventing DNN from modifying the web.config and other files that it needs to do.\n"
] |
[
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"deployment",
"dotnetnuke",
"sql"
] |
stackoverflow_0000014828_deployment_dotnetnuke_sql.txt
|
Q:
How to track changes to business objects?
I get the concept of creating a business object or entity to represent something like a Person. I can then serialize the Person using a DTO and send it down to the client. If the client changes the object, it can have an IsDirty flag on there so when it gets sent back to the server I know to update it.
But what if I have an Order object? This has the main header information, customer, supplier, required date, etc. Then it has OrderItems, which is a List<OrderItem> of the items to be ordered. I want to be able to use this business object on my UI. So I have some textboxes hooked up to the location, supplier, required date, etc., and a grid hooked up to OrderItems. Since OrderItems is a List I can easily add and delete records to it. But how do I track this, especially the deleted items? I don't want the deleted items to be visible in my grid, and I shouldn't be able to iterate over them with foreach, because they have been deleted. But I still need to track the fact that there was a deletion. How do I track the changes? I think I need to use a unit of work? But then the code seems to become quite complex. So then I wonder, why not simply use DataTables and get the change tracking for free? But then I read how business objects are the way to go.
I’ve found various examples using a simple Person, but not header-detail examples like Orders.
BTW using C# 3.5 for this.
A:
Firstly, you can use an existing framework that addresses these issues, like CSLA.NET. The author of this framework has tackled these very issues. Go to http://www.rockfordlhotka.net/cslanet/ for this. Even if you don't use the full framework, the concepts are still applicable.
If you wanted to roll your own, what I've done in the past is, instead of using List for my collections, to use a custom type derived from BindingList. Inheriting from BindingList allows you to override the behaviour of add/remove item. So you can, for example, have another internal collection of "deleted" items. Every time the overridden Remove method is called on your collection, put the item into the "deleted" collection, and then call the base implementation of the Remove method. You can do the same for added items or changed items.
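As a rough illustration of that idea, here is a minimal C# sketch of a deletion-tracking collection (the class name is made up; add/change tracking and ClearItems handling are left out for brevity):

using System.Collections.Generic;
using System.ComponentModel;

public class TrackingBindingList<T> : BindingList<T>
{
    private readonly List<T> deleted = new List<T>();

    // Items removed from the visible collection, kept so they can be deleted on save
    public IList<T> DeletedItems
    {
        get { return deleted; }
    }

    protected override void RemoveItem(int index)
    {
        deleted.Add(this[index]); // remember the item before it disappears
        base.RemoveItem(index);   // then let BindingList remove it as usual
    }
}

A grid bound to a TrackingBindingList<OrderItem> would then only ever show (and enumerate) live items, while DeletedItems drives the deletes when the order is persisted.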
A:
You're spot on about needing a unit of work, but don't write one. Use NHibernate or some other ORM. That is what they're made for. They have Unit of Works built in.
Business objects are indeed "the way to go" for most applications. You're diving into a deep area and there will be much learning to do. Look into DDD.
I'd also strongly advise against code like that in your code-behind. Look into the MVP pattern.
I'd also (while I was bothering to learn lots of new, highly critical things) look into SOLID.
You may want to check out JP Boodhoo's nothing but .net course as it covers a lot of these things.
|
How to track changes to business objects?
|
I get the concept of creating a business object or entity to represent something like a Person. I can then serialize the Person using a DTO and send it down to the client. If the client changes the object, it can have an IsDirty flag on there so when it gets sent back to the server I know to update it.
But what if I have an Order object? This has the main header information, customer, supplier, required date, etc. Then it has OrderItems, which is a List<OrderItem> of the items to be ordered. I want to be able to use this business object on my UI. So I have some textboxes hooked up to the location, supplier, required date, etc., and a grid hooked up to OrderItems. Since OrderItems is a List I can easily add and delete records to it. But how do I track this, especially the deleted items? I don't want the deleted items to be visible in my grid, and I shouldn't be able to iterate over them with foreach, because they have been deleted. But I still need to track the fact that there was a deletion. How do I track the changes? I think I need to use a unit of work? But then the code seems to become quite complex. So then I wonder, why not simply use DataTables and get the change tracking for free? But then I read how business objects are the way to go.
I’ve found various examples using a simple Person, but not header-detail examples like Orders.
BTW using C# 3.5 for this.
|
[
"Firstly, you can use an existing framework that addresses these issues, like CSLA.NET. The author of this framework has tackled these very issues. Go to http://www.rockfordlhotka.net/cslanet/ for this. Even if you don't use the full framework, the concepts are still applicable.\nIf you wanted to roll your own, what I've done in the past was to instead of using List for my collections, I've used a custom type derived from BindingList. Inhereting from BindingList allows you to override the behaviour of add/remove item. So you can for example have another internal collection of \"delteted\" items. Every time the overriden Remove method is called on your collection, put the item into the \"deleted\" collection, and then call the base implementation of the Remove method. You can do the same for added items or changed items.\n",
"You're spot on about needing a unit of work, but don't write one. Use NHibernate or some other ORM. That is what they're made for. They have Unit of Works built in. \nBusiness objects are indeed \"the way to go\" for most applications. You're diving into a deep area and there will be much learning to do. Look into DDD.\nI'd also strongly advise against code like that in your code-behind. Look into the MVP pattern.\nI'd also (while I was bothering to learn lots of new, highly critical things) look into SOLID.\nYou may want to check out JP Boodhoo's nothing but .net course as it covers a lot of these things.\n"
] |
[
6,
2
] |
[
"The data objects don't track changes. The change tracking occurs on the DataContext and objects that you've retrieved through the DataContext. So in order to track changes you need to do the following:\npublic class FooDataContext : DataContext\n{\n public Table<Order> Orders; \n}\n\npublic class Order\n{\n [DbColumn(Identity = true)]\n [Column(DbType = \"Int NOT NULL IDENTITY\", IsPrimaryKey = true, IsDbGenerated = true)]\n public int Id { get; set; }\n\n [DbColumn(Default = \"(getutcdate())\")]\n [Column(DbType = \"DateTime\", CanBeNull = false, IsDbGenerated = true)]\n public DateTime DateCreated { get; set; }\n\n [Column(DbType = \"varchar(50)\", CanBeNull = false, IsDbGenerated = false)]\n public string Name { get; set; }\n}\n\nNow in your codebehind you can do something like:\npublic void UpdateOrder(int id, string name)\n{\n FooDataContext db = new FooDataContext();\n Order order = db.Orders.Where(o=>o.Id == id).FirstOrDefault();\n\n if (order == null) return;\n\n order.Name = name;\n\n db.SubmitChanges();\n}\n\nI wouldn't recommend directly using the data context in the code behind, but this is a good way to get started with Linq To SQL. I would recommend putting all your database interactions in an external project and call from the GUI to the classes that encapsulate this behavior.\nI would recommend creating a Linq To Sql (dbml) file if you're new to Linq To Sql.\nRight click on your project in solution explorer, and select Add New Item. Select Linq To SQL file, and it will then let you connect to your database and select the tables.\nYou can then look at the generated code, and get some great ideas on how Linq To Sql works and what you can do with it.\nUse that as a guideline on working with Linq to SQL and that will take you far...\n"
] |
[
-1
] |
[
"business_objects",
"c#"
] |
stackoverflow_0000080182_business_objects_c#.txt
|
Q:
Best Mocking Library
Which is the best mocking library for C# 3.0/ ASP.NET MVC? Why?
A:
Moq
It's amazing, fully supports the new language features of C# 3.0 and it's very easy to get going with. I would highly recommend it.
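For a flavour of what that looks like, here is a minimal sketch using Moq's lambda-based API (method names per later Moq releases - early versions used Expect instead of Setup - and IPriceService is a hypothetical interface):

public interface IPriceService
{
    decimal GetPrice(string sku);
}

// ...

var mock = new Moq.Mock<IPriceService>();
mock.Setup(s => s.GetPrice("widget")).Returns(9.99m);

decimal price = mock.Object.GetPrice("widget"); // 9.99m, no real service needed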
A:
Very subjective question. What do you mean by "best"? Maybe you should provide some more context on your situation.
RhinoMocks is one of the most popular, as to whether it's the best for you, who knows?
A:
I'm going through that process now, weighing them up for use by my team, and I have to say Moq as well. It seems to have the smallest learning curve and some nice features; I love the use of Moq generics to specify a mock class.
A:
there's also this other post about this topic:
Best mock framework that can do both WebForms and MVC?
|
Best Mocking Library
|
Which is the best mocking library for C# 3.0/ ASP.NET MVC? Why?
|
[
"Moq\nIt's amazing, fully supports the new language features of C# 3.0 and it's very easy to get going with. I would highly recommend it.\n",
"Very subjective question. What do you mean by \"best\"? Maybe you should provide some more context on your situation.\nRhinoMocks is one of the most popular, as to whether it's the best for you, who knows?\n",
"I'm going through that process now, weighing them up for use by my team, and I have to say Moq as well, it seems to have the least of a learning curve, and some nice features, I love the use of Moq generics to specify a Mock class\n",
"there's also this other post about this topic:\nBest mock framework that can do both WebForms and MVC?\n"
] |
[
21,
8,
0,
0
] |
[] |
[] |
[
"asp.net_mvc",
"c#",
"mocking"
] |
stackoverflow_0000080067_asp.net_mvc_c#_mocking.txt
|
Q:
How do I preview a url using ajax?
How do I preview a url using ajax? I have seen this done with search engine plug ins and would like to learn how to do this. Specifically, I would like to be able to mouse over a link and see the preview of the webpage using ajax.
A:
There's the easy solution, the hard solution, and the use-a-library solution.
use-a-library : I prefer always doing the use-a-library solution unless you have a darn good reason otherwise. One possible site which wraps the "hard solution" as a service for you: http://thumbnails.iwebtool.com/demo/
easy: The easy solution is to just load the target webpage as a downscaled AJAXy window. You can use many of the Lightbox-class plugins for this task, particularly the ones which allow you to target arbitrary HTTP content for the Lightbox window. GreyBox is my favorite of those which I have used before. Lightbox Gone Wild is also nice.
hard: Then there is the hard solution: you need to render the web page server side, cache the rendering as an image, and then serve up that image using Lightbox-esque Javascript (which is trivial next to the other requirements). How you would go about doing this is outside the scope of this box. Why would you do it this way? The preview generates MUCH faster for the client, and it hermetically seals the client's session away from things which might bust it in the target website -- poorly behaving Javascript and/or malware can cause Really Bad Things when you open them, even in an AJAXy window-within-a-window.
A:
I think I know what he's driving at. What happens is that he wants a window to appear on hover over a hyperlink (javascript), and for that window to display a snapshot image of the website being referenced by the hyperlink.
The ajax part connects to the server where you are hosting your site, asynchronously, and hits a page that goes and fetches an image of the site to display in a img tag.
Now, how does one generate the image of the site? I would suggest that this is done in advance (for example as the content is being created) and that already-generated image is recalled.
How to generate the images to begin with? I think that would be another question: "How to generate snapshot images of websites?"
|
How do I preview a url using ajax?
|
How do I preview a url using ajax? I have seen this done with search engine plug ins and would like to learn how to do this. Specifically, I would like to be able to mouse over a link and see the preview of the webpage using ajax.
|
[
"There's the easy solution, the hard solution, and the use-a-library solution.\nuse-a-library : I prefer always doing the use-a-library solution unless you have a darn good reason otherwise. One possible site which wraps the \"hard solution\" as a service for you: http://thumbnails.iwebtool.com/demo/\neasy: The easy solution is to just load the target webpage as a downscaled AJAXy window. You can use many of the Lightbox-class plugins for this task, particularly the ones which allow you to target arbitrary HTTP content for the Lightbox window. GreyBox is my favorite of those which I have used before. Lightbox Gone Wild is also nice.\nhard: Then there is the hard solution: you need to render the web page server side, cache the rendering as an image, and then serve up that image using Lightbox-esque Javascript (which is trivial next to the other requirements). How you would go about doing this is outside the scope of this box. Why would you do it this way? The preview generates MUCH faster for the client, and it hermetically seals the client's session away from things which might bust it in the target website -- poorly behaving Javascript and/or malware can cause Really Bad Things when you open them, even in an AJAXy window-within-a-window.\n",
"I think I know what he's driving at. What happens is that he wants a windows to appear on hover over a hyperlink (javascript), and for that windows to display a snapshot image of the website being referenced by the hyperlink.\nThe ajax part connects to the server where you are hosting your site, asynchronously, and hits a page that goes and fetches an image of the site to display in a img tag.\nNow, how does one generate the image of the site? I would suggest that this is done in advance (for example as the content is being created) and that already-generated image is recalled. \nHow to generate the images to begin with? I think that would be another question: \"How to generate snapshot images of websites?\"\n"
] |
[
2,
0
] |
[] |
[] |
[
"ajax",
"asp.net"
] |
stackoverflow_0000080313_ajax_asp.net.txt
|
Q:
Reparenting a Window as a Tab in a GTK Notebook
I'm using Mono with GTK# and am trying to display an existing window as a new tab in a GTK.Notebook. I'm currently re-parenting the widget to the notebook as follows:
MyWindow myWindow = new MyWindow();
myWindow.Children[0].Reparent(myNotebook)
Should I be doing this, or is there a better way to re-use an existing window so that you can display it on a tab?
A:
Your way is the best way, there's no way to embed windows into tabs without using horrible hacks like GtkPlug (which I'd guess you'd be uninterested in if you're using .NET). Look at the code to gnome-terminal for an example of how to do this.
|
Reparenting a Window as a Tab in a GTK Notebook
|
I'm using Mono with GTK# and am trying to display an existing window as a new tab in a GTK.Notebook. I'm currently re-parenting the widget to the notebook as follows:
MyWindow myWindow = new MyWindow();
myWindow.Children[0].Reparent(myNotebook)
Should I be doing this, or is there a better way to re-use an existing window so that you can display it on a tab?
|
[
"Your way is the best way, there's no way to embed windows into tabs without using horrible hacks like GtkPlug (which I'd guess you'd be uninterested in if you're using .NET). Look at the code to gnome-terminal for an example of how to do this.\n"
] |
[
3
] |
[] |
[] |
[
"gtk",
"gtk#"
] |
stackoverflow_0000080370_gtk_gtk#.txt
|
Q:
How to insert a text-like element into document using javascript and CSS?
I want to use javascript to insert some elements into the current page.
Such as this is the original document:
<p>Hello world!</p>
Now I want to insert an element in to the text so that it will become:
<p>Hello <span id=span1>new</span> world!</p>
I need the span tag because I want to handle it later: show or hide it.
But now problem comes out, if the original page has already defined a strange CSS style on all <span> tags, the "new" I just inserted will not appear to be the same as "Hello" and "world". How can I avoid this? I want the "new" be exactly the same as the "Hello" and "world".
A:
Well, I don't know how married you are to using a <span> tag, but why not do this?
<p style="display: inline">Hello <p id="myIdValue" style="display: inline">new</p> World</p>
That way the inserted html retains the same styling as the outer, and you can still have a handle to it, etc. Granted, you will have to add the inline CSS style, but it would work.
A:
The only way to do this is to either modify the other spans to include a class name and only apply the styles to spans with that class, or override the styles set for all spans for your new span.
So if you've done:
span {
display: block;
margin: 10px;
padding: 10px;
}
You could override with:
<span style="display: inline; margin: 0; padding: 0;">New Span</span>
A:
Simply override any span styles. Set layout properties back to browser defaults and set formatting to inherit from the parent:
span#yourSpan {
/* defaults */
position: static;
display: inline;
margin: 0;
padding: 0;
background: transparent;
border: none;
/* inherit from parent node */
font: inherit;
color: inherit;
text-decoration: inherit;
line-height: inherit;
letter-spacing: inherit;
text-transform: inherit;
white-space: inherit;
word-spacing: inherit;
}
This should be sufficient, although you may need to add !important if you are not using an id:
<span class="hello-node">hello</span>
span.hello-node {
/* defaults */
position: static !important;
display: inline !important;
...
}
A:
Include the class definition that's defined in CSS on your JavaScript version of the <span> tag as well.
<span class="class_defined_in_css">
(where this <span> tag would be part of your JavaScript code.)
A:
Why not give the paragraph an id and then use Javascript to add the word, or remove it, if necessary? Surely it will retain the same formatting as the paragraph when you insert the word "new", or change the contents of the paragraph entirely.
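For the insertion itself, a minimal DOM sketch (the 'greeting' id is made up for illustration; the styling concerns discussed above still apply):

// <p id="greeting">Hello world!</p>
var p = document.getElementById('greeting');
var text = p.firstChild;      // the "Hello world!" text node
var rest = text.splitText(6); // text is now "Hello ", rest is "world!"

var span = document.createElement('span');
span.id = 'span1';
span.appendChild(document.createTextNode('new '));
p.insertBefore(span, rest);   // <p>Hello <span id="span1">new </span>world!</p>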
|
How to insert a text-like element into document using javascript and CSS?
|
I want to use javascript to insert some elements into the current page.
Such as this is the original document:
<p>Hello world!</p>
Now I want to insert an element in to the text so that it will become:
<p>Hello <span id=span1>new</span> world!</p>
I need the span tag because I want to handle it later: show or hide it.
But now problem comes out, if the original page has already defined a strange CSS style on all <span> tags, the "new" I just inserted will not appear to be the same as "Hello" and "world". How can I avoid this? I want the "new" be exactly the same as the "Hello" and "world".
|
[
"Well, I don't know how married you are to using a <span> tag, but why not do this?\n<p style=\"display: inline\">Hello <p id=\"myIdValue\" style=\"display: inline\">new</p> World</p>\n\nThat way the inserted html retains the same styling as the outer, and you can still have a handle to it, etc. Granted, you will have to add the inline CSS style, but it would work.\n",
"The only way to do this is to either modify the other spans to include a class name and only apply the styles to spans with that class, or override the styles set for all spans for your new span.\nSo if you've done:\nspan {\n display: block;\n margin: 10px;\n padding: 10px;\n}\n\nYou could override with:\n<span style=\"display: inline; margin: 0; padding: 0;\">New Span</span>\n\n",
"Simply override any span styles. Set layout properties back to browser defaults and set formating to inherit from the parent:\nspan#yourSpan {\n /* defaults */\n position: static;\n display: inline;\n margin: 0;\n padding: 0;\n background: transparent;\n border: none;\n\n /* inherit from parent node */\n font: inherit;\n color: inherit;\n text-decoration: inherit;\n line-height: inherit;\n letter-spacing: inherit;\n text-transform: inherit;\n white-space: inherit;\n word-spacing: inherit;\n}\n\nThis should be sufficient, although you may need to add !important if you are not using an id:\n<span class=\"hello-node\">hello</span>\n\nspan.hello-node {\n /* defaults */\n position: static !important;\n display: inline !important;\n ...\n}\n\n",
"Include the class definition that's defined in CSS on your JavaScript version of the <span> tag as well.\n<span class=\"class_defined_in_css\">\n\n(where this <span> tag would be part of your JavaScript code.)\n",
"Why not give the paragraph an id and then use Javascript to add the word, or remove it, if necessary? Surely it will retain the same formatting as the paragraph when you insert the word \"new\", or change the contents of the paragraph entirely.\n"
] |
[
1,
1,
1,
0,
0
] |
[] |
[] |
[
"css",
"javascript"
] |
stackoverflow_0000080202_css_javascript.txt
|
Q:
Using X-Sendfile with Apache/PHP
I can't seem to find much documentation on X-Sendfile or example code for PHP (there is some rails code).
Anyone used it before and would mind giving a quick snippet of code and a brief description?
A:
X-Sendfile is an HTTP header, so you want something like this:
header("X-Sendfile: $filename");
Your web server picks it up if correctly configured. Here's some more details:
http://www.jasny.net/articles/how-i-php-x-sendfile/
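On Apache that configuration typically means the third-party mod_xsendfile module; a minimal sketch (directive names per that module's documentation, paths are examples):

# Load the module (path varies by install)
LoadModule xsendfile_module modules/mod_xsendfile.so

# Enable X-Sendfile handling and restrict where files may be served from
XSendFile On
XSendFilePath /var/www/protected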
A:
If tweaking the web server configuration is not an option, consider PHP's standard readfile() function. It won't be quite as fast as sendfiling, but it will be more widely compatible. Also note that when doing this, you should also send a Content-Type header at the very least.
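A rough sketch of that fallback (the path and content type are placeholders):

<?php
$path = '/var/www/protected/report.pdf'; // hypothetical file

header('Content-Type: application/pdf');
header('Content-Length: ' . filesize($path));
header('Content-Disposition: attachment; filename="report.pdf"');

readfile($path); // streams the file through PHP; slower than X-Sendfile but needs no module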
|
Using X-Sendfile with Apache/PHP
|
I can't seem to find much documentation on X-Sendfile or example code for PHP (there is some rails code).
Anyone used it before and would mind giving a quick snippet of code and a brief description?
|
[
"X-Sendfile is an HTTP header, so you want something like this:\nheader(\"X-Sendfile: $filename\");\n\nYour web server picks it up if correctly configured. Here's some more details:\nhttp://www.jasny.net/articles/how-i-php-x-sendfile/\n",
"If tweaking the web server configuration is not an option, consider PHP's standard readfile() function. It won't be quite as fast as sendfiling, but it will be more widely compatible. Also note that when doing this, you should also send a Content-Type header at the very least.\n"
] |
[
30,
3
] |
[] |
[] |
[
"apache",
"php",
"x_sendfile"
] |
stackoverflow_0000080186_apache_php_x_sendfile.txt
|
Q:
How can I Trim the leading comma in my string
I have a string that is like below.
,liger, unicorn, snipe
in other languages I'm familiar with I can just do a string.trim(",") but how can I do that in c#?
Thanks.
There's been a lot of back and forth about the TrimStart function. As several have pointed out, TrimStart doesn't affect the original variable. However, given the construction of the data vs the question, I'm torn as to which answer to accept. True, the question only wants the first character trimmed off, not the last (if any); however, there would never be a "," at the end of the data. So, with that said, I'm going to accept the first answer that said to use TrimStart assigned to a new variable.
A:
string s = ",liger, unicorn, snipe";
s.TrimStart(',');
A:
string sample = ",liger, unicorn, snipe";
sample = sample.TrimStart(','); // to remove just the first comma
Or perhaps:
sample = sample.Trim().TrimStart(','); // to remove any whitespace and then the first comma
A:
.net strings can do Trim() and TrimStart(). Because it takes params, you can write:
",liger, unicorn, snipe".TrimStart(',')
and if you have more than one character to trim, you can write:
",liger, unicorn, snipe".TrimStart(",; ".ToCharArray())
A:
here is an easy way to not produce the leading comma to begin with:
string[] animals = { "liger", "unicorn", "snipe" };
string joined = string.Join(", ", animals);
A:
string.TrimStart(',') will remove the comma; however, you will have trouble with a split operation due to the space after the comma.
Best to join just on the single comma or use
Split(", ".ToCharArray(),StringSplitOptions.RemoveEmptyEntries);
A:
",liger, unicorn, snipe".Trim(',') -> "liger, unicor, snipe"
A:
Try string.Trim(',') and see if that does what you want.
A:
Note, the original string is left untouched, Trim will return you a new string:
string s1 = ",abc,d";
string s2 = s1.TrimStart(",".ToCharArray());
Console.WriteLine("s1 = {0}", s1);
Console.WriteLine("s2 = {0}", s2);
prints:
s1 = ,abc,d
s2 = abc,d
A:
string s = ",liger, unicorn, snipe";
s = s.TrimStart(',');
It's important to assign the result of TrimStart to a variable. As it says on the TrimStart page, "This method does not modify the value of the current instance. Instead, it returns a new string...".
In .NET, strings are immutable.
A:
you can use this
,liger, unicorn, snipe".TrimStart(',');
A:
if (s.StartsWith(",")) {
s = s.Substring(1, s.Length - 1);
}
A:
string t = ",liger, unicorn, snipe".TrimStart(new char[] {','});
A:
The same way as everywhere else: string.trim
A:
string s = ",liger, tiger";
if (s.Substring(0, 1) == ",")
s = s.Substring(1);
A:
Did you mean trim all instances of "," in that string?
In which case, you can do:
s = s.Replace(",", "");
A:
Just use Substring to ignore the first character (or assign it to another string);
string o = ",liger, unicorn, snipe";
string s = o.Substring(1);
A:
See: http://msdn.microsoft.com/en-us/library/d4tt83f9.aspx
string animals = ",liger, unicorn, snipe";
//trimmed will contain "liger, unicorn, snipe"
string trimmed = animals.Trim(',');
|
How can I Trim the leading comma in my string
|
I have a string that is like below.
,liger, unicorn, snipe
in other languages I'm familiar with I can just do a string.trim(",") but how can I do that in c#?
Thanks.
There's been a lot of back and forth about the TrimStart function. As several have pointed out, TrimStart doesn't affect the original variable. However, given the construction of the data vs the question, I'm torn as to which answer to accept. True, the question only wants the first character trimmed off, not the last (if any); however, there would never be a "," at the end of the data. So, with that said, I'm going to accept the first answer that said to use TrimStart assigned to a new variable.
|
[
"string s = \",liger, unicorn, snipe\";\ns.TrimStart(',');\n\n",
"string sample = \",liger, unicorn, snipe\";\nsample = sample.TrimStart(','); // to remove just the first comma\n\nOr perhaps:\nsample = sample.Trim().TrimStart(','); // to remove any whitespace and then the first comma\n\n",
".net strings can do Trim() and TrimStart(). Because it takes params, you can write:\n\",liger, unicorn, snipe\".TrimStart(',')\n\nand if you have more than one character to trim, you can write:\n\",liger, unicorn, snipe\".TrimStart(\",; \".ToCharArray())\n\n",
"here is an easy way to not produce the leading comma to begin with:\nstring[] animals = { \"liger\", \"unicorn\", \"snipe\" };\nstring joined = string.Join(\", \", animals);\n\n",
"string.TrimStart(',') will remove the comma, however you will have trouble with a split operation due to the space after the comma. \nBest to join just on the single comma or use \n\nSplit(\", \".ToCharArray(),StringSplitOptions.RemoveEmptyEntries);\n\n",
"\",liger, unicorn, snipe\".Trim(',') -> \"liger, unicor, snipe\"\n",
"Try string.Trim(',') and see if that does what you want.\n",
"Note, the original string is left untouched, Trim will return you a new string:\nstring s1 = \",abc,d\";\nstring s2 = s1.TrimStart(\",\".ToCharArray());\nConsole.WriteLine(\"s1 = {0}\", s1);\nConsole.WriteLine(\"s2 = {0}\", s2);\n\nprints:\ns1 = ,abc,d\ns2 = abc,d\n\n",
"string s = \",liger, unicorn, snipe\";\ns = s.TrimStart(',');\n\nIt's important to assign the result of TrimStart to a variable. As it says on the TrimStart page, \"This method does not modify the value of the current instance. Instead, it returns a new string...\".\nIn .NET, strings don't change.\n",
"you can use this \n,liger, unicorn, snipe\".TrimStart(',');\n",
"if (s.StartsWith(\",\")) {\n s = s.Substring(1, s.Length - 1);\n}\n\n",
"string t = \",liger, unicorn, snipe\".TrimStart(new char[] {','});\n\n",
"The same way as everywhere else: string.trim\n",
" string s = \",liger, tiger\";\n\n if (s.Substring(0, 1) == \",\")\n s = s.Substring(1);\n\n",
"Did you mean trim all instances of \",\" in that string?\nIn which case, you can do:\ns = s.Replace(\",\", \"\");\n\n",
"Just use Substring to ignore the first character (or assign it to another string);\n string o = \",liger, unicorn, snipe\";\n string s = o.Substring(1);\n\n",
"See: http://msdn.microsoft.com/en-us/library/d4tt83f9.aspx\n string animals = \",liger, unicorn, snipe\";\n\n //trimmed will contain \"liger, unicorn, snipe\"\n string trimmed = word.Trim(',');\n\n"
] |
[
20,
17,
4,
3,
3,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"string",
"trim"
] |
stackoverflow_0000073629_.net_c#_string_trim.txt
|
Q:
How to iterate through a string and check the byte value of every character?
Code I have:
cell_val = CStr(Nz(fld.value, ""))
Dim iter As Long
For iter = 0 To Len(cell_val) - 1 Step 1
If Asc(Mid(cell_val, iter, 1)) > 127 Then
addlog "Export contains ascii character > 127"
End If
Next iter
This code doesn't work. Anyone know how to do this? I've simply got no idea with VB or VBA.
A:
I believe your problem is that in VBA string indexes start at 1 and not at 0. Try the following:
For iter = 1 To Len(cell_val)
If Asc(Mid(cell_val, iter, 1)) > 127 Then
addlog "Export contains ascii character > 127"
End If
Next
A:
With VBA/VB6 you can just declare a byte array and assign a string value to it, and it will be converted for you. Then you can just iterate through it like a regular array.
e.g.
Dim b() as byte
Dim iter As Long
b = CStr(Nz(fld.value, ""))
For iter = 0 To UBound(b)
if b(iter) > 127 then
addlog "Export contains ascii character > 127"
end if
next
A:
Your example should be modified so it does not have external dependencies; it currently depends on Nz and addLog.
Anyway, the problem here seems to be that you are looping from 0 to len()-1. In VBA this would be 1 to n.
Dim cell_val As String
cell_val = "øabcdæøå~!#%&/()"
Dim iter As Long
For iter = 1 To Len(cell_val)
If Asc(Mid(cell_val, iter, 1)) > 127 Then
'addlog "Export contains ascii character > 127"
Debug.Print iter, "Export contains ascii character > 127"
End If
Next iter
A:
Did you debug it? ;) Are you sure cell_val is not empty? Also, you don't need the 'Step 1' in the For loop since it's the default. Also, what do you expect to accomplish with your code? It logs if any ASCII values are above 127? But that's it - there is no branching depending on the result?
A:
Try AscW()
A:
VB/VBA strings are based from one rather than zero so you need to use:
For iter = 1 To Len(cell_val)
I've also left off the step 1 since that's the default.
A:
Did you debug it? ;) Are you sure
cell_val is not empty? Also, you don't
need the 'Step 1' in the For loop
since it's the default. Also, what do you
expect to accomplish with your code? It
logs if any ASCII values are above
127? But that's it - there is no
branching depending on the result?
I didn't debug it, I have no idea how to use vba or any of the tools that go along with it.
Yes I am sure cell_val is not empty.
The code was representative, I was ensuring the branch condition works before writing the branch itself.
I believe your problem is that in VBA string indexes start at 1 and not at 0.
Ah, the exact kind of thing that goes along with vba that I was bound to miss, thank you.
|
How to iterate through a string and check the byte value of every character?
|
Code I have:
cell_val = CStr(Nz(fld.value, ""))
Dim iter As Long
For iter = 0 To Len(cell_val) - 1 Step 1
If Asc(Mid(cell_val, iter, 1)) > 127 Then
addlog "Export contains ascii character > 127"
End If
Next iter
This code doesn't work. Anyone know how to do this? I've simply got no idea with VB or VBA.
|
[
"I believe your problem is that in VBA string indexes start at 1 and not at 0. Try the following:\nFor iter = 1 To Len(cell_val) \n If Asc(Mid(cell_val, iter, 1)) > 127 Then\n addlog \"Export contains ascii character > 127\"\n End If\nNext\n\n",
"With VBA, VB6 you can just declare a byte array and assign a string value to it and it will be converted for you. Then you can just iterate through it like a regular array.\ne.g.\nDim b() as byte\nDim iter As Long\nb = CStr(Nz(fld.value, \"\"))\n\nFor iter = 0 To UBound(b)\n if b(iter) > 127 then\n addlog \"Export contains ascii character > 127\"\n end if\nnext\n\n",
"Your example should be modfied so it does not have external dependencies, it now depends on Nz and addLog.\nAnyway, the problem here seems to be that you are looping from 0 to len()-1. In VBA this would be 1 to n.\n Dim cell_val As String\n cell_val = \"øabcdæøå~!#%&/()\"\n Dim iter As Long\n For iter = 1 To Len(cell_val)\n If Asc(Mid(cell_val, iter, 1)) > 127 Then\n 'addlog \"Export contains ascii character > 127\"\n Debug.Print iter, \"Export contains ascii character > 127\"\n End If\n Next iter\n\n",
"Did you debug it? ;) Are you sure the cell_val is not empty? Also you don't need the 'Step 1' in the For loop since it's default. Also what do you expect to acomplish with your code? It logs if any ascii values are above 127? But that's it - there is no branching depending on the result?\n",
"Try AscW()\n",
"VB/VBA strings are based from one rather than zero so you need to use:\nFor iter = 1 To Len(cell_val)\n\nI've also left off the step 1 since that's the default.\n",
"\nDid you debug it? ;) Are you sure the\n cell_val is not empty? Also you don't\n need the 'Step 1' in the For loop\n since it's default. Also what do you\n expect to acomplish with your code? It\n logs if any ascii values are above\n 127? But that's it - there is no\n branching depending on the result?\n\nI didn't debug it, I have no idea how to use vba or any of the tools that go along with it.\nYes I am sure cell_val is not empty.\nThe code was representative, I was ensuring the branch condition works before writing the branch itself.\n\nI believe your problem is that in VBA string indexes start at 1 and not at 0.\n\nAh, the exact kind of thing that goes along with vba that I was bound to miss, thank you.\n"
] |
[
12,
3,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"excel",
"for_loop",
"string",
"vba"
] |
stackoverflow_0000080427_excel_for_loop_string_vba.txt
|
Q:
Visual Studio basicHttpBinding and endpoint problems
I have a WPF application in VS 2008 with some web service references. For varying reasons (max message size, authentication methods) I need to manually define a number of settings in the WPF client's app.config for the service bindings.
Unfortunately, this means that when I update the service references in the project we end up with a mess - multiple bindings and endpoints. Visual Studio creates new bindings and endpoints with a numeric suffix (i.e. "Service1" as a duplicate of "Service"), resulting in an invalid configuration, as there may only be a single binding per service reference in a project.
This is easy to duplicate - just create a simple "Hello World" ASP.Net web service and WPF application in a solution, change the maxBufferSize and maxReceivedMessageSize in the app.config binding and then update the service reference.
At the moment we are working around this by simply undoing checkout on the app.config after updating the references but I can't help but think there must be a better way!
Also, the settings we need to manually change are:
<security mode="TransportCredentialOnly">
<transport clientCredentialType="Ntlm" />
</security>
and:
<binding maxBufferSize="655360" maxReceivedMessageSize="655360" />
We use a service factory class so if these settings are somehow able to be set programmatically that would work, although the properties don't seem to be exposed.
A:
Create a .bat file which uses svcutil, for proxy generation, with the settings that are right for your project. It's fairly easy: clicking the bat file to generate new proxy files whenever the interface has changed is easy.
The batch can then later be used in automated builds. Then you only need to set up the app.config (or web.config) once. We generally separate the different configs for different environments, such as dev, test and prod.
Example (watch out for linebreaks):
REM generate meta data
call "SVCUTIL.EXE" /t:metadata "MyProject.dll" /reference:"MyReference.dll"
REM making sure the file is writable
attrib -r "MyServiceProxy.cs"
REM create new proxy file
call "SVCUTIL.EXE" /t:code *.wsdl *.xsd /serializable /serializer:Auto /collectionType:System.Collections.Generic.List`1 /out:"MyServiceProxy.cs" /namespace:*,MY.Name.Space /reference:"MyReference.dll"
:)
//W
A:
Rather than changing the generated endpoint, you could add a second endpoint and binding definition with the configuration you need; then in your code just put the name of the new endpoint in your service client constructor.
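And since you mentioned a service factory: those particular settings can also be applied entirely in code, bypassing app.config. A minimal sketch (property names per the standard BasicHttpBinding API; the service URL and MyServiceClient proxy are placeholders):

using System.ServiceModel;

var binding = new BasicHttpBinding();
binding.MaxBufferSize = 655360;
binding.MaxReceivedMessageSize = 655360;
binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;

// Generated proxies expose a (Binding, EndpointAddress) constructor
var address = new EndpointAddress("http://server/MyService.asmx"); // placeholder URL
var client = new MyServiceClient(binding, address);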
A:
Somehow I prefer using svcutil.exe directly rather than the "Add Service Reference" feature of Visual Studio :P This is what we're doing on our WCF projects.
A:
I take your point; svcutil is definitely the more advanced way of adding and updating service references. It's just a fair bit more manual work when "right click, update reference" is so close to just working in a single step.
I guess we could create some batch files or something to just output the reference code. Even then, manually checking out and updating the service code with svcutil will probably be more work than just undoing the check out on the config.
Thanks for the advice in any case.
A:
What we do is we check out (from source control) the app.config and *.cs files that are autogenerated by the svcutil.exe utility, then we run a batch file that runs svcutil.exe to retrieve the service metadata. When it's done, we recompile the code, make sure it works, then check the updated app.config and *.cs files back in. It's a whole lot more reliable than using the oft-buggy "Add Service Reference" with Visual Studio.
|
Visual Studio basicHttpBinding and endpoint problems
|
I have a WPF application in VS 2008 with some web service references. For varying reasons (max message size, authentication methods) I need to manually define a number of settings in the WPF client's app.config for the service bindings.
Unfortunately, this means that when I update the service references in the project we end up with a mess - multiple bindings and endpoints. Visual Studio creates new bindings and endpoints with a numeric suffix (i.e. "Service1" as a duplicate of "Service"), resulting in an invalid configuration, as there may only be a single binding per service reference in a project.
This is easy to duplicate - just create a simple "Hello World" ASP.Net web service and WPF application in a solution, change the maxBufferSize and maxReceivedMessageSize in the app.config binding and then update the service reference.
At the moment we are working around this by simply undoing checkout on the app.config after updating the references but I can't help but think there must be a better way!
Also, the settings we need to manually change are:
<security mode="TransportCredentialOnly">
<transport clientCredentialType="Ntlm" />
</security>
and:
<binding maxBufferSize="655360" maxReceivedMessageSize="655360" />
We use a service factory class so if these settings are somehow able to be set programmatically that would work, although the properties don't seem to be exposed.
|
[
"Create a .Bat file which uses svcutil, for proxygeneration, that has the settings that is right for your project. It's fairly easy. Clicking on the batfile, to generate new proxyfiles whenever the interface have been changed is easy.\nThe batch can then later be used in automated builds. Then you only need to set up the app.config (or web.config) once. We generally separate the different configs for different environments, such as dev, test prod.\nExample (watch out for linebreaks):\nREM generate meta data\ncall \"SVCUTIL.EXE\" /t:metadata \"MyProject.dll\" /reference:\"MyReference.dll\"\n\nREM making sure the file is writable\nattrib -r \"MyServiceProxy.cs\"\n\nREM create new proxy file\ncall \"SVCUTIL.EXE\" /t:code *.wsdl *.xsd /serializable /serializer:Auto /collectionType:System.Collections.Generic.List`1 /out:\"MyServiceProxy.cs\" /namespace:*,MY.Name.Space /reference:\"MyReference.dll\" \n\n:)\n//W\n",
"Rather than changing the generated endpoint, uou could add a second endpoint and binding definition with the configuration you need, then in your code just put the name of the new endpoint in your service client constructor.\n",
"Somehow I prefer using svcutil.exe directly than to use the \"Add Service Reference\" feature of Visual Studio :P This is what we're doing on our WCF projects.\n",
"I take your point, svcutil is definetly the more advanced way of adding and updating service references. Its just a fair bit more manual work when \"right click, update reference\" is so close to just working in a single step.\nI guess we could create some batch files or something to just output the reference code. Even then, manually checking out and updating the service code with svcutil will probably be more work than just undoing the check out on the config.\nThanks for the advice in any case.\n",
"What we do is we check out (from source control) the app.config and *.cs files that are autogenerated by the svcutil.exe utility, then we run a batch file that runs svcutil.exe to retrieve the service metadata. When it's done, we recompile the code, make sure it works, then check the updated app.config and *.cs files back in. It's a whole lot more reliable than using the oft-buggy \"Add Service Reference\" with Visual Studio.\n"
] |
[
2,
2,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"visual_studio",
"web_services",
"wpf"
] |
stackoverflow_0000069000_asp.net_visual_studio_web_services_wpf.txt
|
Q:
Visual studio automation: Enumerate opened windows upon solution loading
How to enumerate opened code windows (i.e. those windows where you edit documents) upon solution loading using macros?
As you probably know, MSVS remembers opened documents, i.e. when you load solution, IDE will load previously opened files. What I want to do is to perform some actions with those windows upon solution loading.
I tried to access these windows in the SolutionEvents_Opened handler, but had no luck - it seems the mentioned windows are not available at the moment SolutionEvents_Opened is invoked. DTE.Documents is empty and DTE.Windows.Items doesn't contain them.
I need some code like:
Private Sub SolutionEvents_Opened() Handles SolutionEvents.Opened
Dim window As Window = DTE.Documents.Item(?).Windows // one of the opened windows
...
End Sub
A:
One way I've found to enumerate the windows is in the DocumentEvents.DocumentOpened event, but it always fires, not only during the loading of a solution. SolutionEvents.Opened does not seem to get fired at all in my experience; otherwise a static variable could be changed in it.
This might help explain it though.
|
Visual studio automation: Enumerate opened windows upon solution loading
|
How to enumerate opened code windows (i.e. those windows where you edit documents) upon solution loading using macros?
As you probably know, MSVS remembers opened documents, i.e. when you load solution, IDE will load previously opened files. What I want to do is to perform some actions with those windows upon solution loading.
I tried to access these windows in the SolutionEvents_Opened handler, but had no luck - it seems the mentioned windows are not available at the moment SolutionEvents_Opened is invoked. DTE.Documents is empty and DTE.Windows.Items doesn't contain them.
I need some code like:
Private Sub SolutionEvents_Opened() Handles SolutionEvents.Opened
Dim window As Window = DTE.Documents.Item(?).Windows // one of the opened windows
...
End Sub
|
[
"One way I've found to enumerate the window is on DocumentEvents.DocumentOpened event, but it fires it always and not only during the loading of a solution. It does not seem that the SolutionEvents.Opened gets fired at all in my experience otherwise a static variable could be changed in it.\nThis might help explain it though.\n"
] |
[
1
] |
[] |
[] |
[
"automation",
"scripting",
"vb.net",
"visual_studio"
] |
stackoverflow_0000055804_automation_scripting_vb.net_visual_studio.txt
|
Q:
Transmitfile, download with weird behaviour
I am using httpresponse.Transmitfile to download files. If I, in the file download dialog, choose to save in a different folder than the suggested one, the download rate drops to 10-20 KB. If I cancel, or always choose to download to the same folder, then the transfer rate is 200 KB or more. Here is my code:
procedure TDefault.LastNedBilde(strURL: string);
var
Outfil: FileInfo;
begin
Outfil:= FileInfo.Create(Server.MapPath(strUrl) );
response.Clear();
response.ClearContent();
response.ClearHeaders();
response.Buffer := True;
response.ContentType :='image/tiff';
response.AddHeader('Content-Disposition',
'attachment; filename=' + filename);
response.AddHeader('Content-Length', Outfil.Length.ToString());
response.Transmitfile(strUrl,0,Outfil.Length);
response.Flush();
response.&End;
end;
This is written in RadStudio 2007, Delphi for .NET. Has anybody experienced anything like this? This is not a problem in Opera or Firefox, only Internet Explorer.
A:
The server does not know where the user saves the file, so the server-code is not what is causing this.
Could it be that your browser is caching the file, and then if you save it again to the same location, it only uses the cached version and does not download from the server? Try to save the file to the same (but another) directory two times in a row, and see if the second try gets a higher download rate.
|
Transmitfile, download with weird behaviour
|
I am using httpresponse.Transmitfile to download files. If I, in the file download dialog, choose to save in a different folder than the suggested one, the download rate drops to 10-20 KB. If I cancel, or always choose to download to the same folder, then the transfer rate is 200 KB or more. Here is my code:
procedure TDefault.LastNedBilde(strURL: string);
var
Outfil: FileInfo;
begin
Outfil:= FileInfo.Create(Server.MapPath(strUrl) );
response.Clear();
response.ClearContent();
response.ClearHeaders();
response.Buffer := True;
response.ContentType :='image/tiff';
response.AddHeader('Content-Disposition',
'attachment; filename=' + filename);
response.AddHeader('Content-Length', Outfil.Length.ToString());
response.Transmitfile(strUrl,0,Outfil.Length);
response.Flush();
response.&End;
end;
This is written in RadStudio 2007, Delphi for .NET. Has anybody experienced anything like this? This is not a problem in Opera or Firefox, only Internet Explorer.
|
[
"The server does not know where the user saves the file, so the server-code is not what is causing this.\nCould it be that your browser is caching the file, and then if you save it again to the same location, it only uses the cached version and does not download from the server? Try to save the file to the same (but another) directory two times in a row, and see if the second try gets a higher download rate.\n"
] |
[
1
] |
[] |
[] |
[
"asp.net",
"delphi"
] |
stackoverflow_0000080548_asp.net_delphi.txt
|
Q:
Given two dates what is the best way of finding the number of weekdays in PHP?
The title is pretty much self-explanatory. Given two dates, what is the best way of finding the number of week days using PHP? Week days being Monday to Friday.
For instance, how would I find out that there are 10 week days in between 31/08/2008 and 13/09/2008?
A:
$datefrom = strtotime($datefrom, 0);
$dateto = strtotime($dateto, 0);
$difference = $dateto - $datefrom;
$days_difference = floor($difference / 86400);
$weeks_difference = floor($days_difference / 7); // Complete weeks
$first_day = date("w", $datefrom);
$days_remainder = floor($days_difference % 7);
$odd_days = $first_day + $days_remainder; // Do we have a Saturday or Sunday in the remainder?
if ($odd_days > 7) { // Sunday
$days_remainder--;
}
if ($odd_days > 6) { // Saturday
$days_remainder--;
}
$datediff = ($weeks_difference * 5) + $days_remainder;
From here: http://www.addedbytes.com/php/php-datediff-function/
A:
If you are creating an invoicing system, you have to think about bank holidays, Easter, etc. It is not simple to compute.
The best solution I have ever seen is to pregenerate a table of days and their types in a SQL database (one row per day = 365 rows per year) and then perform a simple count query with the proper selection (WHERE clause).
You can find this solution fully described in Joe Celko's Thinking in Sets: Auxiliary, Temporal, and Virtual Tables in SQL
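For example, with a hypothetical calendar table holding one row per day and a day_type column, the count becomes a single query:

SELECT COUNT(*)
FROM calendar
WHERE day_date BETWEEN '2008-08-31' AND '2008-09-13'
  AND day_type = 'weekday'; -- the pregenerated flag already excludes weekends and holidays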
A:
One way would be to convert the dates to unix timestamps using strtotime(...), subtracting the results and div'ing with 86400 (24*60*60):
$dif_in_seconds = abs(strtotime($a) - strtotime($b));
$daysbetween = $dif_in_seconds / 86400;
ETA: Oh.. You meant weekdays as in Mon-Fri.. Didn't see that at first..
A:
The best way is to iterate through all dates in the given date range and get the day of the week for each date. If it's a week day, increment a counter. At the end of the process you get the number of weekdays.
The PHP functions mktime() and date() (for working with UNIX timestamps) are your friends here.
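A minimal sketch of that loop (using strtotime for brevity; date('N') returns 1-7 with Monday = 1, and the fixed 86400-second step ignores DST edge cases):

<?php
$start = strtotime('2008-08-31');
$end   = strtotime('2008-09-13');

$weekdays = 0;
for ($t = $start; $t <= $end; $t += 86400) {
    if (date('N', $t) < 6) { // 1 (Mon) through 5 (Fri)
        $weekdays++;
    }
}

echo $weekdays; // 10 for the example range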
|
Given two dates what is the best way of finding the number of weekdays in PHP?
|
The title is pretty much self-explanatory. Given two dates, what is the best way of finding the number of week days using PHP? Week days being Monday to Friday.
For instance, how would I find out that there are 10 week days in between 31/08/2008 and 13/09/2008?
|
[
" $datefrom = strtotime($datefrom, 0);\n $dateto = strtotime($dateto, 0);\n\n $difference = $dateto - $datefrom;\n\n $days_difference = floor($difference / 86400);\n $weeks_difference = floor($days_difference / 7); // Complete weeks\n\n $first_day = date(\"w\", $datefrom);\n $days_remainder = floor($days_difference % 7);\n\n $odd_days = $first_day + $days_remainder; // Do we have a Saturday or Sunday in the remainder?\n if ($odd_days > 7) { // Sunday\n $days_remainder--;\n }\n if ($odd_days > 6) { // Saturday\n $days_remainder--;\n }\n\n $datediff = ($weeks_difference * 5) + $days_remainder;\n\nFrom here: http://www.addedbytes.com/php/php-datediff-function/\n",
"If you are creating an invoicing system, you have to think about the bank holidays, Easter, etc. It is not simple to compute it. \nThe best solution I have ever seen is to pregenerate a table with days and its type to SQL database (row per day = 365 rows per year) and then perform simple count query with proper selection (WHERE clause). \nYou can find this solution fully described in Joe Celko's Thinking in Sets: Auxiliary, Temporal, and Virtual Tables in SQL\n",
"One way would be to convert the dates to unix timestamps using strtotime(...), subtracting the results and div'ing with 86400 (24*60*60):\n$dif_in_seconds = abs(strtotime($a) - strtotime($b));\n$daysbetween = $dif_in_seconds / 86400;\n\nETA: Oh.. You meant weekdays as in Mon-Fri.. Didn't see that at first..\n",
"The best way is to iterate through all dates in between the given date range, and get the day of week for each date. If it's a week day, increment a certain counter. At the end of the process you get the number of weekdays.\nThe PHP functions mktime() and date() (for working with UNIX timestamps) are your friends here.\n"
] |
[
3,
2,
0,
0
] |
[] |
[] |
[
"date",
"php"
] |
stackoverflow_0000080541_date_php.txt
|
Q:
What can cause a reduction in frame rate when upgrading a graphics card?
We have a two-screen DirectX application that previously ran at a consistent 60 FPS (the monitors' sync rate) using a NVIDIA 8400GS (256MB). However, when we swapped out the card for one with 512 MB of RAM the frame rate struggles to get above 40 FPS. (It only gets this high because we're using triple-buffering.) The two cards are from the same manufacturer (PNY). All other things are equal, this is a Windows XP Embedded application and we started from a fresh image for each card. The driver version number is 169.21.
The application is all 2D. I.E. just a bunch of textured quads and a whole lot of pre-rendered graphics (hence the need to upgrade the card's memory). We also have compressed animations which the CPU decodes on the fly - this involves a texture lock. The locks take forever but I've also tried having a separate system memory texture for the CPU to update and then updating the rendered texture using the device's UpdateTexture method. No overall difference in performance.
Although I've read through every FAQ I can find on the internet about DirectX performance, this is still the first time I've worked on a DirectX project so any arcane bits of knowledge you have would be useful. :)
One other thing whilst I'm on the subject; when calling Present on the swap chains it seems DirectX waits for the present to complete regardless of the fact that I'm using D3DPRESENT_DONOTWAIT in both present parameters (PresentationInterval) and the flags of the call itself. Because this is a two-screen application this is a problem as the two monitors do not appear to be genlocked, I'm working around it by running the Present calls through a threadpool. What could the underlying cause of this be?
A:
Are the cards exactly the same (both GeForce 8400GS), and only the memory size differ? Quite often with different memory sizes come slightly different clock rates (i.e. your card with more memory might use slower memory!).
So the first thing to check would be GPU core & memory clock rates, using something like GPU-Z.
A:
It's an easy test to see if the surface lock is the problem: just comment out the texture update and see if the frame rate returns to 60 Hz. Unfortunately, writing to a locked surface and updating the resource kills performance; it always has. Are you using mipmaps with the textures? I know DX9 added automatic generation of mipmaps; it could be taking a lot of time to generate those. If you're constantly locking the same resource each frame, you could also try creating a pool of textures, kinda like triple-buffering except with textures. You would let the renderer use one texture, and on the next update you pick the next available texture in the pool that's not being used to render. Unless, of course, you're memory constrained or you're only making diffs to the animated texture.
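A minimal sketch of such a texture pool under Direct3D 9, assuming dynamic textures; the pool size, the names (kPoolSize, g_pool, NextFrameTexture), and the omitted error handling are illustrative, not from the original answer:
#include <d3d9.h>
#include <string.h>

// Illustrative pool of three dynamic textures, cycled round-robin so the
// CPU never writes to the texture the GPU may still be reading.
static const int kPoolSize = 3;
static IDirect3DTexture9* g_pool[kPoolSize] = { 0 };
static int g_next = 0;

// Create the pool once at startup (error handling omitted for brevity).
void CreatePool(IDirect3DDevice9* dev, UINT w, UINT h, D3DFORMAT fmt)
{
    for (int i = 0; i < kPoolSize; ++i)
        dev->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, fmt,
                           D3DPOOL_DEFAULT, &g_pool[i], NULL);
}

// Each frame: decode into the next texture in the pool and return it.
// D3DLOCK_DISCARD tells the driver it may hand back fresh memory instead
// of stalling until the GPU is done with the old contents.
IDirect3DTexture9* NextFrameTexture(const BYTE* src, UINT srcPitch, UINT h)
{
    IDirect3DTexture9* tex = g_pool[g_next];
    g_next = (g_next + 1) % kPoolSize;

    D3DLOCKED_RECT lr;
    if (SUCCEEDED(tex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD)))
    {
        // Copy row by row; the source and locked pitches may differ.
        for (UINT y = 0; y < h; ++y)
            memcpy((BYTE*)lr.pBits + y * lr.Pitch, src + y * srcPitch, srcPitch);
        tex->UnlockRect(0);
    }
    return tex; // bind with dev->SetTexture(0, tex) before drawing the quad
}
With three textures in flight the lock should rarely stall, at the cost of two extra copies of the texture in video memory.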
|
What can cause a reduction in frame rate when upgrading a graphics card?
|
We have a two-screen DirectX application that previously ran at a consistent 60 FPS (the monitors' sync rate) using a NVIDIA 8400GS (256MB). However, when we swapped out the card for one with 512 MB of RAM the frame rate struggles to get above 40 FPS. (It only gets this high because we're using triple-buffering.) The two cards are from the same manufacturer (PNY). All other things are equal, this is a Windows XP Embedded application and we started from a fresh image for each card. The driver version number is 169.21.
The application is all 2D. I.E. just a bunch of textured quads and a whole lot of pre-rendered graphics (hence the need to upgrade the card's memory). We also have compressed animations which the CPU decodes on the fly - this involves a texture lock. The locks take forever but I've also tried having a separate system memory texture for the CPU to update and then updating the rendered texture using the device's UpdateTexture method. No overall difference in performance.
Although I've read through every FAQ I can find on the internet about DirectX performance, this is still the first time I've worked on a DirectX project so any arcane bits of knowledge you have would be useful. :)
One other thing whilst I'm on the subject; when calling Present on the swap chains it seems DirectX waits for the present to complete regardless of the fact that I'm using D3DPRESENT_DONOTWAIT in both present parameters (PresentationInterval) and the flags of the call itself. Because this is a two-screen application this is a problem as the two monitors do not appear to be genlocked, I'm working around it by running the Present calls through a threadpool. What could the underlying cause of this be?
|
[
"Are the cards exactly the same (both GeForce 8400GS), and only the memory size differ? Quite often with different memory sizes come slightly different clock rates (i.e. your card with more memory might use slower memory!).\nSo the first thing to check would be GPU core & memory clock rates, using something like GPU-Z.\n",
"It's an easy test to see if the surface lock is the problem, just comment out the texture update and see if the framerate returns to 60hz. Unfortunately, writing to a locked surface and updating the resource kills perfomance, always has. Are you using mipmaps with the textures? I know DX9 added automatic generation of mipmaps, could be taking up a lot of time to generate those. If your constantly locking the same resource each frame, you could also try creating a pool of textures, kinda like triple-buffering except with textures. You would let the render use one texture, and on the next update you pick the next available texture in the pool that's not being used in to render. Unless of course your memory constrained or your only making diffs to the animated texture.\n"
] |
[
2,
1
] |
[] |
[] |
[
"c++",
"directx",
"hardware"
] |
stackoverflow_0000056424_c++_directx_hardware.txt
|