Q:
Using the same test suite on various implementations of a repository interface
I have been making a little toy web application in C# along the lines of Rob Connery's Asp.net MVC storefront.
I find that I have a repository interface, call it IFooRepository, with methods, say
IQueryable<Foo> GetFoo();
void PersistFoo(Foo foo);
And I have three implementations of this: ISqlFooRepository, IFileFooRepository, and IMockFooRepository.
I also have some test cases. What I would like to do, and haven't worked out how to do yet, is to run the same test cases against each of these three implementations, and have a green tick for each test pass on each interface type.
e.g.
[TestMethod]
public void GetFoo_NotNull_Test()
{
IFooRepository repository = GetRepository();
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
I want this test method to be run three times, with some variation in the environment that allows it to get three different kinds of repository. At present I have three cut-and-pasted test classes that differ only in the implementation of the private helper method IFooRepository GetRepository(). Obviously, this is smelly.
However, I cannot just remove the duplication by consolidating the cut-and-pasted methods, since they need to be present, public, and marked as test methods for the tests to run.
I am using the Microsoft testing framework, and would prefer to stay with it if I can. But a suggestion of how to do this in, say, MbUnit would also be of some interest.
A:
In MbUnit, you might be able to use the RowTest attribute to specify parameters on your test.
[RowTest]
[Row(new ThisRepository())]
[Row(new ThatRepository())]
public void GetFoo_NotNull_Test(IFooRepository repository)
{
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
A:
Create an abstract class that contains concrete versions of the tests and an abstract GetRepository method which returns IFooRepository.
Create three classes that derive from the abstract class, each of which implements GetRepository in a way that returns the appropriate IFooRepository implementation.
Add all three classes to your test suite, and you're ready to go.
To be able to selectively run the tests for some providers and not others, consider using the MbUnit '[FixtureCategory]' attribute to categorise your tests - suggested categories are 'quick' 'slow' 'db' 'important' and 'unimportant' (The last two are jokes - honest!)
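A minimal sketch of the base-class idea described above (type names are illustrative, and it assumes the test framework discovers inherited test methods - MbUnit does, as do later versions of the Microsoft framework):
public abstract class FooRepositoryTestBase
{
    // Each concrete fixture supplies the repository implementation under test.
    protected abstract IFooRepository GetRepository();

    [TestMethod]
    public void GetFoo_NotNull_Test()
    {
        IFooRepository repository = GetRepository();
        var results = repository.GetFoo();
        Assert.IsNotNull(results);
    }
}

[TestClass]
public class SqlFooRepositoryTests : FooRepositoryTestBase
{
    protected override IFooRepository GetRepository() { return new SqlFooRepository(); }
}

[TestClass]
public class FileFooRepositoryTests : FooRepositoryTestBase
{
    protected override IFooRepository GetRepository() { return new FileFooRepository(); }
}

[TestClass]
public class MockFooRepositoryTests : FooRepositoryTestBase
{
    protected override IFooRepository GetRepository() { return new MockFooRepository(); }
}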
A:
If you have your 3 copy-and-pasted test methods, you should be able to refactor (extract method) them to get rid of the duplication.
i.e. this is what I had in mind:
private IRepository GetRepository(RepositoryType repositoryType)
{
switch (repositoryType)
{
case RepositoryType.Sql:
// return a SQL repository
case RepositoryType.Mock:
// return a mock repository
// etc
}
}
private void TestGetFooNotNull(RepositoryType repositoryType)
{
IFooRepository repository = GetRepository(repositoryType);
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
[TestMethod]
public void GetFoo_NotNull_Sql()
{
this.TestGetFooNotNull(RepositoryType.Sql);
}
[TestMethod]
public void GetFoo_NotNull_File()
{
this.TestGetFooNotNull(RepositoryType.File);
}
[TestMethod]
public void GetFoo_NotNull_Mock()
{
this.TestGetFooNotNull(RepositoryType.Mock);
}
A:
[TestMethod]
public void GetFoo_NotNull_Test_ForFile()
{
GetFoo_NotNull(new FileRepository().GetRepository());
}
[TestMethod]
public void GetFoo_NotNull_Test_ForSql()
{
GetFoo_NotNull(new SqlRepository().GetRepository());
}
private void GetFoo_NotNull(IFooRepository repository)
{
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
A:
To sum up, there are three ways to go:
1) Make the tests one-liners that call down to common methods (answer by Rick, also Hallgrim)
2) Use MbUnit's RowTest feature to automate this (answer by Jon Limjap). I would also use an enum here, e.g.
[RowTest]
[Row(RepositoryType.Sql)]
[Row(RepositoryType.Mock)]
public void TestGetFooNotNull(RepositoryType repositoryType)
{
IFooRepository repository = GetRepository(repositoryType);
var results = repository.GetFoo();
Assert.IsNotNull(results);
}
3) Use a base class, answer by belugabob
I have made a sample based on this idea
public abstract class TestBase
{
protected int foo = 0;
[TestMethod]
public void TestUnderTen()
{
Assert.IsTrue(foo < 10);
}
[TestMethod]
public void TestOver2()
{
Assert.IsTrue(foo > 2);
}
}
[TestClass]
public class TestA: TestBase
{
public TestA()
{
foo = 4;
}
}
[TestClass]
public class TestB: TestBase
{
public TestB()
{
foo = 6;
}
}
This produces four passing tests in two test classes.
Upsides of 3 are:
1) Least extra code, least maintenance
2) Least typing to plug in a new repository if need be - it would be done in one place, unlike the others.
Downsides are:
1) Less flexibility to not run a test against a provider if need be
2) Harder to read.
Q:
In Struts 1.3, what's the best way for the Controller to fill the View with variables?
I've just inherited some old Struts code.
If Struts (1.3) follows the MVC pattern, how do the Action classes fill the View with variables to render in HTML?
So far, I've seen the Action classes push variables in (1) the HTTP request with
request.setAttribute("name", user.getName())
(2) in ActionForm classes, using methods specific to the application:
UserForm form = (UserForm) actionForm;
form.setUserName(user.getName());
and (3) a requestScope variable, that I see in the JSP layer (the view uses JSP), but I can't see in the Action classes.
<p style='color: red'><c:out value='${requestScope.userName}' /></p>
So, which of these is considered old-school, and what's the recommended way of pushing variables into the View in Struts?
A:
My Struts days are long over, but as far as I remember we used to place one view-specific bean (which would work as a holder for finer-grained beans or collections of beans) into the request scope within our Action.perform() implementation. This view-specific bean would then be rendered by the view.
A:
As Struts 1.3 is considered old-school, I'd recommend going with the flow and using the style that is already used throughout the application you inherited.
If all different styles are already used, pick the most used one. After that, pick your personal favourite. Mine would be 1 or 3 - the form (2) is usually best suited for data that will eventually be rendered inside some form controls. If this is the case - use the form, otherwise - don't.
Q:
How do I refresh a training database with the data from production database?
I have a particular system on our network where we need to maintain a training installation. The system uses SQL Server 2000 as its database engine, and I need to set up a system for refreshing the data in the training database with the data from the production database on a regular basis.
I want to use SSIS as we have SQL 2005 servers I can run the process from. I have a fair bit of SQL experience, but not much with SSIS. I have been trying to do this with the "Transfer Database Task" but haven't been having much luck, as it always throws an error.
If we ignore the use of configuration items etc and pretend all the database names and so forth are hard-coded, I have the following:
A Single SSIS "Transfer Database Task" with the following properties:
Destination Overwrite: True
Action: Copy
Method: DatabaseOnline
The error I receive is:
Error: The Execute method on the task returned error code 0x80131500 (ERROR : errorCode=-1073548784 description=Executing the query "EXEC dbo.sp_addrole @rolename = N'XXXXX' " failed with the following error: "The role 'XXXXX' already exists in the current database.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. helpFile= helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC}). The Execute method must succeed, and indicate the result using an "out" parameter.
I'm sure there is something obvious going on here, but surely if the task is set to overwrite, the pre-existence of the role shouldn't matter? Does anyone know what I need to do to get this working?
A:
Apparently this should be fixed in SQL Server 2005 SP2; see here. It looks like you need to make sure to patch the client machine too if you are running the SSIS package from within Visual Studio.
Q:
Tools for command line file parsing in cygwin
I have to deal with text files in a motley selection of formats. Here's an example (Columns A and B are tab delimited):
A B
a Name1=Val1, Name2=Val2, Name3=Val3
b Name1=Val4, Name3=Val5
c Name1=Val6, Name2=Val7, Name3=Val8
The files could have headers or not, have mixed delimiting schemes, have columns with name/value pairs as above etc.
I often have the ad-hoc need to extract data from such files in various ways. For example from the above data I might want the value associated with Name2 where it is present. i.e.
A B
a Val2
c Val7
What tools/techniques are there for performing such manipulations as one line commands, using the above as an example but extensible to other cases?
A:
You have all the basic bash shell commands, for example grep, cut, sed and awk at your disposal. You can also use Perl or Ruby for more complex things.
A:
I don't like sed too much, but it works for such things:
var="Name2";sed -n "1p;s/\([^ ]*\) .*$var=\([^ ,]*\).*/\1 \2/p" < filename
Gives you:
A B
a Val2
c Val7
A:
From what I've seen I'd start with Awk for this sort of thing and then if you need something more complex, I'd progress to Python.
A:
I would use sed:
# print section of file between two regular expressions (inclusive)
sed -n '/Iowa/,/Montana/p' # case sensitive
A:
Since you have cygwin, I'd go with Perl. It's the easiest to learn (check out the O'Reilly book: Learning Perl) and widely applicable.
A:
I would use Perl. Write a small module (or more than one) for dealing with the different formats. You could then run Perl one-liners using that library. An example of what it would look like follows:
perl -e 'use Parser;' -e 'parser("in.input").get("Name2");'
Don't quote me on the syntax, but that's the general idea. Abstract the task at hand to allow you to think in terms of what you need to do, not how you need to do it. Ruby would be another option, it tends to have a cleaner syntax, but either language would work.
Q:
How do you customize the RSS feeds in SharePoint
In the early days of SharePoint 2007 beta, I've come across the ability to customize the template used to emit the RSS feeds from lists. I can't find it again. Anybody know where it is?
A:
Ah, found it, based on a subtle hint from Jan Tielens. It's on the Settings page for the list, under Communications -> RSS settings.
/_layouts/listsyndication.aspx?List=<list id>
I could have sworn there was more, like an actual template file you could customize.
A:
In my search, I also came across Customize RSS for the Content Query Web Part:
"After you customize the Content Query Web Part to display the fields and content you want, you can set up the Web Part to emit a Really Simple Syndication (RSS) feed of that content."
Q:
What is the best way to remotely reset the server cache in a web farm?
Each of our production web servers maintains its own cache for separate web sites (ASP.NET Web Applications). Currently to clear a cache we log into the server and "touch" the web.config file.
Does anyone have an example of a safe/secure way to remotely reset the cache for a specific web application? Ideally we'd be able to say "clear the cache for app X running on all servers" but also "clear the cache for app X running on server Y".
Edits/Clarifications:
I should probably clarify that doing this via the application itself isn't really an option (i.e. some sort of log in to the application, surf to a specific page or handler that would clear the cache). In order to do something like this we'd need to disable/bypass logging and stats tracking code, or mess up our stats.
Yes, the cache expires regularly. What I'd like to do though is setup something so I can expire a specific cache on demand, usually after we change something in the database (we're using SQL 2000). We can do this now but only by logging in to the servers themselves.
A:
For each application, you could write a little cache-dump.aspx script to kill the cache/application data. Copy it to all your applications and write a hub script to manage the calling.
For security, you could add all sorts of authentication-lookups or IP-checking.
Here's the way I do the actual app-dumping:
Context.Application.Lock()
Context.Session.Abandon()
Context.Application.RemoveAll()
Context.Application.UnLock()
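To "manage the calling" across the farm, a small hub could simply request each application's dump page in turn. This is only a rough sketch - the server names and page URL are assumptions, and you would add whatever authentication or IP checks you need:
using System;
using System.Net;

class CacheHub
{
    static void Main()
    {
        // Assumed list of web farm servers hosting app X.
        string[] servers = { "web1", "web2", "web3" };
        using (WebClient client = new WebClient())
        {
            client.Credentials = CredentialCache.DefaultCredentials;
            foreach (string server in servers)
            {
                // Requesting the page runs the Application.RemoveAll() code above.
                string url = "http://" + server + "/appX/cache-dump.aspx";
                Console.WriteLine(url + ": " + client.DownloadString(url).Trim());
            }
        }
    }
}
Dropping a server from the array (or passing one in on the command line) covers the "app X on server Y only" case.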
A:
Found a DevX article regarding a touch utility that looks useful.
I'm going to try combining that with either a table in the database (add a record and the touch utility finds it and updates the appropriate web.config file) or a web service (make a call and the touch utility gets called to update the appropriate web.config file)
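For what it's worth, the "touch" itself is tiny to write in .NET, so the utility could be as small as this sketch (the config path is an assumption - it would come from the database record or web service call mentioned above):
using System;
using System.IO;

class Touch
{
    static void Main(string[] args)
    {
        // Bumping web.config's timestamp makes ASP.NET recycle the application,
        // which drops its cache - the same effect as touching the file by hand.
        string path = args.Length > 0 ? args[0] : @"D:\Sites\AppX\web.config";
        File.SetLastWriteTimeUtc(path, DateTime.UtcNow);
    }
}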
A:
This may not be "elegant", but you could set up a scheduled task that executes a batch script. The script would essentially "touch" the web.config (or some other file that causes a re-compile) for you.
Otherwise, is your application cache not set to expire after N minutes?
Q:
Database Design Lookup tables
I'm currently trying to improve the design of a legacy db, and I have the following situation.
Currently I have a table SalesLead in which we store the LeadSource.
Create Table SalesLead(
....
LeadSource varchar(20)
....
)
The Lead Sources are helpfully stored in a table.
Create Table LeadSource (
LeadSourceId int, /*the PK*/
LeadSource varchar(20)
)
And so I just want to create a foreign key from one to the other and drop the non-normalized column.
All standard stuff, I hope.
Here is my problem. I can't seem to get away from the issue that instead of writing
SELECT * FROM SalesLead Where LeadSource = 'foo'
Which is totally unambiguous I now have to write
SELECT * FROM SalesLead where FK_LeadSourceID = 1
or
SELECT * FROM SalesLead
INNER JOIN LeadSource ON SalesLead.FK_LeadSourceID = LeadSource.LeadSourceId
where LeadSource.LeadSource = "foo"
Which breaks if we ever alter the content of the LeadSource field.
In my application, whenever I want to alter the value of SalesLead's LeadSource, I don't want to update from 1 to 2 (for example), as I don't want to have developers having to remember these magic numbers. The ids are arbitrary and should be kept so.
How do I remove or negate the dependency on them in my app's code?
Edit Languages my solution will have to support
.NET 2.0 + 3 (for what its worth asp.net, vb.net and c#)
vba (access)
db (MSSQL 2000)
Edit 2.0 The join is fine; it's just that 'foo' may change on request to 'foobar' and I don't want to haul through the queries.
A:
If you want to de-normalize the table, simply add the LeadSource (Varchar) column to your SalesLead table, instead of using a FK or an ID.
On the other hand, if your language has support for ENUM structures, the "magic numbers" should be safely stored in an enum, so you could:
SELECT * FROM SALESLEAD WHERE LeadSource = (int) EnmLeadSource.Foo; //pseudocode
And your code will have a
public enum EnmLeadSource
{
Foo = 1,
Bar = 2
}
It is OK to remove some excessive normalization if this causes you more trouble than what it fixes. However, bear in mind that if you use a VARCHAR field (as opposed to a magic number) you must maintain consistency, and it could be hard to localize later if you need multiple languages or cultures.
The best approach after Normalization seems to be the usage of an Enum structure. It keeps the code clean and you can always pass enums across methods and functions. (I'm assuming .NET here but in other languages as well)
Update: Since you're using .NET, the DB Backend is "irrelevant" if you're constructing a query through code. Imagine this function:
public void GiveMeSalesLeadGiven( EnmLeadSource thisLeadSource )
{
// Construct your string using the value of thisLeadSource
}
In the table you'll have a LeadSource (INT) column. But the fact that it has 1,2 or N won't matter to you. If you later need to change foo to foobar, that can mean that:
1) All the "number 1" rows have to become number "2". You'll have to update the table.
2) Or you need Foo to now be number 2 and Bar number 1. You just change the Enum (but make sure that the table values remain consistent).
The Enum is a very useful structure if properly used.
Hope this helps.
A:
Have you considered just not using an artificial key for the LeadSource table? Then you get to use LeadSource as the FK in SalesLead, which simplifies your queries while retaining the benefits of using a canonical set of values (the rows in LeadSource).
A:
Did you consider an updatable view? Depending on your database server and the integrity of your database design you will be able to create a view that, when its values change, in turn it will update the constituent tables.
A:
I really don't see your problem behind the join.
Naturally, asking directly by the FK_LeadSourceID is wrong, but using the JOIN seems to be the right way to go, as it masks changing IDs perfectly well. If, for example, "foo" becomes 3 one day (and you update the foreign key field), the last query you've displayed will still work exactly the same.
If you want to make the change to the schema without altering the current queries in the application, then a view encompassing this join is the way to go.
Or if you fear that the join syntax is non-intuitive, there's always the subselect...
SELECT * FROM SalesLead where FK_LeadSourceID =
(SELECT LeadSourceID from LeadSource WHERE LeadSource = 'foo')
but remember to keep an index on LeadSource.LeadSource - at least if you have a lot of them stored in the table.
A:
If you "improve design" by introducing new relations/tables, you'll certainly have the need for different entities. If so, you'll need to deal with their semantics.
In the previous solution you were able to just update the LeadSource name to whatever you wanted in the appropriate SalesLead row. If you update the name in your new structure, you do so for all SalesLead rows.
There is no way around dealing with these different semantics. You just have to do so. In order to make the tables easier to query, you might use views as already suggested, but I'd expect them mostly for reporting purposes or backward compatibility, provided they are not updatable, because everybody updating this view would not be aware of changed semantics.
If you dislike the join try
SELECT * FROM SalesLead where LeadSourceId IN (SELECT Id FROM LeadSource WHERE LeadSource = 'foo')
A:
In a typical application the user would be presented with a list of Lead Sources (returned by querying the LeadSource table) and the subsequent SalesLead query would be dynamically created by the application based upon the user's selection.
Your application appears to have some 'well known' lead sources that you need to write specific queries for. If this is the case, then add a third (unique) field to the LeadSource table that includes an invariant 'name' that you can use as the basis of your application's queries.
This shifts the burden of magic-ness from a DB generated magic number (that may vary from installation to installation) to a system defined magic name (that is fixed by design).
A:
There's a false dichotomy here.
SELECT * FROM SalesLead
INNER JOIN LeadSource ON SalesLead.FK_LeadSourceID = LeadSource.LeadSourceId
where LeadSource.LeadSource = "foo"
doesn't break any more than the original
SELECT * FROM SalesLead Where LeadSource = 'foo'
when foo changes to foobar. Also, if you're using parameterized queries (and you really should be), you don't have to change anything when foo changes to foobar.
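As a hedged illustration of that last point (the table and column names are taken from the question; the wrapper method and connection handling are my own assumptions):
using System;
using System.Data.SqlClient;

static class SalesLeadQueries
{
    public static void PrintSalesLeads(string connectionString, string leadSourceName)
    {
        // The lead source is a parameter, so the SQL text never changes when
        // 'foo' is renamed to 'foobar' - only the value passed in does.
        const string sql =
            "SELECT sl.* FROM SalesLead sl " +
            "INNER JOIN LeadSource ls ON sl.FK_LeadSourceID = ls.LeadSourceId " +
            "WHERE ls.LeadSource = @leadSource";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@leadSource", leadSourceName);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... use the row, e.g. reader["FK_LeadSourceID"]
                }
            }
        }
    }
}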
Q:
Best server-side framework for heavy RIA based application?
What does the collective believe to be the best platform to use as a backend for AJAX / Flex / Silverlight applications, and why?
We are undergoing a technology review and I would like to know some other opinions.
Is it Java, Grails, Python, Rails, ColdFusion, or something else?
A:
There is no definitive answer. However, I would choose a light solution, like Python or Rails, over Java or ColdFusion.
You may want to investigate C# ASP.NET + Silverlight combo. Microsoft made it highly integrated, which is double-edged sword. But in many cases this helps.
You may also want to review existing solutions / applications / startups. Don't ditch PHP up front, there are many existing components for it. And don't overestimate the impact of server-side technology choice on success.
Q:
Setting Nameservers - how?
I understand how I can change the dns settings for my domains by editing my bind configs, when I run my own name-servers. I know that I can define the name-servers with my registrar via their online control panels. But I have no idea how that part works...
How does my registrar store the data about the name-servers? Is it something clever, like them having the authority to store NS records in the root name-servers?
I'm confused by this part, can anyone explain?
A:
The registrar is responsible for setting the Root DNS entry that says, "When someone asks for stackoverflow.com, tell them that the authoritative DNS is xxx.xxx.xxx.xxx". They have an interface that allows them to make changes to the records they own.
Then the requester must go to the authoritative DNS (Which is the one you specified to your registrar was your DNS) to find the IP for stackoverflow.com, any subdomain of it, email server, and other DNS records pertaining to that domain.
-Adam
A:
I've just been shown this:
# dig +trace ns stackoverflow.com
; <<>> DiG 9.2.4 <<>> +trace ns stackoverflow.com
;; global options: printcmd
. 269431 IN NS B.ROOT-SERVERS.NET.
. 269431 IN NS C.ROOT-SERVERS.NET.
. 269431 IN NS D.ROOT-SERVERS.NET.
. 269431 IN NS E.ROOT-SERVERS.NET.
. 269431 IN NS F.ROOT-SERVERS.NET.
. 269431 IN NS G.ROOT-SERVERS.NET.
. 269431 IN NS H.ROOT-SERVERS.NET.
. 269431 IN NS I.ROOT-SERVERS.NET.
. 269431 IN NS J.ROOT-SERVERS.NET.
. 269431 IN NS K.ROOT-SERVERS.NET.
. 269431 IN NS L.ROOT-SERVERS.NET.
. 269431 IN NS M.ROOT-SERVERS.NET.
. 269431 IN NS A.ROOT-SERVERS.NET.
;; Received 504 bytes from 83.138.151.80#53(83.138.151.80) in 3 ms
com. 172800 IN NS A.GTLD-SERVERS.NET.
com. 172800 IN NS B.GTLD-SERVERS.NET.
com. 172800 IN NS C.GTLD-SERVERS.NET.
com. 172800 IN NS D.GTLD-SERVERS.NET.
com. 172800 IN NS E.GTLD-SERVERS.NET.
com. 172800 IN NS F.GTLD-SERVERS.NET.
com. 172800 IN NS G.GTLD-SERVERS.NET.
com. 172800 IN NS H.GTLD-SERVERS.NET.
com. 172800 IN NS I.GTLD-SERVERS.NET.
com. 172800 IN NS J.GTLD-SERVERS.NET.
com. 172800 IN NS K.GTLD-SERVERS.NET.
com. 172800 IN NS L.GTLD-SERVERS.NET.
com. 172800 IN NS M.GTLD-SERVERS.NET.
;; Received 495 bytes from 192.228.79.201#53(B.ROOT-SERVERS.NET) in 145 ms
stackoverflow.com. 172800 IN NS ns51.domaincontrol.com.
stackoverflow.com. 172800 IN NS ns52.domaincontrol.com.
;; Received 119 bytes from 192.5.6.30#53(A.GTLD-SERVERS.NET) in 156 ms
Does this tell me that the stackoverflow.com nameservers have been stored in the .com name servers?
Or is it just that they happen to be there now?
A:
It may be helpful to understand the difference between a "registrar" and a "registry" to begin with. A registrar is a company that sells domain names (e.g. GoDaddy) to buyers. Anyone can be a registrar. You can become a registrar.
A registry is an entity (chosen by ICANN) that maintains the master database of domain names. There are several registries out there. The Internet Society (ISOC) is the registry for all .org names, Verisign is the registry for all .com and .net domain names. There are others and each country has one for their domain. All the registrars access and update the registry databases.
A registry is responsible for maintaining the top level domain (TLD) which is the ultimate DNS server. A request to resolve a domain name, if it can't be resolved by any other DNS server will filter up to the TLD. Think of it as a hierarchy like a tree where the TLD is the trunk. At that point it will be resolved into an IP address or an error will be returned.
A:
Sorry I can't help too much, but go to http://twit.tv and find the Security Now podcast - they did one a couple of weeks ago on DNS - get the first one. It has a good explanation of how it works, which may help.
The second one on that site is about how it's been "hacked" - the first one is the one about how it works.
To kinda answer it:
The "root servers" (for .com for eg) hold a record for stackoverflow.com. But they can't hold all the details, so they have an NS record (name server record) saying "if you want more info, go look over there". So your machine asks that target machine (ns1.stackoverflow.com) for www.stackoverflow.com, and gets back the A record (IP address), or MX (mail etc)
So, your domain registrar will store it in a database or whatever they choose, and when you do an update, they SOMEHOW (I don't know, but I guess it's published by NIC, but they DO have to pay to be a registrar, and be checked out etc) push that change to the (cluster of) root name servers. They would then push the changes for your domain (eg where www goes, where your mail goes etc) to their local server, which actually serves the domain info.
Hope that makes SOME sense :)
Does this tell me that the stackoverflow.com nameservers have been stored in the .com name servers?
Yes and no.
It's like calling directory assistance for everything ending in .com. You ask for stackoverflow - they tell you "if you want SO, call this number, and they can tell you how to get Jeff (www), Joel (mail), etc.".
The root server is the first directory assistance. Your registrar's name server is the one on the end of the second call (assuming you called it :) )
A:
There are some mistakes in the answers so far (and I do not yet have sufficient reputation to comment on them).
The ".com" name servers are in no way related with the root name servers. When you change the name servers of stackoverflow.com, through your registrar, the change is made in the ".com" name servers. Root name servers are unaffected.
It is not true that the registries are all chosen by ICANN. The ccTLD registries (country-code TLD like ".jp" or ".ca"), for instance, are chosen locally, by a process which depends on the country.
Not all TLD use a registry/registrar system.
Q:
How do I use the same field type in multiple lists on SharePoint?
I have a SharePoint site with multiple lists, some of which have the same fields - a choice of products or countries.
How can I build the lists in a way that I configure the choice field once and use it in multiple lists, so that in the future, if I add a value to the choice, I add it only once?
A:
If you go to Site Settings, under Galleries there is an option for Site Columns. You can create your choice list there. Then, under the Library Settings there is an option to Add From Existing Site Columns. You should be able to see and select your newly created column there.
A:
You should create a list which contains the countries. Then, in the lists where you want to reuse the countries lookup, create a column of type Lookup and select the countries list in the "Get information from" dropdown.
Here is a link to a more visual guide:
http://blog.phase2int.com/?p=101
Q:
ASP.NET MVC versus the Zeitgeist
ASP.NET MVC seems to be making a pretty big entrance. Can anyone summarize how its MVC implementation stacks up against popular MVC frameworks for other languages? (I'm thinking specifically of Rails and Zend Framework, though there are obviously lots.) Observations on learning curve, common terminology, ease of use and feelgood factor welcome.
(For the sake of a little background, I've been avoiding using ASP.NET for some time because I really hate the webforms approach, but Jeff's prolific praise on the podcast has almost convinced me to give it a go.)
A:
I'm just getting into ASP.NET MVC, so these are some early thoughts comparing it to Rails:
Mostly manages to stick with static typing, at the expense of a little extra code.
This will either give you the warm fuzzies or make you feel slightly shackled depending on how you feel about dynamic typing. For instance, you can have your views expect particular typed data (and so get compile-time checking of your views).
Better separation of bits of the framework.
So there's no prescribed data access mechanism such as ActiveRecord in Rails; you're free to choose your own. LINQ feels similar if you want something cheap, if a bit more verbose. You can use the non-WebForms parts of ASP.NET like caching and authentication.
Still playing feature catch-up.
Preview 5 brought AcceptVerbs, model updaters (similar to Ruby's hash.merge) and more ways to bind forms to models. Feels like there's still more to come before they check off most of the feature set that Rails has.
I'm still missing a little of Rails' freedom and elegance (much of which is down to Ruby, I guess), but ASP.NET MVC really does feel quite close.
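To make the static-typing point a bit more concrete, here is a rough controller sketch; the ProductsController and Product names are invented for illustration, and the exact AcceptVerbs syntax moved around a little between previews:
public class ProductsController : Controller
{
    // Restrict this action to HTTP POST requests.
    [AcceptVerbs("POST")]
    public ActionResult Create(Product product)
    {
        // The framework binds the posted form fields onto the typed Product parameter,
        // so working with product properties is checked at compile time.
        // ... persist the product here ...
        return RedirectToAction("Index");
    }
}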
A:
If you're already programming in the .NET idiom, it's pretty easy to pick up on a lot of what's going on in the MVC Framework. Rails, on the other hand, can be pretty easy to pick up (granted, at a basic level) if you've never set eyes on Ruby before you start.
It seems like you're talking about quality-as-MVC, though, and it looks to me like both frameworks (can't speak for Zend) do a very good job of separating the concerns.
|
ASP.NET MVC versus the Zeitgeist
|
ASP.NET MVC seems to be making a pretty big entrance. Can anyone summarize how its MVC implementation stacks up against popular MVC frameworks for other languages? (I'm thinking specifically of Rails and Zend Framework, though there are obviously lots.) Observations on learning curve, common terminology, ease of use and feelgood factor welcome.
(For the sake of a little background, I've been avoiding using ASP.NET for some time because I really hate the webforms approach, but Jeff's prolific praise on the podcast has almost convinced me to give it a go.)
|
[
"I'm just getting into ASP.NET MVC, so these are some early thoughts comparing it to Rails:\nMostly manages to stick with static typing, at the expense of a little extra code.\nThis will either give you the warm fuzzies or make you feel slightly shackled depending on how you feel about dynamic typing. For instance, you can have your views expect particular typed data (and so get compile-time checking of your views).\nBetter separation of bits of the framework.\nSo there's no prescribed data access mechanism such as ActiveRecord in Rails; you're free to choose your own. LINQ feels similar if you want something cheap, if a bit more verbose. You can use the non-WebForms parts of ASP.NET like caching and authentication.\nStill playing feature catch-up.\nPreview 5 brought AcceptVerbs, model updaters (similar to Ruby's hash.merge) and more ways to bind forms to models. Feels like there's still more to come before they check off most of the feature set that Rails has.\nI'm still missing a little of Rails' freedom and elegance (much of which is down to Ruby, I guess), but ASP.NET MVC really does feel quite close. \n",
"If you're already programming in the .NET idiom, it's pretty easy to pick up on a lot of what's going on in the MVC Framework. Rails, on the other hand, can be pretty easy to pick up (granted, at a basic level) if you've never set eyes on Ruby before you start.\nIt seems like you're talking about quality-as-MVC, though, and it looks to me like both frameworks (can't speak for Zend) do a very good job of separating the concerns. \n"
] |
[
7,
1
] |
[] |
[] |
[
"asp.net_mvc",
"model_view_controller",
"zend_framework"
] |
stackoverflow_0000055414_asp.net_mvc_model_view_controller_zend_framework.txt
|
Q:
AJAX Dropdown Extender Question
Ok, so I got my extender working on a default.aspx page on my website and it looks good. I basically copied and pasted the code for it into a user control control.ascx page. When I do this I completely loose the functionality (just shows the target control label and no dropdown, even upon hover). Is there any reason why it doesn't work in a custom user control inside a masterpage setup?
Edit:
Didn't quite do the trick. Any other suggestions?
Its in a master page setup, using eo web tabs (I tried it inside the tabs and outside the tabs but on the same page as the tabs, to no avail), and its in a custom user control. Think there are dependency issues?
A:
Apparently EO has compatibility issues with MS Ajax Control Toolkit. http://www.essentialobjects.com/Forum/Default.aspx?g=posts&t=1319
I guess I'll leave this question open to see if anyone figures out some sort of workaround.
A:
After a few days of looking I found a call to a modal popup extender .show() in the code behind. After commenting it out everything worked fine.
A:
Check the DocType. Here is what I have found useful
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
Place this in your user control (or the page that uses it) and all should be well. I had a similar problem with a collapsible extender and this worked for me.
Edit: Here is a link to my question for further details.
A:
I don't know if this helps, but I had the same problem with the autocomplete extender and determined that the server-side function could not be in the user control, but needed to be on the page (or in a webservice, I guess). Once I moved the function, it worked fine.
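For reference, the kind of page-level method the AutoCompleteExtender expects looks roughly like the sketch below; the method name and sample data are made up, the exact attributes needed can vary with the Toolkit version, and it assumes the usual System.Collections.Generic using in the code-behind:
[System.Web.Services.WebMethod]
[System.Web.Script.Services.ScriptMethod]
public static string[] GetCompletionList(string prefixText, int count)
{
    // Hypothetical data source; normally this would query a database or service.
    string[] items = { "apple", "apricot", "banana" };
    List<string> matches = new List<string>();
    foreach (string item in items)
    {
        if (item.StartsWith(prefixText, StringComparison.OrdinalIgnoreCase))
            matches.Add(item);
        if (matches.Count == count)
            break;
    }
    return matches.ToArray();
}
The important bits are that it is static, web-callable, and sits in the page (or a web service) rather than in the user control.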
A:
Hmm all that functionality on the loose! careful you don't lose it (sorry!)
Are you using something like Firebug (firefox plug-in) so you can see what ajax calls the page is trying to make? If it is making the call but the server is behaving oddly then you will see the error there too. IE users may be able to use the dev toolbar.
|
AJAX Dropdown Extender Question
|
Ok, so I got my extender working on a default.aspx page on my website and it looks good. I basically copied and pasted the code for it into a user control control.ascx page. When I do this I completely loose the functionality (just shows the target control label and no dropdown, even upon hover). Is there any reason why it doesn't work in a custom user control inside a masterpage setup?
Edit:
Didn't quite do the trick. Any other suggestions?
Its in a master page setup, using eo web tabs (I tried it inside the tabs and outside the tabs but on the same page as the tabs, to no avail), and its in a custom user control. Think there are dependency issues?
|
[
"Apparently EO has compatibility issues with MS Ajax Control Toolkit. http://www.essentialobjects.com/Forum/Default.aspx?g=posts&t=1319\nI guess I'll leave this question open to see if anyone figures out some sort of workaround.\n",
"After a few days of looking I found a call to a modal popup extender .show() in the code behind. After commenting it out everything worked fine.\n",
"Check the DocType. Here is what I have found useful\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.1//EN\" \"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd\" >\n\nPlace this in your user control (or the page that uses it) and all should be well. I had a similar problem with a collapsible extender and this worked for me.\nEdit: Here is a link to my question for further details.\n",
"I don't know if this helps, but I had the same problem with the autocomplete extender and determined that the server-side function could not be in the user control, but needed to be on the page (or in a webservice, I guess). Once I moved the function, it worked fine.\n",
"Hmm all that functionality on the loose! careful you don't lose it (sorry!)\nAre you using something like Firebug (firefox plug-in) so you can see what ajax calls the page is trying to make? If it is making the call but the server is behaving oddly then you will see the error there too. IE users maybe able to use dev toolbar.\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"asp.net_ajax",
"dropdownextender"
] |
stackoverflow_0000057552_asp.net_asp.net_ajax_dropdownextender.txt
|
Q:
Best Way to Unit Test a Website With Multiple User Types with PHPUnit
I'm starting to learn how to use PHPUnit to test the website I'm working on. The problem I'm running into is that I have five different user types defined and I need to be able to test every class with the different types. I currently have a user class and I would like to pass this to each function but I can't figure out how to pass this or test the different errors that could come back as being correct or not.
Edit: I should have said. I have a user class and I want to pass a different instance of this class to each unit test.
A:
If your various user classes inherit from a parent user class, then I recommend you use the same inheritance structure for your test case classes.
Consider the following sample classes:
class User
{
public function commonFunctionality()
{
return 'Something';
}
public function modifiedFunctionality()
{
return 'One Thing';
}
}
class SpecialUser extends User
{
public function specialFunctionality()
{
return 'Nothing';
}
public function modifiedFunctionality()
{
return 'Another Thing';
}
}
You could do the following with your test case classes:
class Test_User extends PHPUnit_Framework_TestCase
{
public function create()
{
return new User();
}
public function testCommonFunctionality()
{
$user = $this->create();
        $this->assertEquals('Something', $user->commonFunctionality());
}
public function testModifiedFunctionality()
{
$user = $this->create();
        $this->assertEquals('One Thing', $user->modifiedFunctionality());
}
}
class Test_SpecialUser extends Test_User
{
public function create() {
return new SpecialUser();
}
public function testSpecialFunctionality()
{
$user = $this->create();
        $this->assertEquals('Nothing', $user->specialFunctionality());
}
public function testModifiedFunctionality()
{
$user = $this->create();
        $this->assertEquals('Another Thing', $user->modifiedFunctionality());
}
}
Because each test depends on a create method which you can override, and because the test methods are inherited from the parent test class, all tests for the parent class will be run against the child class, unless you override them to change the expected behavior.
This has worked great in my limited experience.
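Another option, if you would rather keep a single test class, is PHPUnit's data provider mechanism; this sketch reuses the User and SpecialUser classes from above and assumes a PHPUnit version that supports @dataProvider:
class Test_AnyUser extends PHPUnit_Framework_TestCase
{
    public function userProvider()
    {
        // Each row is one run of the test: the user instance plus the expected value.
        return array(
            array(new User(), 'One Thing'),
            array(new SpecialUser(), 'Another Thing'),
        );
    }

    /**
     * @dataProvider userProvider
     */
    public function testModifiedFunctionality($user, $expected)
    {
        $this->assertEquals($expected, $user->modifiedFunctionality());
    }
}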
A:
If you're looking to test the actual UI, you could try using something like Selenium (www.openqa.org). It lets you write the code in PHP (which I'm assuming would work with phpUnit) to drive the browser..
Another approach would be to have a common method that could be called by each test for your different user type. ie, something like 'ValidatePage', which you could then call from TestAdminUser or TestRegularUser and have the method simply perform the same basic validation of what you're expecting..
A:
Just make sure you're not running into an anti-pattern here. Maybe you do too much work in the constructor? Or maybe these should be in fact different classes? Tests often give you clues about design of code. Listen to them.
|
Best Way to Unit Test a Website With Multiple User Types with PHPUnit
|
I'm starting to learn how to use PHPUnit to test the website I'm working on. The problem I'm running into is that I have five different user types defined and I need to be able to test every class with the different types. I currently have a user class and I would like to pass this to each function but I can't figure out how to pass this or test the different errors that could come back as being correct or not.
Edit: I should have said. I have a user class and I want to pass a different instance of this class to each unit test.
|
[
"If your various user classes inherit from a parent user class, then I recommend you use the same inheritance structure for your test case classes.\nConsider the following sample classes:\nclass User\n{\n public function commonFunctionality()\n {\n return 'Something';\n }\n\n public function modifiedFunctionality()\n {\n return 'One Thing';\n }\n}\n\nclass SpecialUser extends User\n{\n public function specialFunctionality()\n {\n return 'Nothing';\n }\n\n public function modifiedFunctionality()\n {\n return 'Another Thing';\n }\n}\n\nYou could do the following with your test case classes:\nclass Test_User extends PHPUnit_Framework_TestCase\n{\n public function create()\n {\n return new User();\n }\n\n public function testCommonFunctionality()\n {\n $user = $this->create();\n $this->assertEquals('Something', $user->commonFunctionality);\n }\n\n public function testModifiedFunctionality()\n {\n $user = $this->create();\n $this->assertEquals('One Thing', $user->commonFunctionality);\n }\n}\n\nclass Test_SpecialUser extends Test_User\n{\n public function create() {\n return new SpecialUser();\n }\n\n public function testSpecialFunctionality()\n {\n $user = $this->create();\n $this->assertEquals('Nothing', $user->commonFunctionality);\n }\n\n public function testModifiedFunctionality()\n {\n $user = $this->create();\n $this->assertEquals('Another Thing', $user->commonFunctionality);\n }\n}\n\nBecause each test depends on a create method which you can override, and because the test methods are inherited from the parent test class, all tests for the parent class will be run against the child class, unless you override them to change the expected behavior.\nThis has worked great in my limited experience.\n",
"If you're looking to test the actual UI, you could try using something like Selenium (www.openqa.org). It lets you write the code in PHP (which I'm assuming would work with phpUnit) to drive the browser..\nAnother approach would be to have a common method that could be called by each test for your different user type. ie, something like 'ValidatePage', which you could then call from TestAdminUser or TestRegularUser and have the method simply perform the same basic validation of what you're expecting..\n",
"Just make sure you're not running into an anti-pattern here. Maybe you do too much work in the constructor? Or maybe these should be in fact different classes? Tests often give you clues about design of code. Listen to them.\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"phpunit",
"types",
"unit_testing"
] |
stackoverflow_0000058969_phpunit_types_unit_testing.txt
|
Q:
Does silverlight work on chrome?
Does anyone know if silverlight plugs into chrome, or when they plan to support it?
A:
This guy has had partial success with silverlight in chrome, but it does not seem to be supported:
http://wildermuth.com/2008/09/02/Silverlight_2_and_Google_Chrome
From The Microsoft Silverlight Team in the silverlight forum:
Hello, currently we don't have plans
to support Chrome. We will support it
in the future if it gains enough
market share. Please understand, each
browser implements the plug-in model
differently, so it'll be a lot of
effort to officially support a browser
100%... By the way, IE 8 also runs
each tab in its own process. If a tab
crashes, other tabs will still work
fine.
UPDATE:
Jon Galloway has just posted instructions on how to get silverlight successfully running on Chrome here:
http://weblogs.asp.net/jgalloway/archive/2008/09/17/silverlight-on-chrome.aspx
A:
The official word on what is supported looks like this:
(image of the Silverlight browser support matrix: http://www.jesseliberty.com/sl/browsers.jpg)
The reality is that we do run on a lot of browsers, but things change mighty quickly in these here parts.
A:
For what it is worth, the Dev Branch of Google Chrome was recently updated to support Silverlight 2. I tried it and it works for me. Of course, you have to use the Dev release of Google Chrome. You can get more information about switching to Chrome Dev here.
A:
Silverlight already works with web-kit, and since Google's Chrome is based on web-kit, it shouldn't be too much effort to get it working.
Indeed, this gentleman seems to have had some success.
Based on this, I would suspect that Silverlight will be fully supported by Chrome by the time it goes gold.
|
Does silverlight work on chrome?
|
Does anyone know if silverlight plugs into chrome, or when they plan to support it?
|
[
"This guy have had partial success with silverlight in chrome, but it does not seem to be supported:\nhttp://wildermuth.com/2008/09/02/Silverlight_2_and_Google_Chrome\nFrom The Microsoft Silverlight Team in the silverlight forum:\n\nHello, currently we don't have plans\n to support Chrome. We will support it\n in the future if it gains enough\n market share. Please understand, each\n browser implements the plug-in model\n differently, so it'll be a lot of\n effort to officially support a browser\n 100%... By the way, IE 8 also runs\n each tab in its own process. If a tab\n crashes, other tabs will still work\n fine.\n\nUPDATE:\nJon Galloway has just posted instructions on how to get silverlight successfully running on Chrome here:\nhttp://weblogs.asp.net/jgalloway/archive/2008/09/17/silverlight-on-chrome.aspx\n",
"The official word on what is supported looks like this:\nalt text http://www.jesseliberty.com/sl/browsers.jpg\nThe reality is that we do run on a lot of browsers, but things change might quickly in these here parts. \n",
"For what it is worth, the Dev Branch of Google Chrome was recently updated to support Silverlight 2. I tried it and it works for me. Of course, you have to use the Dev release of Google Chrome. You can get more information about switching to Chrome Dev here.\n",
"Silverlight already works with web-kit, and since Google's Chrome is based on web-kit, it shouldn't be too much effort to get it working.\nIndeed, this gentleman seems to have had some success.\nBased on this, I would suspect that Silverlight will be fully supported by Chrome by the time it goes gold. \n"
] |
[
3,
2,
2,
1
] |
[] |
[] |
[
"google_chrome",
"silverlight"
] |
stackoverflow_0000050421_google_chrome_silverlight.txt
|
Q:
WPF ListBox WrapPanel clips long groups
I've created a ListBox to display items in groups, where the groups are wrapped right to left when they can no longer fit within the height of the ListBox's panel. So, the groups would appear similar to this in the listbox, where each group's height is arbitrary (group 1, for instance, is twice as tall as group 2):
[ 1 ][ 3 ][ 5 ]
[ ][ 4 ][ 6 ]
[ 2 ][ ]
The following XAML works correctly in that it performs the wrapping, and allows the horizontal scroll bar to appear when the items run off the right side of the ListBox.
<ListBox>
<ListBox.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel Orientation="Vertical"/>
</ItemsPanelTemplate>
</ListBox.ItemsPanel>
<ListBox.GroupStyle>
<ItemsPanelTemplate>
<WrapPanel Orientation="Vertical"
Height="{Binding Path=ActualHeight,
RelativeSource={RelativeSource
FindAncestor,
AncestorLevel=1,
AncestorType={x:Type ScrollContentPresenter}}}"/>
</ItemsPanelTemplate>
</ListBox.GroupStyle>
</ListBox>
The problem occurs when a group of items is longer than the height of the WrapPanel. Instead of allowing the vertical scroll bar to appear to view the cutoff item group, the items in that group are simply clipped. I'm assuming that this is a side effect of the Height binding in the WrapPanel - the scrollbar thinks it does not have to be enabled.
Is there any way to enable the scrollbar, or another way around this issue that I'm not seeing?
A:
By setting the Height property on the WrapPanel to the height of the ScrollContentPresenter, it will never scroll vertically. However, if you remove that Binding, it will never wrap, since in the layout pass, it has infinite height to layout in.
I would suggest creating your own panel class to get the behavior you want. Have a separate dependency property that you can bind the desired height to, so you can use that to calculate the target height in the measure and arrange steps. If any one child is taller than the desired height, use that child's height as the target height to calculate the wrapping.
Here is an example panel to do this:
public class SmartWrapPanel : WrapPanel
{
/// <summary>
/// Identifies the DesiredHeight dependency property
/// </summary>
public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register(
"DesiredHeight",
typeof(double),
typeof(SmartWrapPanel),
new FrameworkPropertyMetadata(Double.NaN,
FrameworkPropertyMetadataOptions.AffectsArrange |
FrameworkPropertyMetadataOptions.AffectsMeasure));
/// <summary>
/// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height.
/// </summary>
public double DesiredHeight
{
get { return (double)GetValue(DesiredHeightProperty); }
set { SetValue(DesiredHeightProperty, value); }
}
protected override Size MeasureOverride(Size constraint)
{
Size ret = base.MeasureOverride(constraint);
double h = ret.Height;
if (!Double.IsNaN(DesiredHeight))
{
h = DesiredHeight;
foreach (UIElement child in Children)
{
if (child.DesiredSize.Height > h)
h = child.DesiredSize.Height;
}
}
return new Size(ret.Width, h);
}
protected override System.Windows.Size ArrangeOverride(Size finalSize)
{
double h = finalSize.Height;
if (!Double.IsNaN(DesiredHeight))
{
h = DesiredHeight;
foreach (UIElement child in Children)
{
if (child.DesiredSize.Height > h)
h = child.DesiredSize.Height;
}
}
return base.ArrangeOverride(new Size(finalSize.Width, h));
}
}
A:
Here is the slightly modified code - all credit given to Abe Heidebrecht, who previously posted it - that allows both horizontal and vertical scrolling. The only change is that the return value of MeasureOverride needs to be base.MeasureOverride(new Size(ret.width, h)).
// Original code : Abe Heidebrecht
public class SmartWrapPanel : WrapPanel
{
/// <summary>
/// Identifies the DesiredHeight dependency property
/// </summary>
public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register(
"DesiredHeight",
typeof(double),
typeof(SmartWrapPanel),
new FrameworkPropertyMetadata(Double.NaN,
FrameworkPropertyMetadataOptions.AffectsArrange |
FrameworkPropertyMetadataOptions.AffectsMeasure));
/// <summary>
/// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height.
/// </summary>
public double DesiredHeight
{
get { return (double)GetValue(DesiredHeightProperty); }
set { SetValue(DesiredHeightProperty, value); }
}
protected override Size MeasureOverride(Size constraint)
{
Size ret = base.MeasureOverride(constraint);
double h = ret.Height;
if (!Double.IsNaN(DesiredHeight))
{
h = DesiredHeight;
foreach (UIElement child in Children)
{
if (child.DesiredSize.Height > h)
h = child.DesiredSize.Height;
}
}
return base.MeasureOverride(new Size(ret.Width, h));
}
protected override System.Windows.Size ArrangeOverride(Size finalSize)
{
double h = finalSize.Height;
if (!Double.IsNaN(DesiredHeight))
{
h = DesiredHeight;
foreach (UIElement child in Children)
{
if (child.DesiredSize.Height > h)
h = child.DesiredSize.Height;
}
}
return base.ArrangeOverride(new Size(finalSize.Width, h));
}
}
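For completeness, using the panel means swapping it into the original GroupStyle markup and binding DesiredHeight instead of Height; the local: prefix is assumed to map to whatever namespace SmartWrapPanel lives in:
<ItemsPanelTemplate>
    <local:SmartWrapPanel Orientation="Vertical"
        DesiredHeight="{Binding Path=ActualHeight,
                        RelativeSource={RelativeSource
                        FindAncestor,
                        AncestorLevel=1,
                        AncestorType={x:Type ScrollContentPresenter}}}"/>
</ItemsPanelTemplate>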
A:
I would think that you are correct that it has to do with the binding. What happens when you remove the binding? With the binding are you trying to fill up at least the entire height of the list box? If so, consider binding to MinHeight instead, or try using the VerticalAlignment property.
A:
Thanks for answering, David.
When the binding is removed, no wrapping occurs. The WrapPanel puts every group into a single vertical column.
The binding is meant to force the WrapPanel to actually wrap. If no binding is set, the WrapPanel assumes the height is infinite and never wraps.
Binding to MinHeight results in an empty listbox. I can see how the VerticalAlignment property could seem to be a solution, but alignment itself prevents any wrapping from occurring. When binding and alignment are used together, the alignment has no effect on the problem.
|
WPF ListBox WrapPanel clips long groups
|
I've created a ListBox to display items in groups, where the groups are wrapped right to left when they can no longer fit within the height of the ListBox's panel. So, the groups would appear similar to this in the listbox, where each group's height is arbitrary (group 1, for instance, is twice as tall as group 2):
[ 1 ][ 3 ][ 5 ]
[ ][ 4 ][ 6 ]
[ 2 ][ ]
The following XAML works correctly in that it performs the wrapping, and allows the horizontal scroll bar to appear when the items run off the right side of the ListBox.
<ListBox>
<ListBox.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel Orientation="Vertical"/>
</ItemsPanelTemplate>
</ListBox.ItemsPanel>
<ListBox.GroupStyle>
<ItemsPanelTemplate>
<WrapPanel Orientation="Vertical"
Height="{Binding Path=ActualHeight,
RelativeSource={RelativeSource
FindAncestor,
AncestorLevel=1,
AncestorType={x:Type ScrollContentPresenter}}}"/>
</ItemsPanelTemplate>
</ListBox.GroupStyle>
</ListBox>
The problem occurs when a group of items is longer than the height of the WrapPanel. Instead of allowing the vertical scroll bar to appear to view the cutoff item group, the items in that group are simply clipped. I'm assuming that this is a side effect of the Height binding in the WrapPanel - the scrollbar thinks it does not have to be enabled.
Is there any way to enable the scrollbar, or another way around this issue that I'm not seeing?
|
[
"By setting the Height property on the WrapPanel to the height of the ScrollContentPresenter, it will never scroll vertically. However, if you remove that Binding, it will never wrap, since in the layout pass, it has infinite height to layout in. \nI would suggest creating your own panel class to get the behavior you want. Have a separate dependency property that you can bind the desired height to, so you can use that to calculate the target height in the measure and arrange steps. If any one child is taller than the desired height, use that child's height as the target height to calculate the wrapping.\nHere is an example panel to do this:\npublic class SmartWrapPanel : WrapPanel\n{\n /// <summary>\n /// Identifies the DesiredHeight dependency property\n /// </summary>\n public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register(\n \"DesiredHeight\",\n typeof(double),\n typeof(SmartWrapPanel),\n new FrameworkPropertyMetadata(Double.NaN, \n FrameworkPropertyMetadataOptions.AffectsArrange |\n FrameworkPropertyMetadataOptions.AffectsMeasure));\n\n /// <summary>\n /// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height.\n /// </summary>\n public double DesiredHeight\n {\n get { return (double)GetValue(DesiredHeightProperty); }\n set { SetValue(DesiredHeightProperty, value); }\n }\n\n protected override Size MeasureOverride(Size constraint)\n {\n Size ret = base.MeasureOverride(constraint);\n double h = ret.Height;\n\n if (!Double.IsNaN(DesiredHeight))\n {\n h = DesiredHeight;\n foreach (UIElement child in Children)\n {\n if (child.DesiredSize.Height > h)\n h = child.DesiredSize.Height;\n }\n }\n\n return new Size(ret.Width, h);\n }\n\n protected override System.Windows.Size ArrangeOverride(Size finalSize)\n {\n double h = finalSize.Height;\n\n if (!Double.IsNaN(DesiredHeight))\n {\n h = DesiredHeight;\n foreach (UIElement child in Children)\n {\n if (child.DesiredSize.Height > h)\n h = child.DesiredSize.Height;\n }\n }\n\n return base.ArrangeOverride(new Size(finalSize.Width, h));\n }\n}\n\n",
"Here is the slightly modified code - all credit given to Abe Heidebrecht, who previously posted it - that allows both horizontal and vertical scrolling. The only change is that the return value of MeasureOverride needs to be base.MeasureOverride(new Size(ret.width, h)).\n// Original code : Abe Heidebrecht\npublic class SmartWrapPanel : WrapPanel\n{\n /// <summary>\n /// Identifies the DesiredHeight dependency property\n /// </summary>\n public static readonly DependencyProperty DesiredHeightProperty = DependencyProperty.Register(\n \"DesiredHeight\",\n typeof(double),\n typeof(SmartWrapPanel),\n new FrameworkPropertyMetadata(Double.NaN, \n FrameworkPropertyMetadataOptions.AffectsArrange |\n FrameworkPropertyMetadataOptions.AffectsMeasure));\n\n /// <summary>\n /// Gets or sets the height to attempt to be. If any child is taller than this, will use the child's height.\n /// </summary>\n public double DesiredHeight\n {\n get { return (double)GetValue(DesiredHeightProperty); }\n set { SetValue(DesiredHeightProperty, value); }\n }\n\n protected override Size MeasureOverride(Size constraint)\n {\n Size ret = base.MeasureOverride(constraint);\n double h = ret.Height;\n\n if (!Double.IsNaN(DesiredHeight))\n {\n h = DesiredHeight;\n foreach (UIElement child in Children)\n {\n if (child.DesiredSize.Height > h)\n h = child.DesiredSize.Height;\n }\n }\n\n return base.MeasureOverride(new Size(ret.Width, h));\n }\n\n protected override System.Windows.Size ArrangeOverride(Size finalSize)\n {\n double h = finalSize.Height;\n\n if (!Double.IsNaN(DesiredHeight))\n {\n h = DesiredHeight;\n foreach (UIElement child in Children)\n {\n if (child.DesiredSize.Height > h)\n h = child.DesiredSize.Height;\n }\n }\n\n return base.ArrangeOverride(new Size(finalSize.Width, h));\n }\n}\n\n",
"I would think that you are correct that it has to do with the binding. What happens when you remove the binding? With the binding are you trying to fill up at least the entire height of the list box? If so, consider binding to MinHeight instead, or try using the VerticalAlignment property.\n",
"Thanks for answering, David.\nWhen the binding is removed, no wrapping occurs. The WrapPanel puts every group into a single vertical column.\nThe binding is meant to force the WrapPanel to actually wrap. If no binding is set, the WrapPanel assumes the height is infinite and never wraps.\nBinding to MinHeight results in an empty listbox. I can see how the VerticalAlignment property could seem to be a solution, but alignment itself prevents any wrapping from occurring. When binding and alignment are used together, the alignment has no effect on the problem.\n"
] |
[
2,
2,
0,
0
] |
[] |
[] |
[
"grouping",
"listbox",
"scroll",
"wpf",
"wrappanel"
] |
stackoverflow_0000074188_grouping_listbox_scroll_wpf_wrappanel.txt
|
Q:
How can I merge my files when the folder structure has changed using Borland StarTeam?
I'm in the process of refactoring some code which includes moving folders around, and I would like to regularly merge to keep things current. What is the best way to merge after I've moved folders around in my working copy?
A:
You can move the files around in StarTeam also. Then merge after that.
Whatever you do, make sure you don't delete the files and re-add in StarTeam. You'll lose the file history if you do that.
A:
Moving the files in StarTeam and then updating your project/solution is the cleaner way to go. I would also suggest creating a view label prior to doing anything so you have a definite "roll back" point if things go wrong :)
A:
Folders in StarTeam can be renamed to match the filesystem moves by right clicking the folder and going to Properties. If you created new nesting levels, you will have to create those folders normally. If you moved files between existing folders, you can move those in StarTeam by dragging them from the file window on the right to the new folder on the left. Files can be renamed to match a new name in StarTeam the same way folders are, right click the file and select Properties.
As a fellow StarTeam user, my condolences go out to you.
A:
In an ideal world, you could branch the view and merge back when you are happy with your revisions to avoid breaking the build. However, as you are using StarTeam, I would suggest making small incremental changes to the folder structure and accept that you will probably have a few breakages along the way. It will likely be less time consuming and more intuitive than trying to use the view-merge interface.
A:
The problem is I'm worried about breaking the build in the meantime while I'm moving folders in StarTeam. I suppose the only way to avoid that is to be ready to upload updated project files as soon as I move things around in StarTeam and do it as quickly as possible.
|
How can I merge my files when the folder structure has changed using Borland StarTeam?
|
I'm in the process of refactoring some code which includes moving folders around, and I would like to regularly merge to keep things current. What is the best way to merge after I've moved folders around in my working copy?
|
[
"You can move the files around in StarTeam also. Then merge after that. \nWhatever you do, make sure you don't delete the files and re-add in StarTeam. You'll lose the file history if you do that.\n",
"Moving the files in StarTeam and then updating your project/solution is the cleaner way to go. I would also suggest creating a view label prior to doing anything so you have a definite \"roll back\" point if things go wrong :)\n",
"Folders in StarTeam can be renamed to match the filesystem moves by right clicking the folder and going to Properties. If you created new nesting levels, you will have to create those folders normally. If you moved files between existing folders, you can move those in StarTeam by dragging them from the file window on the right to the new folder on the left. Files can be renamed to match a new name in StarTeam the same way folders are, right click the file and select Properties.\nAs a fellow StarTeam user, my condolences go out to you.\n",
"In an ideal world, you could branch the view and merge back when you are happy with your revisions to avoid breaking the build. However, as you are using StarTeam, I would suggest making small incremental changes to the folder structure and accept that you will probably have a few breakages along the way. It will likely be less time consuming and more intuitive than trying to use the view-merge interface.\n",
"The problem is I'm worried about breaking the build in the meantime while I'm moving folders in StarTeam. I suppose the only way to avoid that is to be ready to upload updated project files as soon as I move things around in StarTeam and do it as quickly as possible.\n"
] |
[
3,
3,
1,
1,
0
] |
[] |
[] |
[
"merge",
"refactoring",
"starteam",
"version_control"
] |
stackoverflow_0000028578_merge_refactoring_starteam_version_control.txt
|
Q:
Hosting a website on your own server
Is there a detailed guide which explains how to host a website on your own server on Linux?
I have currently hosted it on one of the commercial web-hosts.
Also the domain is registered to a different vendor.
Thanks
A:
This guide is probably more info than you really requested, but webserver information is in there. It's Gentoo-specific, but you can apply the same information with minor translations to any other distro.
A:
I think it depends on how familiar you are with linux. Certainly, many people do this for hobbyist websites.
There are many aspects involved - you should begin with something simple like getting apache running and visible to the outside world.
A:
I would look into installing apache
99% of linux distributions will have a package for it.
On ubuntu you can run:
sudo apt-get install apache2
Are you considering hosting a web page locally for the internet? Or is this just for development etc..
If it's for an internet server, you will need a stable internet connection with a good upstream.
You may also need a static IP address so you can setup DNS to point to the right place.
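If it helps, a minimal name-based virtual host looks something like this; the domain and paths are placeholders, and the config file location differs between distributions (the Debian/Ubuntu layout is assumed here):
# /etc/apache2/sites-available/example
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
    ErrorLog /var/log/apache2/example-error.log
</VirtualHost>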
A:
While I don't have an url to a good tutorial in english, I would just warn you that this is not something you should take lightly. Administrating a server involves getting your hands dirty in linux stuff and dealing with security can be pretty complex depending on your knowledge and requirements.
So if you know nothing about it, you should be very careful and if the website you host has is of any commercial importance you are probably better off hiring a server admin.
A:
Just to point out; if this is a personal (home) server, as opposed to one in a corporate environment, then it's better not to bother hosting it - you won't necessarily have the bandwidth, and your ISP may not allow it.
As mentioned above, you will also need a static IP address, and you'll need to set up DNS records to point to the correct location, which your domain vendor may or may not help you with.
|
Hosting a website on your own server
|
Is there a detailed guide which explains how to host a website on your own server on Linux?
I have currently hosted it on one of the commercial web-hosts.
Also the domain is registered to a different vendor.
Thanks
|
[
"This guide is probably more info than you really requested, but webserver information is in there. It's Gentoo-specific, but you can apply the same information with minor translations to any other distro.\n",
"I think it depends on how familiar you are with linux. Certainly, many people do this for hobbyist websites.\nThere are many aspects involved - you should begin with something simple like getting apache running and visible to the outside world.\n",
"I would look into installing apache\n99% of linux distributions will have a package for it.\nOn ubuntu you can run:\nsudo apt-get install apache2\n\nAre you considering hosting a web page locally for the internet? Or is this just for development etc.. \nIf it's for an internet server, you will need a stable internet connection with a good upstream. \nYou may also need a static IP address so you can setup DNS to point to the right place.\n",
"While I don't have an url to a good tutorial in english, I would just warn you that this is not something you should take lightly. Administrating a server involves getting your hands dirty in linux stuff and dealing with security can be pretty complex depending on your knowledge and requirements. \nSo if you know nothing about it, you should be very careful and if the website you host has is of any commercial importance you are probably better off hiring a server admin.\n",
"Just to point out; if this is a personal (home) server, as opposed to one in a corporate environment, then it's better not to bother hosting it - you won't necessarily have the bandwidth, and your ISP may not allow it.\nAs mentioned above, you will also need a static IP address, and you'll need to set up DNS records to point to the correct location, which your domain vendor may or may not help you with.\n"
] |
[
6,
1,
1,
1,
1
] |
[] |
[] |
[
"dns",
"linux",
"web_hosting"
] |
stackoverflow_0000082431_dns_linux_web_hosting.txt
|
Q:
Whats the best windows tool for merging RSS Feeds?
It seems like such a simple thing, but I can't find any obvious solutions...
I want to be able to take two or three feeds, and then merge them into a single RSS feed, to be published internally on our network.
Is there a simple tool out there that will do this? Free or commercial..
update: Should have mentioned, looking for a windows application that will run as a scheduled service on a server.
A:
Maybe http://www.planetplanet.org/
will do what you want.
It's for creating blog aggregations like planet lisp.
A:
Google reader, create a group, add your feeds into the folder and then share that as an RSS feed.
:-)
Works while you're asleep!
A:
There are a whole pile of options here: http://allrss.com/rssremixers.html.
A:
Yahoo Pipes could be nice. Depends on how much "private" you want the resulting feed to be.
For 100% offline solution investigate Atomisator. It's a Python framework basically for doing offline what Yahoo Pipes does online.
A:
If you're using PHP, the SimplePie library will do this. Here's a tutorial.
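The merge itself is only a few lines thanks to SimplePie's multifeed support; the feed URLs below are placeholders and the include path depends on where you drop the library:
require_once 'simplepie.inc';

$feed = new SimplePie();
// Passing an array of URLs makes SimplePie merge the items into one combined feed.
$feed->set_feed_url(array(
    'http://example.com/feed-one.rss',
    'http://example.com/feed-two.rss',
));
$feed->init();

foreach ($feed->get_items() as $item) {
    echo $item->get_title(), "\n";
}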
|
Whats the best windows tool for merging RSS Feeds?
|
It seems like such a simple thing, but I can't find any obvious solutions...
I want to be able to take two or three feeds, and then merge them into a single RSS feed, to be published internally on our network.
Is there a simple tool out there that will do this? Free or commercial..
update: Should have mentioned, looking for a windows application that will run as a scheduled service on a server.
|
[
"Maybe http://www.planetplanet.org/\nwill do what you want.\nIt's for creating blog aggregations like planet lisp.\n",
"Google reader, create a group, add your feeds into the folder and then share that as an RSS feed.\n:-)\nWorks while you're asleep!\n",
"There are a whole pile of options here: http://allrss.com/rssremixers.html.\n",
"Yahoo Pipes could be nice. Depends on how much \"private\" you want the resulting feed to be.\nFor 100% offline solution investigate Atomisator. It's a Python framework basically for doing offline what Yahoo Pipes does online.\n",
"If you're using PHP, the SimplePie library will do this. Here's a tutorial.\n"
] |
[
1,
1,
1,
1,
0
] |
[] |
[] |
[
"merge",
"rss"
] |
stackoverflow_0000082782_merge_rss.txt
|
Q:
What are the rules for naming AS3 classes?
I'm trying to write a RegEx for a code generator (in C#) to determine a proper class or package name of an AS3 class.
I know that class names
must start with a letter (capital or otherwise)
any other character can be alphanumeric
cannot have spaces
Is there anything else?
A:
Although you can start class names with lower case letters and include underscores and dollar signs, the "naming convention" is to start the class name and each separate word with a capital letter (e.g. UsefulThing), and not include underscores. When I see classes like useful_thing, it looks wrong, because it's not the naming convention. Maybe your question should have said what are the valid names for an AS3 class?
Other than that I think you + maclema have it.
A:
The conventions for Class and Package naming as far as I've heard:
The package structure should use the "flipped domain" naming, with lowercase folders and CamelCased class names, i.e.:
import com.yourdomain.nameofsubfolder.YourSpecialClass;
This is reflected in all of the packages shipped with Flash and Flex. Examples:
import flash.events.MouseEvent;
import flash.display.MovieClip;
There is also a convention of naming Interfaces after the functionality they add or impose as in: Styleable, Drawable, Movable etc... Many (including Adobe) also prefer to use an upper case "I" to mark interfaces clearly as such, i.e.:
IEventDispatcher
IExternalizable
IFocusManager
which are all internal interfaces in the flash.* packages.
A:
Here are some more valid classes.
Actionscript 3 classes (and packages) must start with a letter, "_", or "$". They may also contain (but not start with) a number.
public class $Test {}
public class _Test {}
public class test {}
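Putting those rules together, a C# pattern for the generator could look something like the sketch below; it only checks a bare class name against the rules above and ignores reserved words and fully-qualified package paths:
using System;
using System.Text.RegularExpressions;

class As3NameCheck
{
    // Starts with a letter, "_" or "$"; the rest may also contain digits.
    static readonly Regex ClassName = new Regex(@"^[A-Za-z_$][A-Za-z0-9_$]*$");

    static void Main()
    {
        Console.WriteLine(ClassName.IsMatch("MyClass3")); // True
        Console.WriteLine(ClassName.IsMatch("3MyClass")); // False
    }
}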
|
What are the rules for naming AS3 classes?
|
I'm trying to write a RegEx for a code generator (in C#) to determine a proper class or package name of an AS3 class.
I know that class names
must start with a letter (capital or otherwise)
any other character can be alphanumeric
cannot have spaces
Is there anything else?
|
[
"Although you can start class names with lower case letters and include underscores and dollar signs, the \"naming convention\" is to start the class name and each separate word with a capital letter (e.g. UsefulThing), and not include underscores. When I see classes like useful_thing, it looks wrong, because it's not the naming convention. Maybe your question should have said what are the valid names for an AS3 class?\nOther than that I think you + maclema have it.\n",
"The conventions for Class and Package naming as far as I've heard:\nThe package structure should use the \"flipped domain\" naming, with lowercase folders and CamelCased class names, i.e.: \nimport com.yourdomain.nameofsubfolder.YourSpecialClass;\n\nThis is reflected in all of the packages shipped with Flash and Flex. Examples:\nimport flash.events.MouseEvent;\nimport flash.display.MovieClip;\n\nThere is also a convention of naming Interfaces after the functionality they add or impose as in: Styleable, Drawable, Movable etc... Many (including Adobe) also prefer to use an upper case \"I\" to mark interfaces clearly as such, i.e.: \nIEventDispatcher\nIExternalizable \nIFocusManager\n\nwhich are all internal interfaces in the flash.* packages.\n",
"Here are some more valid classes. \nActionscript 3 classes (and packages) must start with a letter, \"_\", or \"$\". They may also contain (but not start with) a number.\npublic class $Test {}\npublic class _Test {}\npublic class test {}\n\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"actionscript_3",
"convention",
"naming"
] |
stackoverflow_0000021698_actionscript_3_convention_naming.txt
|
Q:
How to Implement Database Independence with Entity Framework
I have used the Entity Framework to start a fairly simple sample project. In the project, I have created a new Entity Data Model from a SQL Server 2000 database. I am able to query the data using LINQ to Entities and display values on the screen.
I have an Oracle database with an extremely similar schema (I am trying to be exact but I do not know all the details of Oracle). I would like my project to be able to run on both the SQL Server and Oracle data stores with minimal effort. I was hoping that I could simply change the configuration string of my Entity Data Model and the Entity Framework would take care of the rest. However, it appears that it will not work as seamlessly as I thought.
Has anyone done what I am trying to do? Again, I am trying to write an application that can query (and update) data from a SQL Server or Oracle database with minimal effort using the Entity Framework. The secondary goal is to not have to re-compile the application when switching back and forth between data stores. If I have to "Update Model from Database" that might be ok because I wouldn't have to recompile, but I'd prefer not to have to go this route. Does anyone know of any steps that might be necessary?
A:
What is generally understood under the term "Persistence Ignorance" is that your entity classes are not being flooded with framework dependencies (important for N-tier scenarios). This is not the case right now, as entity classes must implement certain EF interfaces ("IPOCO"), as opposed to plain old CLR objects. As another poster has mentioned, there is a solution called Persistence Ignorance (POCO) Adapter for Entity Framework V1 for that, and EF V2 will support POCO out of the box.
But I think what you really had in mind was database independence. With one big configuration XML that includes storage model, conceptual model and the mapping between those two from which a typed ObjectContext will be generated at designtime, I also find it hard to imagine how to transparently support two databases.
What probably looks more promising is applying a database-independent ADO.NET provider like the one from DataDirect. DataDirect has also announced EF support for Q3/2008.
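To illustrate why a pure connection-string swap is not enough in EF v1: the EntityClient connection string bundles the model metadata together with the store provider, so the provider (and the storage model generated against it) is baked in. The names below are placeholders, not taken from the question:
<!-- hypothetical app.config fragment; model and database names are illustrative -->
<connectionStrings>
  <add name="MyEntities"
       providerName="System.Data.EntityClient"
       connectionString="metadata=res://*/MyModel.csdl|res://*/MyModel.ssdl|res://*/MyModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDb;Integrated Security=True&quot;" />
</connectionStrings>
Switching to Oracle would mean changing the provider= part to an Oracle ADO.NET provider and regenerating (or maintaining a second) storage model for it.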
A:
http://blogs.msdn.com/jkowalski/archive/2008/09/09/persistence-ignorance-poco-adapter-for-entity-framework-v1.aspx
The main problem is that the entity framework was not designed with persistence ignorance in mind. I would honestly look at using something other than entity framework.
|
How to Implement Database Independence with Entity Framework
|
I have used the Entity Framework to start a fairly simple sample project. In the project, I have created a new Entity Data Model from a SQL Server 2000 database. I am able to query the data using LINQ to Entities and display values on the screen.
I have an Oracle database with an extremely similar schema (I am trying to be exact but I do not know all the details of Oracle). I would like my project to be able to run on both the SQL Server and Oracle data stores with minimal effort. I was hoping that I could simply change the configuration string of my Entity Data Model and the Entity Framework would take care of the rest. However, it appears that it will not work as seamlessly as I thought.
Has anyone done what I am trying to do? Again, I am trying to write an application that can query (and update) data from a SQL Server or Oracle database with minimal effort using the Entity Framework. The secondary goal is to not have to re-compile the application when switching back and forth between data stores. If I have to "Update Model from Database" that might be ok because I wouldn't have to recompile, but I'd prefer not to have to go this route. Does anyone know of any steps that might be necessary?
|
[
"What is generally understood under the term \"Persistence Ignorance\" is that your entity classes are not being flooded with framework dependencies (important for N-tier scenarios). This is not the case right now, as entity classes must implement certain EF interfaces (\"IPOCO\"), as opposed to plain old CLR objects. As another poster has mentioned, there is a solution called Persistence Ignorance (POCO) Adapter for Entity Framework V1 for that, and EF V2 will support POCO out of the box.\nBut I think what you really had in mind was database independence. With one big configuration XML that includes storage model, conceptual model and the mapping between those two from which a typed ObjectContext will be generated at designtime, I also find it hard to image how to transparently support two databases.\nWhat probably looks more promising is applying a database-independent ADO.NET provider like the one from DataDirect. DataDirect has also announced EF support for Q3/2008.\n",
"http://blogs.msdn.com/jkowalski/archive/2008/09/09/persistence-ignorance-poco-adapter-for-entity-framework-v1.aspx\nThe main problem is that the entity framework was not designed with persistence ignorance in mind. I would honestly look at using something other than entity framework.\n"
] |
[
2,
1
] |
[] |
[] |
[
"entity_framework",
"linq_to_entities",
"oracle"
] |
stackoverflow_0000075959_entity_framework_linq_to_entities_oracle.txt
|
Q:
Information about how many files to compile before build in Visual Studio
How can I figure out how many files need to be recompiled before I start the build process?
Sometimes I don't remember how many basic header files I changed, so a Rebuild All would be better than a simple build. There seems to be no option for this, but IMHO it must be possible (e.g. Xcode gives me this information).
Update:
My problem is not that Visual Studio doesn't know what to compile. I need to know how much it will compile so that I can decide if I can make a quick test with my new code or if I should write more code till I start the "expensive" build process. Or if my boss asks "When can I have the new build?" the best answer is not "It is done when it is done!".
It's really helpful when the IDE can say "compile 200 of 589 files" instead of "compile x,y, ..."
A:
Could your version control tell you this? For example in Subversion "Check for modifications" will list everything changed since your last checkin (although not since your last build)
Mind you, doesn't "build" automatically do exactly that? (build only what's changed)?
A:
Usually Visual Studio is good at knowing what needs to be compiled for you.
If you have multiple projects in a solution then just make sure your solution dependencies are set up correctly and it should just work when you hit Build.
|
Information about how many files to compile before build in Visual Studio
|
How can I figure out how many files need to be recompiled before I start the build process?
Sometimes I don't remember how many basic header files I changed, so a Rebuild All would be better than a simple build. There seems to be no option for this, but IMHO it must be possible (e.g. Xcode gives me this information).
Update:
My problem is not that Visual Studio doesn't know what to compile. I need to know how much it will compile so that I can decide if I can make a quick test with my new code or if I should write more code till I start the "expensive" build process. Or if my boss asks "When can I have the new build?" the best answer is not "It is done when it is done!".
It's really helpful when the IDE can say "compile 200 of 589 files" instead of "compile x,y, ..."
|
[
"Could your version control tell you this? For example in Subversion \"Check for modifications\" will list everything changed since your last checkin (although not since your last build)\nMind you, doesn't \"build\" automatically do exactly that? (build only what's changed)?\n",
"Usually Visual Studio is good at knowing what needs to be compiled for you.\nIf you have multiple projects in a solution then just make sure your solution dependencies are set up correctly and it should just work when you hit Build.\n"
] |
[
1,
1
] |
[] |
[] |
[
"visual_studio_2005"
] |
stackoverflow_0000082612_visual_studio_2005.txt
|
Q:
Classes in Python
In Python is there any way to make a class, then make a second version of that class with identical data, but which can be changed, then reverted to be the same as the data in the original class?
So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class, then with one function reset them to be the same as in the first class.
The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
A:
A class is a template: it allows you to create a blueprint, and you can then have multiple instances of the class, each with different numbers, like so.
class dog(object):
    def __init__(self, height, width, length):
self.height = height
self.width = width
self.length = length
def revert(self):
self.height = 1
self.width = 2
self.length = 3
dog1 = dog(5, 6, 7)
dog2 = dog(2, 3, 4)
dog1.revert()
A:
Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.
class MyReset:
def __init__(self, x, y):
self.x = x
self.y = y
self.save()
def save(self):
self.saved = self.__dict__.copy()
def reset(self):
self.__dict__ = self.saved.copy()
a = MyReset(20, 30)
a.x = 50
print a.x
a.reset()
print a.x
Why do you want to do this? It might not be the best/only way.
A:
Classes don't have values. Objects do. Is what you want basically a class that can reset an instance (object) to a set of default values?
How about just providing a reset method, that resets the properties of your object to whatever is the default?
I think you should simplify your question, or tell us what you really want to do. It's not at all clear.
A:
I think you are confused. You should re-check the meaning of "class" and "instance".
I think you are trying to first declare an instance of a certain class, and then declare an instance of another class, use the data from the first one, and then find a way to convert the data in the second instance and use it on the first instance...
I recommend that you use operator overloading to assign the data.
A:
class ABC(object):
    numbers = [0,1,2,3]
class DEF(ABC):
    def __init__(self):
        # copy the class-level defaults so they can be changed independently
        self.new_numbers = list(ABC.numbers)
    def setnums(self, numbers):
        self.new_numbers = numbers
    def getnums(self):
        return self.new_numbers
    def reset(self):
        self.__init__()
A:
Just FYI, here's an alternate implementation... Probably violates about 15 million pythonic rules, but I publish it per information/observation:
class Resettable(object):
base_dict = {}
def reset(self):
self.__dict__ = self.__class__.base_dict
def __init__(self):
self.__dict__ = self.__class__.base_dict.copy()
class SomeClass(Resettable):
base_dict = {
'number_one': 1,
'number_two': 2,
'number_three': 3,
'number_four': 4,
'number_five': 5,
}
def __init__(self):
Resettable.__init__(self)
p = SomeClass()
p.number_one = 100
print p.number_one
p.reset()
print p.number_one
|
Classes in Python
|
In Python is there any way to make a class, then make a second version of that class with identical data, but which can be changed, then reverted to be the same as the data in the original class?
So I would make a class with the numbers 1 to 5 as the data in it, then make a second class with the same names for sections (or very similar). Mess around with the numbers in the second class, then with one function reset them to be the same as in the first class.
The only alternative I've found is to make one aggravatingly long class with too many separate pieces of data in it to be readily usable.
|
[
"A class is a template, it allows you to create a blueprint, you can then have multiple instances of a class each with different numbers, like so.\nclass dog(object):\n def __init__(self, height, width, lenght):\n self.height = height\n self.width = width\n self.length = length\n\n def revert(self):\n self.height = 1\n self.width = 2\n self.length = 3\n\ndog1 = dog(5, 6, 7)\ndog2 = dog(2, 3, 4)\n\ndog1.revert()\n\n",
"Here's another answer kind of like pobk's; it uses the instance's dict to do the work of saving/resetting variables, but doesn't require you to specify the names of them in your code. You can call save() at any time to save the state of the instance and reset() to reset to that state.\nclass MyReset:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n self.save()\n\n def save(self):\n self.saved = self.__dict__.copy()\n\n def reset(self):\n self.__dict__ = self.saved.copy()\n\na = MyReset(20, 30)\na.x = 50\nprint a.x\na.reset()\nprint a.x\n\nWhy do you want to do this? It might not be the best/only way.\n",
"Classes don't have values. Objects do. Is what you want basically a class that can reset an instance (object) to a set of default values? \nHow about just providing a reset method, that resets the properties of your object to whatever is the default?\nI think you should simplify your question, or tell us what you really want to do. It's not at all clear.\n",
"I think you are confused. You should re-check the meaning of \"class\" and \"instance\".\nI think you are trying to first declare a Instance of a certain Class, and then declare a instance of other Class, use the data from the first one, and then find a way to convert the data in the second instance and use it on the first instance...\nI recommend that you use operator overloading to assign the data.\n",
"class ABC(self):\n numbers = [0,1,2,3]\n\nclass DEF(ABC):\n def __init__(self):\n self.new_numbers = super(ABC,self).numbers\n\n def setnums(self, numbers):\n self.new_numbers = numbers\n\n def getnums(self):\n return self.new_numbers\n\n def reset(self):\n __init__()\n\n",
"Just FYI, here's an alternate implementation... Probably violates about 15 million pythonic rules, but I publish it per information/observation:\nclass Resettable(object):\n base_dict = {}\n def reset(self):\n self.__dict__ = self.__class__.base_dict\n\n def __init__(self):\n self.__dict__ = self.__class__.base_dict.copy()\n\nclass SomeClass(Resettable):\n base_dict = {\n 'number_one': 1,\n 'number_two': 2,\n 'number_three': 3,\n 'number_four': 4,\n 'number_five': 5,\n }\n def __init__(self):\n Resettable.__init__(self)\n\n\np = SomeClass()\np.number_one = 100\nprint p.number_one\np.reset()\nprint p.number_one\n\n"
] |
[
5,
2,
1,
1,
1,
1
] |
[] |
[] |
[
"class",
"python"
] |
stackoverflow_0000064141_class_python.txt
|
Q:
How do I alter a TEXT column on a database table in SQL server?
In a SQL server database, I have a table which contains a TEXT field which is set to allow NULLs. I need to change this to not allow NULLs. I can do this no problem via Enterprise Manager, but when I try to run the following script, alter table dbo.[EventLog] Alter column [Message] text Not null, I get an error:
Cannot alter column 'ErrorMessage' because it is 'text'.
Reading SQL Books Online does indeed reveal you are not allowed to do an ALTER COLUMN on TEXT fields. I really need to be able to do this via a script though, and not manually in Enterprise Manager. What are the options for doing this in script then?
A:
You can use Enterprise Manager to create your script. Right click on the table in EM and select Design. Uncheck the Allow Nulls column for the Text field. Instead of hitting the regular save icon (the floppy), click an icon that looks like a golden scroll with a tiny floppy or just do Table Designer > Generate Change Script from the menu. Save the script to a file so you can reuse it. Here is a sample script:
/* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
GO
CREATE TABLE dbo.Tmp_TestTable
(
tableKey int NOT NULL,
Description varchar(50) NOT NULL,
TextData text NOT NULL
) ON [PRIMARY]
TEXTIMAGE_ON [PRIMARY]
GO
IF EXISTS(SELECT * FROM dbo.TestTable)
EXEC('INSERT INTO dbo.Tmp_TestTable (tableKey, Description, TextData)
SELECT tableKey, Description, TextData FROM dbo.TestTable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.TestTable
GO
EXECUTE sp_rename N'dbo.Tmp_TestTable', N'TestTable', 'OBJECT'
GO
ALTER TABLE dbo.TestTable ADD CONSTRAINT
PK_TestTable PRIMARY KEY CLUSTERED
(
tableKey
) ON [PRIMARY]
GO
COMMIT
A:
Create a new field. Copy the data across. Drop the old field. Rename the new field.
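For illustration, a rough, untested sketch of that approach against the table from the question (Message_new and the constraint name are placeholders; try it on a copy of the data first):
ALTER TABLE dbo.EventLog ADD Message_new text NOT NULL
    CONSTRAINT DF_EventLog_Message_new DEFAULT ('')
GO
UPDATE dbo.EventLog SET Message_new = [Message] WHERE [Message] IS NOT NULL
GO
ALTER TABLE dbo.EventLog DROP CONSTRAINT DF_EventLog_Message_new
ALTER TABLE dbo.EventLog DROP COLUMN [Message]
GO
EXEC sp_rename 'dbo.EventLog.Message_new', 'Message', 'COLUMN'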
A:
I think getting rid of the null values is the easiest.
(as raz0rf1sh has said)
CREATE TABLE tmp1( col1 INT identity ( 1, 1 ), col2 TEXT )
GO
INSERT
INTO tmp1
SELECT NULL
GO 10
SELECT *
FROM tmp1
UPDATE tmp1
SET col2 = ''
WHERE col2 IS NULL
ALTER TABLE tmp1
ALTER COLUMN col2 TEXT NOT NULL
SELECT *
FROM tmp1
DROP TABLE tmp1
A:
Off the top of my head, I'd say you need to create a new table with the same structure as the existing table but with your text column set to not null and then run a query to move the records from one to the other.
I realize that's sort of a pseudocode answer but I think that's really the only option you've got.
If others with a better grip on the exact TSQL syntax care to supplement this answer, feel free.
A:
I would update all the columns with NULL values and set it to an empty string, for example, ''. Then you should be able to run your ALTER TABLE script with no problems. A lot less work than creating a new column.
A:
Try to generate the change script from within Enterprise Manager to see how it is done there
|
How do I alter a TEXT column on a database table in SQL server?
|
In a SQL server database, I have a table which contains a TEXT field which is set to allow NULLs. I need to change this to not allow NULLs. I can do this no problem via Enterprise Manager, but when I try to run the following script, alter table dbo.[EventLog] Alter column [Message] text Not null, I get an error:
Cannot alter column 'ErrorMessage' because it is 'text'.
Reading SQL Books Online does indeed reveal you are not allowed to do an ALTER COLUMN on TEXT fields. I really need to be able to do this via a script though, and not manually in Enterprise Manager. What are the options for doing this in script then?
|
[
"You can use Enterprise Manager to create your script. Right click on the table in EM and select Design. Uncheck the Allow Nulls column for the Text field. Instead of hitting the regular save icon (the floppy), click an icon that looks like a golden scroll with a tiny floppy or just do Table Designer > Generate Change Script from the menu. Save the script to a file so you can reuse it. Here is a sample script:\n /* To prevent any potential data loss issues, you should review this script in detail before running it outside the context of the database designer.*/\nBEGIN TRANSACTION\nSET QUOTED_IDENTIFIER ON\nSET ARITHABORT ON\nSET NUMERIC_ROUNDABORT OFF\nSET CONCAT_NULL_YIELDS_NULL ON\nSET ANSI_NULLS ON\nSET ANSI_PADDING ON\nSET ANSI_WARNINGS ON\nCOMMIT\nBEGIN TRANSACTION\nGO\nCREATE TABLE dbo.Tmp_TestTable\n (\n tableKey int NOT NULL,\n Description varchar(50) NOT NULL,\n TextData text NOT NULL\n ) ON [PRIMARY]\n TEXTIMAGE_ON [PRIMARY]\nGO\nIF EXISTS(SELECT * FROM dbo.TestTable)\n EXEC('INSERT INTO dbo.Tmp_TestTable (tableKey, Description, TextData)\n SELECT tableKey, Description, TextData FROM dbo.TestTable WITH (HOLDLOCK TABLOCKX)')\nGO\nDROP TABLE dbo.TestTable\nGO\nEXECUTE sp_rename N'dbo.Tmp_TestTable', N'TestTable', 'OBJECT' \nGO\nALTER TABLE dbo.TestTable ADD CONSTRAINT\n PK_TestTable PRIMARY KEY CLUSTERED \n (\n tableKey\n ) ON [PRIMARY]\n\nGO\nCOMMIT\n\n",
"Create a new field. Copy the data across. Drop the old field. Rename the new field.\n",
"I think getting rid of the null values is the easist.\n(as raz0rf1sh has said)\nCREATE TABLE tmp1( col1 INT identity ( 1, 1 ), col2 TEXT ) \nGO \n\nINSERT \nINTO tmp1 \nSELECT NULL \n\nGO 10 \n\nSELECT * \nFROM tmp1 \n\nUPDATE tmp1 \nSET col2 = '' \nWHERE col2 IS NULL \n\nALTER TABLE tmp1 \nALTER COLUMN col2 TEXT NOT NULL \n\nSELECT *\nFROM tmp1 \n\nDROP TABLE tmp1 \n\n",
"Off the top of my head, I'd say you need to create a new table with the same structure as the existing table but with your text column set to not null and then run a query to move the records from one to the other. \nI realize that's sort of a pseudocode answer but I think that's really the only option you've got. \nIf others with a better grip on the exact TSQL syntax care to supplement this answer, feel free. \n",
"I would update all the columns with NULL values and set it to an empty string, for example, ''. Then you should be able to run your ALTER TABLE script with no problems. A lot less work than creating a new column.\n",
"Try to generate the change script from within Enterprise Manager to see how it is done there\n"
] |
[
4,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"sql_server",
"tsql"
] |
stackoverflow_0000082721_sql_server_tsql.txt
|
Q:
CausesValidation is set to "False" but the client side validation is still firing
I have several RequiredFieldValidators in an ASP.NET 1.1 web application that are firing on the client side when I press the Cancel button, which has the CausesValidation attribute set to "False". How can I get this to stop?
I do not believe that Validation Groups are supported in 1.1.
Here's a code sample:
<asp:TextBox id="UsernameTextBox" runat="server"></asp:TextBox>
<br />
<asp:RequiredFieldValidator ID="UsernameTextBoxRequiredfieldvalidator" ControlToValidate="UsernameTextBox"
runat="server" ErrorMessage="This field is required."></asp:RequiredFieldValidator>
<asp:RegularExpressionValidator ID="UsernameTextBoxRegExValidator" runat="server" ControlToValidate="UsernameTextBox"
Display="Dynamic" ErrorMessage="Please specify a valid username (6 to 32 alphanumeric characters)."
ValidationExpression="[0-9,a-z,A-Z, ]{6,32}"></asp:RegularExpressionValidator>
<asp:Button CssClass="btn" id="addUserButton" runat="server" Text="Add User"></asp:Button>
<asp:Button CssClass="btn" id="cancelButton" runat="server" Text="Cancel" CausesValidation="False"></asp:Button>
Update: There was some dynamic page generating going on in the code behind that must have been messing it up, because when I cleaned that up it started working.
A:
Validation Groups were not added to ASP.NET until version 2.0. This is a 1.1 question.
Double check your setting and make sure you are not overwriting it in the code behind.
A:
Are they in separate validation groups (the button and validator controls)?
You're not manually calling the JS to do the client validation are you?
|
CausesValidation is set to "False" but the client side validation is still firing
|
I have several RequiredFieldValidators in an ASP.NET 1.1 web application that are firing on the client side when I press the Cancel button, which has the CausesValidation attribute set to "False". How can I get this to stop?
I do not believe that Validation Groups are supported in 1.1.
Here's a code sample:
<asp:TextBox id="UsernameTextBox" runat="server"></asp:TextBox>
<br />
<asp:RequiredFieldValidator ID="UsernameTextBoxRequiredfieldvalidator" ControlToValidate="UsernameTextBox"
runat="server" ErrorMessage="This field is required."></asp:RequiredFieldValidator>
<asp:RegularExpressionValidator ID="UsernameTextBoxRegExValidator" runat="server" ControlToValidate="UsernameTextBox"
Display="Dynamic" ErrorMessage="Please specify a valid username (6 to 32 alphanumeric characters)."
ValidationExpression="[0-9,a-z,A-Z, ]{6,32}"></asp:RegularExpressionValidator>
<asp:Button CssClass="btn" id="addUserButton" runat="server" Text="Add User"></asp:Button>
<asp:Button CssClass="btn" id="cancelButton" runat="server" Text="Cancel" CausesValidation="False"></asp:Button>
Update: There was some dynamic page generating going on in the code behind that must have been messing it up, because when I cleaned that up it started working.
|
[
"Validation Groups were not added to ASP.NET until version 2.0. This is a 1.1 question. \nDouble check your setting and make sure you are not overwriting it in the code behind. \n",
"Are they in separate validation groups (the button and validator controls)?\nYou're not manually calling the JS to do the client validation are you?\n"
] |
[
1,
0
] |
[] |
[] |
[
"asp.net",
"c#",
"javascript",
"validation"
] |
stackoverflow_0000082908_asp.net_c#_javascript_validation.txt
|
Q:
Overriding "find" in ActiveRecord the DRY way
I have a few models that need to have custom find conditions placed on them. For example, if I have a Contact model, every time Contact.find is called, I want to restrict the contacts returned that only belong to the Account in use.
I found this via Google (which I've customized a little):
def self.find(*args)
with_scope(:find => { :conditions => "account_id = #{$account.id}" }) do
super(*args)
end
end
This works great, except for a few occasions where account_id is ambiguous so I adapted it to:
def self.find(*args)
with_scope(:find => { :conditions => "#{self.to_s.downcase.pluralize}.account_id = #{$account.id}" }) do
super(*args)
end
end
This also works great, however, I want it to be DRY. Now I have a few different models that I want this kind of function to be used. What is the best way to do this?
When you answer, please include the code to help our minds grasp the metaprogramming Ruby-fu.
(I'm using Rails v2.1)
A:
You don't tell us which version of rails you are using [edit - it is on rails 2.1 thus following advice is fully operational], but I would recommend you use the following form instead of overloading find yourself:
account.contacts.find(...)
this will automatically wrap the find in a scope where the user clause is included (since you have the account_id I assume you have the account somewhere close)
I suggest you check the following resources on scopes
http://ryandaigle.com/articles/2008/3/24/what-s-new-in-edge-rails-has-finder-functionality
(this is not edge anymore :) )
http://ryandaigle.com/articles/2008/8/20/named-scope-it-s-not-just-for-conditions-ya-know
A:
Jean's advice is sound. Assuming your models look like this:
class Contact < ActiveRecord::Base
belongs_to :account
end
class Account < ActiveRecord::Base
has_many :contacts
end
You should be using the contacts association of the current account to ensure that you're only getting Contact records scoped to that account, like so:
@account.contacts
If you would like to add further conditions to your contacts query, you can specify them using find:
@account.contacts.find(:conditions => { :activated => true })
And if you find yourself constantly querying for activated users, you can refactor it into a named scope:
class Contact < ActiveRecord::Base
belongs_to :account
named_scope :activated, :conditions => { :activated => true }
end
Which you would then use like this:
@account.contacts.activated
A:
to give a specific answer to your problem, I'd suggest moving the above mentioned method into a module to be included by the models in question; so you'd have
class Contact
include NarrowFind
...
end
PS. watch out for sql escaping of the account_id, you should probably use the :conditions=>[".... =?", $account_id] syntax.
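To make that concrete, a rough, untested sketch of what such a NarrowFind module might look like, reusing the find override from the question but with table_name and a bound parameter instead of string interpolation:
module NarrowFind
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def find(*args)
      with_scope(:find => { :conditions => ["#{table_name}.account_id = ?", $account.id] }) do
        super
      end
    end
  end
end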
|
Overriding "find" in ActiveRecord the DRY way
|
I have a few models that need to have custom find conditions placed on them. For example, if I have a Contact model, every time Contact.find is called, I want to restrict the contacts returned that only belong to the Account in use.
I found this via Google (which I've customized a little):
def self.find(*args)
with_scope(:find => { :conditions => "account_id = #{$account.id}" }) do
super(*args)
end
end
This works great, except for a few occasions where account_id is ambiguous so I adapted it to:
def self.find(*args)
with_scope(:find => { :conditions => "#{self.to_s.downcase.pluralize}.account_id = #{$account.id}" }) do
super(*args)
end
end
This also works great, however, I want it to be DRY. Now I have a few different models that I want this kind of function to be used. What is the best way to do this?
When you answer, please include the code to help our minds grasp the metaprogramming Ruby-fu.
(I'm using Rails v2.1)
|
[
"You don't tell us which version of rails you are using [edit - it is on rails 2.1 thus following advice is fully operational], but I would recommand you use the following form instead of overloading find yourself : \naccount.contacts.find(...) \n\nthis will automatically wrap the find in a scope where the user clause is included (since you have the account_id I assume you have the account somewhere close)\nI suggest you check the following resources on scopes\n\nhttp://ryandaigle.com/articles/2008/3/24/what-s-new-in-edge-rails-has-finder-functionality\n(this is not edge anymore :) )\nhttp://ryandaigle.com/articles/2008/8/20/named-scope-it-s-not-just-for-conditions-ya-know\n\n",
"Jean's advice is sound. Assuming your models look like this:\nclass Contact < ActiveRecord::Base\n belongs_to :account\nend\n\nclass Account < ActiveRecord::Base\n has_many :contacts\nend\n\nYou should be using the contacts association of the current account to ensure that you're only getting Contact records scoped to that account, like so:\[email protected]\n\nIf you would like to add further conditions to your contacts query, you can specify them using find:\[email protected](:conditions => { :activated => true })\n\nAnd if you find yourself constantly querying for activated users, you can refactor it into a named scope:\nclass Contact < ActiveRecord::Base\n belongs_to :account\n named_scope :activated, :conditions => { :activated => true }\nend\n\nWhich you would then use like this:\[email protected]\n\n",
"to give a specific answer to your problem, I'd suggest moving the above mentioned method into a module to be included by the models in question; so you'd have \nclass Contact\n include NarrowFind\n ...\nend\n\nPS. watch out for sql escaping of the account_id, you should probably use the :conditions=>[\".... =?\", $account_id] syntax.\n"
] |
[
8,
5,
0
] |
[] |
[] |
[
"activerecord",
"metaprogramming",
"overriding",
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000080424_activerecord_metaprogramming_overriding_ruby_ruby_on_rails.txt
|
Q:
Java JFormattedTextField for typing dates
I've been having trouble making a JFormattedTextField use dates with the format dd/MM/yyyy. Specifically, as the user types, the cursor should "jump" the slashes, and get directly to the next number position.
Also, the JFormattedTextField must verify if the date entered is valid, and reject it somehow if the date is invalid, or "correct it" to a valid date, such as if the user input "13" as month, set it as "01" and add +1 to the year.
I tried using a mask ("##/##/####") with the validate() method of JFormattedTextField to check if the date is valid, but it appears that those two don't work well together (or I'm too green on Java to know how... :), and then the user can type anything on the field.
Any help is really appreciated! Thanks!
A:
try using JCalendar
A:
You may have to use a regular JTextField and call setDocument() with a custom document. I recommend extending PlainDocument, this makes it easy to validate input as the document changes, and add slashes as appropriate.
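For what it's worth, a rough, untested sketch of that idea; it only handles digits and the automatic slashes, so checking that the finished value is a real date (e.g. with a non-lenient SimpleDateFormat) is still up to you:
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.PlainDocument;

public class DateMaskDocument extends PlainDocument {
    @Override
    public void insertString(int offs, String str, AttributeSet a) throws BadLocationException {
        if (str == null) {
            return;
        }
        // for simplicity this always appends at the end, ignoring offs
        for (char c : str.toCharArray()) {
            if (!Character.isDigit(c)) {
                continue;                        // ignore anything that is not a digit
            }
            int len = getLength();
            if (len >= 10) {
                return;                          // dd/MM/yyyy is exactly 10 characters
            }
            if (len == 2 || len == 5) {
                super.insertString(len, "/", a); // jump over the slash positions
                len++;
            }
            super.insertString(len, String.valueOf(c), a);
        }
    }
}
Usage would be something like: dateField.setDocument(new DateMaskDocument());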
|
Java JFormattedTextField for typing dates
|
I've been having trouble making a JFormattedTextField use dates with the format dd/MM/yyyy. Specifically, as the user types, the cursor should "jump" the slashes, and get directly to the next number position.
Also, the JFormattedTextField must verify if the date entered is valid, and reject it somehow if the date is invalid, or "correct it" to a valid date, such as if the user input "13" as month, set it as "01" and add +1 to the year.
I tried using a mask ("##/##/####") with the validate() method of JFormattedTextField to check if the date is valid, but it appears that those two don't work well together (or I'm too green on Java to know how... :), and then the user can type anything on the field.
Any help is really appreciated! Thanks!
|
[
"try using JCalendar\n",
"You may have to use a regular JTextField and call setDocument() with a custom document. I recommend extending PlainDocument, this makes it easy to validate input as the document changes, and add slashes as appropriate. \n"
] |
[
1,
0
] |
[] |
[] |
[
"date",
"java",
"mask",
"validation"
] |
stackoverflow_0000079662_date_java_mask_validation.txt
|
Q:
What is the best way of speccing plugins with RSpec?
I'm creating a plugin, and am looking to use RSpec so I can build it using BDD.
Is there a recommended method of doing this?
A:
OK, I think I have a solution:
Generate the plugin via script/generate plugin
change the Rakefile, and add
require 'spec/rake/spectask'
desc 'Test the PLUGIN_NAME plugin.'
Spec::Rake::SpecTask.new(:spec) do |t|
t.libs << 'lib'
t.verbose = true
end
Create a spec directory, and begin adding specs in *_spec.rb files, as normal
You can also modify the default task to run spec instead of test, too.
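For example, assuming the generated Rakefile has the usual default task pointing at :test, you could change that line to:
task :default => :spec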
A:
For an example of an existing plugin that uses rspec, check out the restful_authentication plugin. Maybe it will help.
|
What is the best way of speccing plugins with RSpec?
|
I'm creating a plugin, and am looking to use RSpec so I can build it using BDD.
Is there a recommended method of doing this?
|
[
"OK, I think I have a solution:\n\nGenerate the plugin via script/generate plugin\nchange the Rakefile, and add\n\n\n require 'spec/rake/spectask'\ndesc 'Test the PLUGIN_NAME plugin.'\nSpec::Rake::SpecTask.new(:spec) do |t|\n t.libs << 'lib'\n t.verbose = true\nend\n\n\n\nCreate a spec directory, and begin adding specs in *_spec.rb files, as normal\n\nYou can also modify the default task to run spec instead of test, too.\n",
"For an example of an existing plugin that uses rspec, check out the restful_authentication plugin. Maybe it will help.\n"
] |
[
1,
0
] |
[] |
[] |
[
"plugins",
"rspec",
"ruby_on_rails"
] |
stackoverflow_0000082191_plugins_rspec_ruby_on_rails.txt
|
Q:
Migrating an Existing Application to accept Unicode
We are in the process of upgrading our application to full Unicode compatibility as we have recently got Delphi 2009, which provides this out of the box. I am looking for anyone who has experience of upgrading an application to accept Unicode characters. Specifically answers to any of the following questions.
We need to change VarChars to NVarchar, Char to NChar. Are there any gotchas here.
We need to update all sql statements to include N in front of any sql strings. So Update tbl_Customer set Name = 'Smith' must become Update tbl_Customer set Name = N'Smith' . Is there any way to default to this for certain Fields. It seems extraordinary this is still required.
Is it possible to get any defaults set up in SQLServer that will make this simpler?
ps We also need to upgrade our Oracle code
A:
Oracle doesn't require you to use nvarchar to store Unicode strings—the server can be configured to store varchar2 in UTF-8. If you only supported ASCII before, it should be transparent. That should prevent the need for all the application-side search-and-replace for ' to N'.
As for Damien's point: it might not help you now, but you should really make it a priority to get rid of non-parameterized queries. They are nothing but a drag on your system from a maintenance, performance, and safety standpoint.
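As a rough illustration of the parameterised approach (using a TADOQuery here purely as an example; adapt to whatever data-access components you actually use), the N'' prefix question largely goes away because the value is sent as a typed parameter rather than embedded in the SQL text:
Query.SQL.Text := 'UPDATE tbl_Customer SET Name = :Name WHERE CustomerID = :ID';
Query.Parameters.ParamByName('Name').Value := 'Smith';  { Unicode string value }
Query.Parameters.ParamByName('ID').Value := 42;         { CustomerID column is an assumption }
Query.ExecSQL;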
A:
Obvious with SQL Server is that the limits for nchar/nvarchar are half of their char/varchar counterparts (unless you migrate everything > 4000 to nvarchar(max))
A:
Damien
I'm not sure how useful your answer is. We have a large compiled codebase of 700,000 lines, written over the last ten years, which contains a large number of SQL queries. Most are standardised down to a few functions which are the basis for most of the updates on the database. These can be updated quite simply. However, we also need to check every WHERE clause for CustomerName = '%s', which should now be CustomerName = N'%s'.
This is a real question which needs a real answer.
|
Migrating an Existing Application to accept Unicode
|
We are in the process of upgrading our application to full Unicode compatibility as we have recently got Delphi 2009, which provides this out of the box. I am looking for anyone who has experience of upgrading an application to accept Unicode characters. Specifically answers to any of the following questions.
We need to change VarChars to NVarchar, Char to NChar. Are there any gotchas here.
We need to update all sql statements to include N in front of any sql strings. So Update tbl_Customer set Name = 'Smith' must become Update tbl_Customer set Name = N'Smith' . Is there any way to default to this for certain Fields. It seems extraordinary this is still required.
Is it possible to get any defaults set up in SQLServer that will make this simpler?
ps We also need to upgrade our Oracle code
|
[
"Oracle doesn't require you to use nvarchar to store Unicode strings—the server can be configured to store varchar2 in UTF-8. If you only supported ASCII before, it should be transparent. That should prevent the need for all the application-side search-and-replace for ' to N'.\nAs for Damien's point: it might not help you now, but you should really make it a priority to get rid of non-parameterized queries. They are nothing but a drag on your system from a maintenance, performance, and safety standpoint.\n",
"Obvious with SQL Server is that the limits for nchar/nvarchar are half of their char/varchar counterparts (unless you migrate everything > 4000 to nvarchar(max))\n",
"Damien \nI'm not sure how useful your answer is. We have a large 700,000 lines of compiled codebase that was written over the last ten years which contains a large number of sql queries. Most are standardised down to a few functions which are the basis for most of the updates on the database. These can be updated quite simply. However we also need to check every where clause for CustomerName = '%s' which should now be CustomerName = N'%s'\nThis is a real question which needs a real answer.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"delphi",
"migration",
"oracle",
"sql_server",
"unicode"
] |
stackoverflow_0000081587_delphi_migration_oracle_sql_server_unicode.txt
|
Q:
Granting access to hundreds of SPs?
In Sql Server 2000/2005, I have a few NT user groups that need to be granted access to hundreds of stored procedures.
Is there a nice easy way to do that?
A:
Create a role in SQL Server.
Write a script that grants that role permission to use those sprocs (see the sketch below).
Add those NT user groups to that role.
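A rough sketch of those three steps on SQL Server 2000 (role, procedure and group names are placeholders, and the NT group is assumed to already have a server login):
EXEC sp_addrole 'db_sproc_exec'
GRANT EXECUTE ON dbo.usp_SomeProcedure TO db_sproc_exec  -- repeat, or generate, for each proc (see the script in the next answer)
EXEC sp_grantdbaccess 'DOMAIN\HelpdeskUsers'
EXEC sp_addrolemember 'db_sproc_exec', 'DOMAIN\HelpdeskUsers'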
A:
Here's a script that I use for granting permissions to lots of procedures:
DECLARE @DB sysname ; set @DB = DB_NAME()
DECLARE @U sysname ; set @U = QUOTENAME('UserID')
DECLARE @ID integer,
@LAST_ID integer,
@NAME varchar(1000),
@SQL varchar(4000)
SET @LAST_ID = 0
WHILE @LAST_ID IS NOT NULL
BEGIN
SELECT @ID = MIN(id)
FROM dbo.sysobjects
WHERE id > @LAST_ID AND type = 'P' AND category = 0
SET @LAST_ID = @ID
-- We have a record so go get the name
IF @ID IS NOT NULL
BEGIN
SELECT @NAME = name
FROM dbo.sysobjects
WHERE id = @ID
-- Build the DCL to do the GRANT
SET @SQL = 'GRANT EXECUTE ON ' + @NAME + ' TO ' + @U
-- Run the SQL Statement you just generated
EXEC master.dbo.xp_execresultset @SQL, @DB
END
END
You can modify the select to get to a more specific group of stored procs.
|
Granting access to hundreds of SPs?
|
In Sql Server 2000/2005, I have a few NT user groups that need to be granted access to hundreds of stored procedures.
Is there a nice easy way to do that?
|
[
"\nCreate a role in sql server.\nWrite a\nscript that grants that role\npermission to use those sprocs.\nAdd those NT user groups to that role.\n\n",
"Here's a script that I use for granting permissions to lots of procedures:\nDECLARE @DB sysname ; set @DB = DB_NAME()\nDECLARE @U sysname ; set @U = QUOTENAME('UserID')\n\nDECLARE @ID integer,\n @LAST_ID integer,\n @NAME varchar(1000),\n @SQL varchar(4000)\n\nSET @LAST_ID = 0\n\nWHILE @LAST_ID IS NOT NULL\nBEGIN\n SELECT @ID = MIN(id)\n FROM dbo.sysobjects\n WHERE id > @LAST_ID AND type = 'P' AND category = 0\n\n SET @LAST_ID = @ID\n\n -- We have a record so go get the name\n IF @ID IS NOT NULL\n BEGIN\n SELECT @NAME = name\n FROM dbo.sysobjects\n WHERE id = @ID\n\n -- Build the DCL to do the GRANT\n SET @SQL = 'GRANT EXECUTE ON ' + @NAME + ' TO ' + @U\n\n -- Run the SQL Statement you just generated\n EXEC master.dbo.xp_execresultset @SQL, @DB\n\n END \nEND\n\nYou can modify the select to get to a more specific group of stored procs.\n"
] |
[
4,
1
] |
[] |
[] |
[
"security",
"sql_server",
"stored_procedures"
] |
stackoverflow_0000080291_security_sql_server_stored_procedures.txt
|
Q:
How do I use a vendor Apache with a self-compiled Perl and mod_perl?
I want to use Apple's or RedHat's built-in Apache but I want to use Perl 5.10 and mod_perl. What's the least intrusive way to accomplish this? I want the advantage of free security patching for the vendor's Apache, dav, php, etc., but I care a lot about which version of Perl I use and what's in my @INC path. I don't mind compiling my own mod_perl.
A:
Build your version of Perl 5.10 following any special instructions from the mod_perl documentation. Tell Perl configurator to install in some non-standard place, like /usr/local/perl/5.10.0
Use the instructions to build a shared library (or dynamic, or .so) mod_perl against your distribution's Apache, but make sure you run the Makefile.PL using your version of perl:
/usr/local/perl/5.10.0/bin/perl Makefile.PL APXS=/usr/bin/apxs
Install and configure mod_perl like normal.
It may be helpful, after step one, to change your path so you don't accidentially get confused about which version of Perl you're using:
export PATH=/usr/local/perl/5.10.0/bin:$PATH
A:
You'll want to look into mod_so
A:
I've done this before. It wasn't pretty, but it worked, especially since vendor perl's are usually 2-3 years old.
I started with making my own perl RPM that installed perl into a different location, like /opt/. This was pretty straight forward. I mostly started with this because I didn't want the system utilities that used perl to break when I upgraded/installed new modules. I had to modify all my scripts to specify #!/opt/bin/perl at the top and sometimes I even played with the path to make sure my perl came first.
Next, I grabbed a mod_perl source RPM and modified it to use my /opt/bin/perl instead of /usr/bin/perl. I don't have access to the changes I made, since it was at a different gig. It took me a bit of playing around to get it.
It did work, but I'm not an RPM wizard, so dependency checking didn't work out so well. For example, I could uninstall my custom RPM and break everything. It wasn't a big deal for me, so I moved on.
I was also mixing RPM's with CPAN installs of modules (did I mention we built our own custom CPAN mirror with our own code?). This was a bit fragile too. Again, I didn't have the resources (ie, time) to figure out how to bend cpan2rpm to use my perl and not cause RPM conflicts.
If I had it all to do again, I would make a custom 5.10 perl RPM and just replace the system perl. Then I would use cpan2rpm to create the RPM packages I needed for my software and compile my own mod_perl RPM.
|
How do I use a vendor Apache with a self-compiled Perl and mod_perl?
|
I want to use Apple's or RedHat's built-in Apache but I want to use Perl 5.10 and mod_perl. What's the least intrusive way to accomplish this? I want the advantage of free security patching for the vendor's Apache, dav, php, etc., but I care a lot about which version of Perl I use and what's in my @INC path. I don't mind compiling my own mod_perl.
|
[
"\nBuild your version of Perl 5.10 following any special instructions from the mod_perl documentation. Tell Perl configurator to install in some non-standard place, like /usr/local/perl/5.10.0\nUse the instructions to build a shared library (or dynamic, or .so) mod_perl against your distribution's Apache, but make sure you run the Makefile.PL using your version of perl:\n/usr/local/perl/5.10.0/bin/perl Makefile.PL APXS=/usr/bin/apxs\nInstall and configure mod_perl like normal.\n\nIt may be helpful, after step one, to change your path so you don't accidentially get confused about which version of Perl you're using:\nexport PATH=/usr/local/perl/5.10.0/bin:$PATH\n\n",
"You'll want to look into mod_so\n",
"I've done this before. It wasn't pretty, but it worked, especially since vendor perl's are usually 2-3 years old.\nI started with making my own perl RPM that installed perl into a different location, like /opt/. This was pretty straight forward. I mostly started with this because I didn't want the system utilities that used perl to break when I upgraded/installed new modules. I had to modify all my scripts to specify #!/opt/bin/perl at the top and sometimes I even played with the path to make sure my perl came first.\nNext, I grabbed a mod_perl source RPM and modified it to use my /opt/bin/perl instead of /usr/bin/perl. I don't have access to the changes I made, since it was at a different gig. It took me a bit of playing around to get it. \nIt did work, but I'm not an RPM wizard, so dependency checking didn't work out so well. For example, I could uninstall my custom RPM and break everything. It wasn't a big deal for me, so I moved on.\nI was also mixing RPM's with CPAN installs of modules (did I mention we built our own custom CPAN mirror with our own code?). This was a bit fragile too. Again, I didn't have the resources (ie, time) to figure out how to bend cpan2rpm to use my perl and not cause RPM conflicts.\nIf I had it all to do again, I would make a custom 5.10 perl RPM and just replace the system perl. Then I would use cpan2rpm to create the RPM packages I needed for my software and compile my own mod_perl RPM.\n"
] |
[
5,
1,
1
] |
[] |
[] |
[
"apache",
"mod_perl",
"perl"
] |
stackoverflow_0000079493_apache_mod_perl_perl.txt
|
Q:
SQL Server compatibility mode
We're currently running a server on Compatibility mode 8 and I want to update it.
What are the implications of just going in and changing it?
What is likely to break?
Is there anything that checks the data will survive before I perform it?
Can I rollback to mode 8 without performing a restore and without loss of data?
A:
If you're going from 80 to 90, the differences are minimal. Going from 65 to 70+ can cause severe impact (NULLs are stored differently).
Implications - your SPs can return different results than you'd expect
Likely to break: functions, SPs
Data should survive; nothing in there should affect things.
Moving from 80 to 90 and back only takes a few seconds. Yes, you can move back and forth.
http://msdn.microsoft.com/en-us/library/bb510680.aspx
some gotchas: http://mapamdug.blogspot.com/2006/03/sql-server-2005-gotcha-1.html
A:
Compatibility mode does not affect storage. It's just a flag. Nothing will change in the data or queries. Only query execution will get affected.
Nothing - or lots of things. Did you use syntax marked as obsolete and subject to deletion in 2000? Did you use parentheses when providing hints in queries? Did you use query execution hints? If yes, it's better to revise your database first, remove obsolete syntax, put the parentheses back and dig through the BOL to find which hints are going to slow down your fine-tuned query on the new engine.
No. But the data will survive. In fact, if you are able to run your database on server2005, even in mode 8, you're using new data format already.
Yes, you can roll back. It's not transforming, it's just setting a flag which says "My queries are that compatible."
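For reference, on SQL Server 2005 that flag is flipped with sp_dbcmptlevel (the database name below is a placeholder):
EXEC sp_dbcmptlevel N'YourDatabase', 90  -- SQL Server 2005 behaviour
EXEC sp_dbcmptlevel N'YourDatabase', 80  -- back to SQL Server 2000 behaviour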
A:
Compatibility mode disables the features of the newer version, personally I haven't really worked with many databases that have issues, the key thing that was a problem in our environment is after moving to 9, you can no longer use Enterprise Manager to view the database.
A backup/restore is a good option, and I also believe you can flip it back without any issues.
A:
(I did say it was only if you were moving from 6.5, which stored nothing in char() fields when NULL - 70 and greater use the whole of the field, which can cause massive size changes.)
VBStreets is right on his points - and definitely on point 3 - when you first ran the database on 2005 it converted the data structure. If you take a backup, it cannot be restored on prior versions, regardless of the compatibility level.
|
SQL Server compatibility mode
|
We're currently running a server on Compatibility mode 8 and I want to update it.
What are the implications of just going in and changing it?
What is likely to break?
Is there anything that checks the data will survive before I perform it?
Can I rollback to mode 8 without performing a restore and without loss of data?
|
[
"If you're going from 80 to 90, the differences are minimal. Going from 65 to 70+ can cause severe impact (NULLs are stored differently).\nImplications - your SPs can return different results than you'd expect\nLikely to break: functions, SPs\nData should survive; nothing in there should affect things.\nMoving from 80 to 90 and back only takes a few seconds. Yes, you can move back and forth.\nhttp://msdn.microsoft.com/en-us/library/bb510680.aspx\nsome gotchas: http://mapamdug.blogspot.com/2006/03/sql-server-2005-gotcha-1.html\n",
"\nCompatibility mode does not affect storage. It's just a flag. Nothing will change in the data or queries. Only query execution will get affected.\nNothing - or lots of things. Did you use syntax marked as obsolete and subject to deletion in 2000? Did you use parethesis when providing hints in queries? Did you use query execution hints? If yes, it's better to revise your database first, remove obsolete syntax, put the parenthesis back and dig the BOL to find which hints are going to slow down your fine-tuned query on new engine.\nNo. But the data will survive. In fact, if you are able to run your database on server2005, even in mode 8, you're using new data format already.\nYes, you can roll back. It's not transforming, it's just setting a flag which says \"My queries are that compatible.\"\n\n",
"Compatibility mode disables the features of the newer version, personally I haven't really worked with many databases that have issues, the key thing that was a problem in our environment is after moving to 9, you can no longer use Enterprise Manager to view the database.\nA backup/restore is a good option, and I also believe you can flip it back without any issues.\n",
"(I did say it was only if you were moving from 6.5, which stored nothing in char() fields when NULL - 70 and greater use the whole of the field, which can cause massive size changes.)\nVBStreets is right on his points - and definitely on point 3 - when you first ran the database on 2005 it converted the data structure. If you take a backup, it cannot be restored on prior versions, regardless of the compatibility level.\n"
] |
[
5,
3,
0,
0
] |
[] |
[] |
[
"compatibility",
"sql_server",
"upgrade"
] |
stackoverflow_0000077664_compatibility_sql_server_upgrade.txt
|
Q:
PHP/mySQL - regular recalculation of benchmark values as new users submit their data
I am confronted with a new kind of problem which I haven't encountered yet in my very young programming "career" and would like to know your opinion about how to tackle it best.
The situation
A research application (php/mysql) gathers stress-related health data from users. Users get an analysis after filling in the questionnaire. The value for each parameter is transformed into a percentile value using a benchmark (mean and standard deviation of the existing data set).
The task
Since more and more ppl are filling in the questionnaire, there is the potential to make the benchmark values (mean/SD) more accurate by recalculating them using the new user data. I would like the database to regularly run a script that updates the benchmark values.
The question
I've never used stored procedures so far and I only have a slight notion of what they are, but somehow I have a feeling they could maybe help me with this? Or should I write the script as php and then set up a cron job?
[edit]After the first couple of answers it looks like cron is clearly the way to go.[/edit]
A:
PHP set up as a cron job lets you keep it in your source code management system, and if you're using a database abstraction layer it'll be portable to other databases if you ever decide to switch. For those reasons, I tend to go with scripts over stored procedures.
A:
The easiest way to make this work is probably to write a script in the same language your website is using (sounds like PHP) and call it from cron.
No need to make it more complicated than it needs to be by putting the logic in two places (your existing calculations and a stored procedure).
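A minimal sketch of what that could look like (table, column and credential names are invented for the example, and the exact statistics you store may differ):
<?php
// update_benchmarks.php - recalculate mean/SD from all submissions so far
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'research');

$row = $db->query(
    'SELECT AVG(score) AS mean, STDDEV_POP(score) AS sd FROM questionnaire_results'
)->fetch_assoc();

$stmt = $db->prepare('UPDATE benchmarks SET mean = ?, sd = ? WHERE parameter = ?');
$parameter = 'stress_score';
$stmt->bind_param('dds', $row['mean'], $row['sd'], $parameter);
$stmt->execute();
Run nightly from the crontab, for example:
0 3 * * * /usr/bin/php /path/to/update_benchmarks.php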
A:
What you're considering could be done in a number of ways.
You could setup a trigger in your DB to recalculate the values whenever a new record is updated. You could store the code needed to update the values in a sproc if necessary.
You could write a PHP script and run it regularly via cron.
#1 will slow down inserts to your database but will make sure your data is always up to date. #2 may lock the tables while it updates the new values, and your data will only be accurate until the next update. #2 is much easier to back up, as the script can easily be stored in your versioning system, whereas you'd need to store the trigger and sproc creation scripts in whatever backup you'd make.
Obviously you'll have to weigh up your requirements before you pick a method.
A:
If the volume of data is big enough that calculating it on the fly is too much, then either:
Cron job with php script to denormalise the totals
Trigger on inserts that increments totals
A:
Go with the cron job way. Simple, solid, works. In the PHP/MySQL world I would say stored procedures are no-go.
|
PHP/mySQL - regular recalculation of benchmark values as new users submit their data
|
I am confronted with a new kind of problem which I haven't encountered yet in my very young programming "career" and would like to know your opinion about how to tackle it best.
The situation
A research application (php/mysql) gathers stress-related health data from users. Users get an analysis after filling in the questionnaire. The value for each parameter is transformed into a percentile value using a benchmark (mean and standard deviation of the existing data set).
The task
Since more and more ppl are filling in the questionnaire, there is the potential to make the benchmark values (mean/SD) more accurate by recalculating them using the new user data. I would like the database to regularly run a script that updates the benchmark values.
The question
I've never used stored procedures so far and I only have a slight notion of what they are, but somehow I have a feeling they could maybe help me with this? Or should I write the script as php and then set up a cron job?
[edit]After the first couple of answers it looks like cron is clearly the way to go.[/edit]
|
[
"PHP set up as a cron job lets you keep it in your source code management system, and if you're using a database abstraction layer it'll be portable to other databases if you ever decide to switch. For those reasons, I tend to go with scripts over stored procedures.\n",
"The easiest way to make this work is probably to write a script in the same language your website is using (sounds like PHP) and call it from cron.\nNo need to make it more complicated than it needs to be by putting the logic in two places (your existing calculations and a stored procedure).\n",
"What you're considering could be done in a number of ways.\n\nYou could setup a trigger in your DB to recalculate the values whenever a new record is updated. You could store the code needed to update the values in a sproc if necessary.\nYou could write a PHP script and run it regularly via cron.\n\n#1 will slow down inserts to your database but will make sure your data is always up to date. #2 may lock the tables while it updates the new values, and your data will only be accurate until the next update. #2 is much easier to back up, as the script can easily be stored in your versioning system, whereas you'd need to store the trigger and sproc creation scripts in whatever backup you'd make.\nObviously you'll have to weigh up your requirements before you pick a method.\n",
"If the volume of data is big enough that calculating it on the fly is too much, then either:\n\nCron job with php script to denormalise the totals\nTrigger on inserts that increments totals \n\n",
"Go with the cron job way. Simple, solid, works. In the PHP/MySQL world I would say stored procedures are no-go.\n"
] |
[
1,
1,
1,
0,
0
] |
[] |
[] |
[
"cron",
"mysql",
"php",
"stored_procedures"
] |
stackoverflow_0000083088_cron_mysql_php_stored_procedures.txt
|
Q:
Command switch to toggle Notepads word wrap
I have a customer showing Notepad with a large set of data that looks totally misaligned if word wrap is on and I want to force it off. Is there a command switch to do this?
A:
I don't think there is a command switch to do this at all. If you want to force it off all the time then you may want to edit the registry:
Hive: HKEY_CURRENT_USER
Key: SOFTWARE\Microsoft\Notepad
Name: fWrap
Type: REG_DWORD
Value: 0
You could even create a .reg file and put it in a batch file to run it and reset it every time notepad runs.
Usually though if you have word wrap turned off, when you open it up again, it will still be turned off.
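For example, a nowrap.reg file like this (the file names are just placeholders):
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Notepad]
"fWrap"=dword:00000000
...and a small batch file that applies it before launching Notepad:
regedit /s "%~dp0nowrap.reg"
start "" notepad.exe %1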
A:
you could just turn it off by going to Format -> Word Wrap.
A:
I do not believe there is any command-line option to do that.
You can however set the default behavior by setting the registry-value HKEY_CURRENT_USER\Software\Microsoft\Notepad\fWrap to 0.
Depending on your exact requirements, you might be able to solve your problem by making a bat-file that modifies the registry before starting Notepad. That would be a rather large hack, though.
A:
You could just use Wordpad instead of Notepad, it has word wrap off by default.
|
Command switch to toggle Notepads word wrap
|
I have a customer showing Notepad with a large set of data that looks totally misaligned if word wrap is on and I want to force it off. Is there a command switch to do this?
|
[
"I dont think there is a command switch to do this at all. If you want to force it off all the time then you may want to edit the registry:\nHive: HKEY_CURRENT_USER\nKey: SOFTWARE\\Microsoft\\Notepad\nName: fWrap\nType: REG_DWORD\nValue: 0\n\nYou could even create a .reg file and put it in a batch file to run it and reset it every time notepad runs.\nUsually though if you have word wrap turned off, when you open it up again, it will still be turned off.\n",
"you could just turn it off by going to Format -> Word Wrap.\n",
"I do not believe there is any command-line option to do that.\nYou can however set the default behavior by setting the registry-value HKEY_CURRENT_USER\\Software\\Microsoft\\Notepad\\fWrap to 0.\nDepending on your exact requirements, you might be able to solve your problem by making a bat-file that modifies the registry before starting Notepad. That would be a rather large hack, though.\n",
"You could just use Wordpad instead of Notepad, it has word wrap off by default.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"command_line_arguments"
] |
stackoverflow_0000083045_command_line_arguments.txt
|
Q:
MVC Retrieve Model On Every Request
Let’s say I'm developing a helpdesk application that will be used by multiple departments. Every URL in the application will include a key indicating the specific department. The key will always be the first parameter of every action in the system. For example
http://helpdesk/HR/Members
http://helpdesk/HR/Members/PeterParker
http://helpdesk/HR/Categories
http://helpdesk/Finance/Members
http://helpdesk/Finance/Members/BruceWayne
http://helpdesk/Finance/Categories
The problem is that in each action on each request, I have to take this parameter and then retrieve the Helpdesk Department model from the repository based on that key. From that model I can retrieve the list of members, categories etc., which is different for each Helpdesk Department. This obviously violates DRY.
My question is, how can I create a base controller, which does this for me so that the particular Helpdesk Department specified in the URL is available to all derived controllers, and I can just focus on the actions?
A:
I have a similar scenario in one of my projects, and I'd tend to use a ModelBinder rather than using a separate inheritance hierarchy. You can make a ModelBinder attribute to fetch the entity/entites from the RouteData:
public class HelpdeskDepartmentBinder : CustomModelBinderAttribute, IModelBinder {
public override IModelBinder GetBinder() {
return this;
}
public object GetValue(ControllerContext controllerContext, string modelName, Type modelType, ModelStateDictionary modelState) {
//... extract appropriate value from RouteData and fetch corresponding entity from database.
}
}
...then you can use it to make the HelpdeskDepartment available to all your actions:
public class MyController : Controller {
public ActionResult Index([HelpdeskDepartmentBinder] HelpdeskDepartment department) {
return View();
}
}
A:
Disclaimer: I'm currently running MVC Preview 5, so some of this may be new.
The best-practices way: Just implement a static utility class that provides a method that does the model look-up, taking the RouteData from the action as a parameter. Then, call this method from all actions that require the model.
The kludgy way, for only if every single action in every single controller needs the model, and you really don't want to have an extra method call in your actions: In your Controller-implementing-base-class, override ExecuteCore(), use the RouteData to populate the model, then call the base.ExecuteCore().
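A rough sketch of that second approach (the route key, repository and property names are assumptions for illustration, not part of the framework):
public abstract class HelpdeskControllerBase : Controller
{
    // hypothetical repository - substitute your own data access
    private readonly IHelpdeskRepository _repository = new HelpdeskRepository();

    protected HelpdeskDepartment CurrentDepartment { get; private set; }

    protected override void ExecuteCore()
    {
        // assumes a route like {department}/{controller}/{action}
        var key = (string)RouteData.Values["department"];
        CurrentDepartment = _repository.GetDepartment(key);
        base.ExecuteCore();
    }
}

public class MembersController : HelpdeskControllerBase
{
    public ActionResult Index()
    {
        return View(CurrentDepartment.Members);
    }
}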
A:
You can create a base controller class via normal C# inheritance:
public abstract class BaseController : Controller
{
}
public class DerivedController : BaseController
{
}
You can use this base class only for controllers which require a department. You do not have to do anything special to instantiate a derived controller.
Technically, this works fine. There is some risk from a design point of view, however. If, as you say, all of your controllers will require a department, this is fine. If only some of them will require a department, it might still be fine. But if some controllers require a department, and other controllers require some other inherited behavior, and both subsets intersect, then you could find yourself in a multiple inheritance problem. This would suggest that inheritance would not be the best design to solve your stated problem.
|
MVC Retrieve Model On Every Request
|
Let’s say I'm developing a helpdesk application that will be used by multiple departments. Every URL in the application will include a key indicating the specific department. The key will always be the first parameter of every action in the system. For example
http://helpdesk/HR/Members
http://helpdesk/HR/Members/PeterParker
http://helpdesk/HR/Categories
http://helpdesk/Finance/Members
http://helpdesk/Finance/Members/BruceWayne
http://helpdesk/Finance/Categories
The problem is that in each action on each request, I have to take this parameter and then retrieve the Helpdesk Department model from the repository based on that key. From that model I can retrieve the list of members, categories etc., which is different for each Helpdesk Department. This obviously violates DRY.
My question is, how can I create a base controller, which does this for me so that the particular Helpdesk Department specified in the URL is available to all derived controllers, and I can just focus on the actions?
|
[
"I have a similar scenario in one of my projects, and I'd tend to use a ModelBinder rather than using a separate inheritance hierarchy. You can make a ModelBinder attribute to fetch the entity/entites from the RouteData:\npublic class HelpdeskDepartmentBinder : CustomModelBinderAttribute, IModelBinder {\n\n public override IModelBinder GetBinder() {\n return this;\n }\n\n public object GetValue(ControllerContext controllerContext, string modelName, Type modelType, ModelStateDictionary modelState) {\n //... extract appropriate value from RouteData and fetch corresponding entity from database. \n }\n}\n\n...then you can use it to make the HelpdeskDepartment available to all your actions:\npublic class MyController : Controller {\n public ActionResult Index([HelpdeskDepartmentBinder] HelpdeskDepartment department) {\n return View();\n }\n}\n\n",
"Disclaimer: I'm currently running MVC Preview 5, so some of this may be new.\nThe best-practices way: Just implement a static utility class that provides a method that does the model look-up, taking the RouteData from the action as a parameter. Then, call this method from all actions that require the model.\nThe kludgy way, for only if every single action in every single controller needs the model, and you really don't want to have an extra method call in your actions: In your Controller-implementing-base-class, override ExecuteCore(), use the RouteData to populate the model, then call the base.ExecuteCore().\n",
"You can create a base controller class via normal C# inheritance:\npublic abstract class BaseController : Controller \n{\n}\n\npublic class DerivedController : BaseController \n{\n}\n\nYou can use this base class only for controllers which require a department. You do not have to do anything special to instantiate a derived controller.\nTechnically, this works fine. There is some risk from a design point of view, however. If, as you say, all of your controllers will require a department, this is fine. If only some of them will require a department, it might still be fine. But if some controllers require a department, and other controllers require some other inherited behavior, and both subsets intersect, then you could find yourself in a multiple inheritance problem. This would suggest that inheritance would not be the best design to solve your stated problem.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"asp.net_mvc"
] |
stackoverflow_0000068234_asp.net_mvc.txt
|
Q:
Switching from Visual SourceSafe to CVS: what features are lost in Visual Studio?
My company is using Visual SourceSafe (VSS) and Visual Studio 2005 (soon 2008). They want to switch to CVS, but the developers don't want to lose the integration we get with VSS and Visual Studio.
I know there are CVS plugins, but what functionality, if any, is lost with CVS?
A:
If you're going to switch, why not switch to something better? CVS is a long way from state of the art in version control. A more modern system like Subversion or Vault not only offers better features, but it will get you better Visual Studio integration as well.
A:
Screaming at VSS for lost source code, etc. Seriously though, it is a very different model (optimistic locking), so you will probably lose some productivity for the first little while. I would probably look at using TortoiseCVS and "Open Folder In Windows Explorer" right-click or the Visual Studio Explorer plug-in rather than a CVS plug-in if you are using Visual Studio 2008 (all of the CVS plug-ins I have tried have had either serious functionality issues, or serious stability issues).
VSS is really a terrible source control system, and moving to a modern style (optimistic locking) source control system will be a huge boon in the long run. You might want to skip the 1990s all together though and move to Subversion/Git/Mercurial and get into the 2000s.
A:
If you must switch to CVS (Subversion or a distributed VCS would be better) then the script we used to migrate and keep the change history can be found here.
We are very happy with CVS, although we don't use Visual Studio integration as we find TortoiseCVS and SmartCVS much better. However if I was switching now I would look at Git or Mercurial.
A:
My hack is as follows:
I am mainly a Java developer and I use Eclipse/RAD. The support for CVS is great and is very easy to work with.
For the C# work I do I tried to find a CVS plugin for Visual Studio but was unhappy with the one I found. In the end, I decided to use Eclipse to handle the versioning of my C# projects.
The procedure:
Create a simple project in Eclipse
Open VS and save the project into the directory created by Eclipse
Return to Eclipse, press F5 to refresh the project
Share the project (i.e. add to CVS)
Add .sln to the list of externally handled files in the Eclipse settings
VS can now be opened directly from Eclipse by clicking the .sln file, the project can be worked on within VS. Upon exit from VS the project must be refreshed in Eclipse and can be synchronised with CVS
Although I have not yet used the Subversion plugin, I guess that would work in a similar way.
This solution works well for me especially as I spend most my time in Eclipse anyway.
I did try using TortoiseCVS but found it tricky to use. Eclipse is free and the CVS interface is very usable.
A:
Visual Studio's IDE integration for CVS and SVN is poor; the free plugins don't work well. I use Tortoise (outside Visual Studio), and it works fine. If you want something inside Visual Studio, you might look at a commercial plugin or use TFS.
|
Switching from Visual SourceSafe to CVS: what features are lost in Visual Studio?
|
My company is using Visual SourceSafe (VSS) and Visual Studio 2005 (soon 2008). They want to switch to CVS, but the developers don't want to lose the integration we get with VSS and Visual Studio.
I know there are CVS plugins, but what functionality, if any, is lost with CVS?
|
[
"If you're going to switch, why not switch to something better? CVS is a long way from state of the art in version control. A more modern system like Subversion or Vault not only offers better features, but it will get you better Visual Studio integration as well.\n",
"Screaming at VSS for lost source code, etc. Seriously though, it is a very different model (optimistic locking), so you will probably lose some productivity for the first little while. I would probably look at using TortoiseCVS and \"Open Folder In Windows Explorer\" right-click or the Visual Studio Explorer plug-in rather than a CVS plug-in if you are using Visual Studio 2008 (all of the CVS plug-ins I have tried have had either serious functionality issues, or serious stability issues). \nVSS is really a terrible source control system, and moving to a modern style (optimistic locking) source control system will be a huge boon in the long run. You might want to skip the 1990s all together though and move to Subversion/Git/Mercurial and get into the 2000s. \n",
"If you must switch to CVS (Subversion or a distributed VCS would be better) then the script we used to migrate and keep the change history can be found here.\nWe are very happy with CVS, although we don't use Visual Studio integration as we find TortoiseCVS and SmartCVS much better. However if I was switching now I would look at Git or Mercurial.\n",
"My hack is as follows:\nI am mainly a Java developer and I use Eclipse/RAD. The support for CVS is great and is very easy to work with.\nFor the C# work I do I tried to find a CVS plugin for Visual Studio but was unhappy with the one I found. In the end, I decided to use Eclipse to handle the versioning of my C# projects.\nThe procedure:\n\nCreate a simple project in Eclipse\nOpen VS and save the project into the directory created by Eclipse\nReturn to Eclipse, press F5 to refresh the project\nShare the project (i.e. add to CVS)\nAdd .sln to the list of externally handled files in the Eclipse settings\nVS can now be opened directly from Eclipse by clicking the .sln file, the project can be worked on within VS. Upon exit from VS the project must be refreshed in Eclipse and can be synchronised with CVS\n\nAlthough I have not yet used the Subversion plugin, I guess that would work in a similar way.\nThis solution works well for me especially as I spend most my time in Eclipse anyway.\nI did try using TortoiseCVS but found it tricky to use. Eclipse is free and the CVS interface is very usable.\n",
"Visual Studio has a bad integration inside the IDE for CVS and SVN. Those free ones don't work well. I use Tortoise (outside Visual Studio), and it works fine. If you want something inside Visual Studio, you might check for not free plugin or to use TFS.\n"
] |
[
9,
6,
1,
0,
0
] |
[] |
[] |
[
"cvs",
"visual_sourcesafe",
"visual_studio"
] |
stackoverflow_0000052447_cvs_visual_sourcesafe_visual_studio.txt
|
Q:
Launch Local Mail Client from App Hosted on Citrix
I have a desktop application (Windows Forms) which my client hosts on a Citrix server. I would like to launch the user's locally configured mail client to send mail from my application. How do I do this?
In addition to this, I will need to attach a file to the email before it is sent.
A:
I'm not at all sure how the Citrix client would handle mailto: links (or if you can configure that), but if you haven't tried them, I suggest you do.
Example:
mailto:[email protected]?subject=hello&body=see+attachment&attachment=\\host\path-to\file.foo
Also note that not all email clients support the attachment parameter in mailto URLs.
|
Launch Local Mail Client from App Hosted on Citrix
|
I have a desktop application (Windows Forms) which my client hosts on a Citrix server. I would like to launch the user's locally configured mail client to send mail from my application. How do I do this?
In addition to this, I will need to attach a file to the email before it is sent.
|
[
"I'm not at all sure how the Citrix client would handle mailto: links (or if you can configure that), but if you haven't tried them, I suggest you do.\nExample:\nmailto:[email protected]?subject=hello&body=see+attachment&attachment=\\\\host\\path-to\\file.foo\n\nAlso note that not all email clients support the attachment parameter in mailto URLs.\n"
] |
[
1
] |
[] |
[] |
[
"citrix"
] |
stackoverflow_0000082266_citrix.txt
|
Q:
Why do we use extra expression?
Here is a sample from Kernighan & Ritchie's "The C Programming Language":
int getline(char s[], int lim)
{
int c, i = 0;
while (--lim > 0; && (c=getchar()) !=EOF && c !='\n')
{
s[i++] = c;
}
if (c =='\n')
{
s[i++] = c;
}
s[i] = '\0';
return i;
}
Why should we check whether c != '\n', given that we use s[i++] = c after that anyway?
A:
The function reads characters from the standard input until either EOF or a newline character is found.
The second check ensures that only a newline character (and not EOF) is put into the char array. EOF shouldn't occur in a proper c-string. Also, if the character isn't a newline, that means we might have filled up our c-string, in which case we shouldn't put any more characters into it.
Notice we still append the '\0'. We've ensured that there's still room for one more character in our c-string, as we use the prefix decrement, which evaluates before the comparison.
A:
The comparison is to ensure readline terminates when it encounters a newline character (the '\n'). On the iteration where it does, it terminates without adding the newline to the string, so the statement after that ensures that the string is always newline terminated, even if one of the other termination conditions was reached.
A:
There is a bug in the code.
If the size of s is N bytes and the user types a newline as the (N-1)th character, the Nth character will become a '\n' and the (N+1)th character (which is not allocated) will become a '\0'.
A:
You do that just to exit the while loop on new line. Else you would have to check it in while body and use break.
A:
That ensures that you stop at the end of the line even if it's not the end of the input. Then if there is a newline the \n is added to the end of the line and i incremented one more time to avoid overwriting it with the \0.
A:
int getline(char s[], int lim)
{
int c, i;
i=0;
/* While staying withing limit and there is a char in stdin and it's not new line sign */
while (--lim > 0; && (c=getchar()) !=EOF && c !='\n')
/* Store char at the current position in array, advance current pos by one */
s[i++] = c;
/* If While loop stopped on new-line, store it in array, advance current pos by one */
if (c =='\n')
s[i++] = c;
/* finally terminate string with \0 */
s[i] = '\0';
return i;
}
A:
I'm not sure whether I understand the question. c !='\n' is used to stop reading the line when the end of line (linefeed) occurs. Otherwise we would always read it until the limit even if it ends before. The first s[i++] = c; in the while-loop doesn't occur if a linefeed has been reached. That's why there is the special test afterwards and the other s[i++] = c; in case it was a linefeed which broke the loop.
A:
Not answering your question, but I'll write some comments anyway:
I don't remember all K&R rules, but the function you've listed will fail if lim is equal to one. Then you won't run the loop, which leaves c uninitialised, but you'll still use the variable in the if (c == '\n') check.
Also the while (--lim > 0; ...) thing will not go through the compiler. Remove the ';' and it does.
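Putting those two fixes together, here is a compilable sketch (my correction of the code as quoted in the question, not the book's exact text; initialising c also covers the lim == 1 case mentioned above):
#include <stdio.h>

int getline_fixed(char s[], int lim)
{
    int c = EOF;   /* initialised so the final check is defined even if the loop never runs */
    int i = 0;

    while (--lim > 0 && (c = getchar()) != EOF && c != '\n')
        s[i++] = c;
    if (c == '\n')
        s[i++] = c;
    s[i] = '\0';
    return i;
}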
|
Why do we use extra expression?
|
Here is a sample from Kernighan & Ritchie's "The C Programming Language":
int getline(char s[], int lim)
{
int c, i = 0;
while (--lim > 0; && (c=getchar()) !=EOF && c !='\n')
{
s[i++] = c;
}
if (c =='\n')
{
s[i++] = c;
}
s[i] = '\0';
return i;
}
Why should we check whether c != '\n', given that we use s[i++] = c after that anyway?
|
[
"The functions reads characters from the standard input until either EOF or a newline characters is found. \nThe second check ensures that the only newline character is put into the char array. EOF shouldn't occur in a proper c-string. Also, if the character isn't newline that means that we might have filled up our c-string, in which case we shouldn't put any more characters into it. \nNotice we still append the '\\0'. We've ensured that theres still room for one more character in our c-string, as we use the pre-fix decrementor, which evaluates before the comparison.\n",
"The comparison is to ensure readline terminates when it encounters a newline character (the '\\n'). On the iteration where it does, it terminates without adding the newline to the string, so the statement after that ensures that the string is always newline terminated, even if one of the other termination conditions was reached.\n",
"There is a bug in the code.\nIf the size of s is N bytes and the user types a newline as the (N-1)th character, the Nth character will become a '\\n' and the (N+1)th character (which is not allocated) will become a '\\0'.\n",
"You do that just to exit the while loop on new line. Else you would have to check it in while body and use break.\n",
"That ensures that you stop at the end of the line even if it's not the end of the input. Then if there is a newline the \\n is added to the end of the line and i incremented one more time to avoid overwriting it with the \\0.\n",
"int getline(char s[], int lim)\n{\n int c, i;\n i=0;\n /* While staying withing limit and there is a char in stdin and it's not new line sign */\n while (--lim > 0; && (c=getchar()) !=EOF && c !='\\n')\n /* Store char at the current position in array, advance current pos by one */\n s[i++] = c;\n /* If While loop stopped on new-line, store it in array, advance current pos by one */\n if (c =='\\n') \n s[i++] = c;\n /* finally terminate string with \\0 */\n s[i] = '\\0';\n return i;\n}\n\n",
"I'm not sure whether I understand the question. c !='\\n' is used to stop reading the line when the end of line (linefeed) occurs. Otherwise we would always read it until the limit even if it ends before. The first s[i++] = c; in the while-loop doesn't occur if a linefeed has been reached. That's why there is the special test afterwards and the other s[i++] = c; in case it was a linefeed which broke the loop.\n",
"Not answering your question, but I'll write some comments anyway:\nI don't remember all K&R rules, but the function you've listed will fail if lim is equal to one. Then you won't run the loop which leaves c unintialised, but you'll still use the variable in the if (c == '\\n') check.\nAlso the while (--lm > 0; ...) thing will not go through the compiler. Remove the ';' and it does.\n"
] |
[
3,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0000071913_c.txt
|
Q:
Are there any guidelines for designing user interface for mobile devices?
I am creating an application for a Windows Mobile computer. The catch is that the device (Motorola MC17) does not have a touch screen or universal keys - there are only six programmable hardware keys. Fitt's law is not applicable here, most Microsoft guidelines are also moot. For now I'm mimicking Nokia's S60 keyboard layout as close as possible, since it's the most popular phone platform among my target audience.
Are there any guidelines for creating a simple, discoverable user interface on such a constrained device? What fonts and colours should I use to make my UI readable? How do I measure if the items on-screen are big enough? What conventions should I follow?
A:
Guidelines for Handheld & Mobile Device User Interface:
While there has been much successful work in developing rules to guide the design and implementation of interfaces for desktop machines and their applications, the design of mobile device interfaces is still relatively unexplored and unproven. This paper discusses the characteristics and limitations of current mobile device interfaces, especially compared to the desktop environment. Using existing interface guidelines as a starting point, a set of practical design guidelines for mobile device interface is proposed.
A:
Microsoft has an official set of Guidelines for getting the "Designed for Windows Mobile" logo. These are a reasonable start as they not only cover one-handed (no touchscreen) operation, they also help your app to maintain familiarity for users.
Some other resources discussing the topic:
The WinMo team blog entry on one-handed navigation
Mark Arteaga's article on stylus-free apps
|
Are there any guidelines for designing user interface for mobile devices?
|
I am creating an application for a Windows Mobile computer. The catch is that the device (Motorola MC17) does not have a touch screen or universal keys - there are only six programmable hardware keys. Fitt's law is not applicable here, most Microsoft guidelines are also moot. For now I'm mimicking Nokia's S60 keyboard layout as close as possible, since it's the most popular phone platform among my target audience.
Are there any guidelines for creating a simple, discoverable user interface on such a constrained device? What fonts and colours should I use to make my UI readable? How do I measure if the items on-screen are big enough? What conventions should I follow?
|
[
"Guidelines for Handheld & Mobile Device User Interface:\n\nWhile there has been much successful work in developing rules to guide the design and implementation of interfaces for desktop machines and their applications, the design of mobile device interfaces is still relatively unexplored and unproven. This paper discusses the characteristics and limitations of current mobile device interfaces, especially compared to the desktop environment. Using existing interface guidelines as a starting point, a set of practical design guidelines for mobile device interface is proposed.\n\n",
"Microsoft has an official set of Guidelines for getting the \"Designed for Windows Mobile\" logo. These are a reasonable start as they not only cover one-handed (no touchscreen) operation, they also help your app to maintain familiarity for users.\nSome other resources discussing the topic:\n\nThe WinMo team blog entry on one-handed navigation\nMark Arteaga's article on stylus-free apps\n\n"
] |
[
3,
3
] |
[] |
[] |
[
"usability",
"user_interface",
"windows_mobile"
] |
stackoverflow_0000037783_usability_user_interface_windows_mobile.txt
|
Q:
VS 2008 vs VS 2008 Express
I'm using Visual Studio Team System 2008 at work to do web development. I've gotten quite used to it but can't really afford to purchase even VS 2008 Standard at this time.
I have never used any of the Express editions before but I was thinking about downloading VS C# Express and VS Web Developer Express.
Am I wasting my time or can I do some serious development with these tools?
A:
You can do serious development on the express editions. They have taken out a few things, most notably the plug-in system. If you are used to using a bunch of plug-ins you may find that not being able to use them is a deterrent.
Here is a link to a comparison of the express edition and the other editions.
http://msdn.microsoft.com/en-us/library/zcbsd3cz(VS.80).aspx
A:
You can indeed do serious development using the Visual Studio 2008 express editions, this includes commercial products see question number 7 in the FAQ which says:
Seven) Can I use Express Editions for commercial use?
Yes, there are no licensing restrictions for applications built using Visual Studio Express Editions.
The feature matrix shows that you do lose some functionality between the Pro and Express editions. The single biggest issue is that there is no add-in support (and adding it is forbidden by the EULA), which rules out many nice additions to the environment such as ReSharper, VisualAssist, etc.
You also don't get a "Studio" but four individual editions, Web Developer, VB, VC++ and C#, if you wish to mix and match languages/projects in the way that the Standard/Professional Editions support then you are out of luck. Under the surface however, MSBuild is available and can provide you with multi-language solutions.
A:
Express Editions works fine if you do not want to have different project types/languages in a solution, and you have no need for builtin source control.
Else, it's pretty much the same.
A:
Here is a detailed list of available features on different editions of Visual Studio : Product Comparison
A:
You can find a comparison of the features in the various editions of Visual Studio 2008 here. The things that I find most annoying about the express edition are that you can't have multiple projects in a solution file, and you can't use add-ons like Resharper.
A:
It depends how you define "serious development". One big thing missing from the express (and even standard) editions is support for mobile development. You also miss the convenience of grouping different project types in a solution.
I think you also miss some of the project types (windows services, Sql Server/CLR projects come to mind) in the Express edition.
A:
I haven't afforded the full version of VS2008 at home yet so I have Express and use it for some intermediate application development (no web stuff). I find it quite good enough, it's got most of the stuff I use. I tried SharpDevelop but it wouldn't allow more than one start up project so I ditched it for Express.
Most plugins don't seem to work in the Express versions if that's an issue for you.
A:
You actually CAN do commercial work with the VS 2008 Express editions.
See the answer to question #7 of the FAQ at this link:
http://www.microsoft.com/express/support/faq/
A:
You can download the Professional version of VS2008 for free (if you have a .edu address) via Microsoft Dreamspark.
For that matter, the (fully-functional) 90-day trial of both VS2005 and VS2008 Pro... can be "extended"... indefinitely... by setting your system clock back, but no real reason to do that.
Express is fine for being a "lite" version but is hobbled in all sorts of ways. For anything serious, get the real thing.
A:
I do serious work using the Express editions. I'm not a professional programmer since I moved into management, but I still keep my hand in writing the occasional utility or web page. The only thing I've missed from the professional versions is remote web debugging.
|
VS 2008 vs VS 2008 Express
|
I'm using Visual Studio Team System 2008 at work to do web development. I've gotten quite used to it but can't really afford to purchase even VS 2008 Standard at this time.
I have never used any of the Express editions before but I was thinking about downloading VS C# Express and VS Web Developer Express.
Am I wasting my time or can I do some serious development with these tools?
|
[
"You can do serious development on the express editions. They have taken out a few things most notably the plug in system. If you are use to using a bunch of plug ins you may find that not being able to use them is a deterrent.\nHere is a link to a comparison of the express edition and the other editions.\nhttp://msdn.microsoft.com/en-us/library/zcbsd3cz(VS.80).aspx\n",
"You can indeed do serious development using the Visual Studio 2008 express editions, this includes commercial products see question number 7 in the FAQ which says:\n\nSeven) Can I use Express Editions for commercial use?\n Yes, there are no\n licensing restrictions for\n applications built using Visual Studio\n Express Editions.\n\nThe feature matrix shows that whilst you do lose some functionality between the Pro and Express Editions. The single biggest issue being that there is no add-in support (and adding it is forbidden by the EULA) which limits many nice additions to the environment such as ReSharper, VisualAssist, etc.\nYou also don't get a \"Studio\" but four individual editions, Web Developer, VB, VC++ and C#, if you wish to mix and match languages/projects in the way that the Standard/Professional Editions support then you are out of luck. Under the surface however, MSBuild is available and can provide you with multi-language solutions.\n",
"Express Editions works fine if you do not want to have different project types/languages in a solution, and you have no need for builtin source control. \nElse, it's pretty much the same.\n",
"Here is a detailed list of available features on different editions of Visual Studio : Product Comparison\n",
"You can find a comparison of the features in the various editions of Visual Studio 2008 here. The things that I find most annoying about the express edition are that you can't have multiple projects in a solution file, and you can't use add-ons like Resharper.\n",
"It depends how you define \"serious development\". One big thing missing from the express (and even standard) editions is the lack of support for mobile development. You also miss the convenience of grouping different project types in a solution.\nI think you also miss some of the project types (windows services, Sql Server/CLR projects come to mind) in the Express edition.\n",
"I haven't afforded the full version of VS2008 at home yet so I have Express and use it for some intermediate application development (no web stuff). I find it quite good enough, it's got most of the stuff I use. I tried SharpDevelop but it wouldn't allow more than one start up project so I ditched it for Express.\nMost plugins don't seem to work in the Express versions if that's an issue for you.\n",
"You actually CAN do commercial work with the VS 2008 Express editions. \nSee the answer to question #7 of the FAQ at this link:\nhttp://www.microsoft.com/express/support/faq/\n",
"You can download the Professional version of VS2008 for free (if you have a .edu address) via Microsoft Dreamspark.\nFor that matter, the (fully-functional) 90-day trial of both VS2005 and VS2008 Pro... can be \"extended\"... indefinitely... by setting your system clock back, but no real reason to do that.\nExpress is fine for being a \"lite\" version but is hobbled in all sorts of ways. For anything serious, get the real thing.\n",
"I do serious work using the Express editions. I'm not a professional programmer since I moved into management, but I still keep my hand in writing the occasional utility or web page. The only thing I've missed from the professional versions is remote web debugging. \n"
] |
[
13,
3,
2,
1,
1,
1,
1,
1,
1,
1
] |
[] |
[] |
[
"visual_studio_2008",
"visual_studio_express"
] |
stackoverflow_0000083086_visual_studio_2008_visual_studio_express.txt
|
Q:
What is the best strategy for code chunks and macros in vim?
As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse.
I was thinking of making a separate file for each code chunk and just reading them in with
:r code-fornext
but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward.
What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it.
Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them.
Does vim have anything like these two functionalities?
A:
To record macros in Vim, in the command mode, hit the q key and another key you want to assign the macro to. For quick throw away macros I usually just hit qq and assign the macro to the q key. Once you are in recording mode, run through your key strokes. When you are done make sure you are back in command mode and hit q again to stop recording. Then to replay the macro manually, you can type @q. To replay the previously run macro you can type @@ or to run it 10 times you could type 10@q or 20@q, etc..
In summary:
+----------------------------------+-------------------------------------+
| start recording a macro | qX (X = key to assign macro to) |
+----------------------------------+-------------------------------------+
| stop recording a macro | q |
+----------------------------------+-------------------------------------+
| playback macro | @X (X = key macro was assigned to) |
+----------------------------------+-------------------------------------+
| replay previously played macro | @@ |
+----------------------------------+-------------------------------------+
In regards to code chunks, I have found and started using a Vim plug-in called snipMate, which mimics TextMate's snippets feature. You can get the plug-in here:
http://www.vim.org/scripts/script.php?script_id=2540
And a short article on using snipMate (along with a short screencast showing it in use):
http://www.catonmat.net/blog/vim-plugins-snipmate-vim/
Hope you find this helpful!
A:
On vim.wikia, you will find a category of tips dedicated to snippets and abbreviations expansion. You will also see a list of plugins that ease the definition of complex snippets/templates-files.
HTH,
|
What is the best strategy for code chunks and macros in vim?
|
As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse.
I was thinking of making a separate file for each code chunk and just reading them in with
:r code-fornext
but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward.
What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it.
Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them.
Does vim have anything like these two functionalities?
|
[
"To record macros in Vim, in the command mode, hit the q key and another key you want to assign the macro to. For quick throw away macros I usually just hit qq and assign the macro to the q key. Once you are in recording mode, run through your key strokes. When you are done make sure you are back in command mode and hit q again to stop recording. Then to replay the macro manually, you can type @q. To replay the previously run macro you can type @@ or to run it 10 times you could type 10@q or 20@q, etc..\nIn summary:\n+----------------------------------+-------------------------------------+\n| start recording a macro | qX (X = key to assign macro to) |\n+----------------------------------+-------------------------------------+\n| stop recording a macro | q | \n+----------------------------------+-------------------------------------+\n| playback macro | @X (X = key macro was assigned to) |\n+----------------------------------+-------------------------------------+\n| replay previously played macro | @@ |\n+----------------------------------+-------------------------------------+\n\nIn regards to code chunks, I have found and started using a Vim plug-in called snipMate, which mimics TextMate's snippets feature. You can get the plug-in here:\nhttp://www.vim.org/scripts/script.php?script_id=2540\nAnd a short article on using snipMate (along with a short screencast showing it in use):\nhttp://www.catonmat.net/blog/vim-plugins-snipmate-vim/\nHope you find this helpful! \n",
"On vim.wikia, you will find a category of tips dedicated to snippets and abbreviations expansion. You will also see a list of plugins that ease the definition of complex snippets/templates-files.\nHTH,\n"
] |
[
8,
0
] |
[] |
[] |
[
"vi",
"vim"
] |
stackoverflow_0000054401_vi_vim.txt
|
Q:
Computationally efficient three dimensional arrays in C
I am trying to solve numerically a set of partial differential equations in three dimensions. In each of the equations the next value of the unknown in a point depends on the current value of each unknown in the closest points.
To write efficient code I need to keep points that are close in the three dimensions close together in the (one-dimensional) memory space, so that each value is fetched from memory just once.
I was thinking of using octtrees, but I was wondering if someone knows a better method.
A:
Octtrees are the way to go. You subdivide the array into 8 octants:
1 2
3 4
---
5 6
7 8
And then lay them out in memory in the order 1, 2, 3, 4, 5, 6, 7, 8 as above. You repeat this recursively within each octant until you get down to some base size, probably around 128 bytes or so (this is just a guess -- make sure to profile to determine the optimal cutoff point). This has much, much better cache coherency and locality of reference than the naive layout.
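As an illustration of the layout (my sketch, not part of the original answer; the names and the block size parameter are arbitrary), a single level of this blocking can be written as an index function in C. The recursive octant layout just applies the same split again inside each block:
#include <stddef.h>

/* Flat index of (x, y, z) in an N*N*N array stored as contiguous B*B*B blocks.
   Assumes N is a multiple of B. */
size_t blocked_index(size_t x, size_t y, size_t z, size_t N, size_t B)
{
    size_t nb = N / B;                              /* blocks per dimension */
    size_t bx = x / B, by = y / B, bz = z / B;      /* which block we are in */
    size_t ox = x % B, oy = y % B, oz = z % B;      /* offset inside that block */
    size_t block  = (bz * nb + by) * nb + bx;       /* block number, row-major over blocks */
    size_t offset = (oz * B  + oy) * B  + ox;       /* offset, row-major inside the block */
    return block * (B * B * B) + offset;
}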
A:
One alternative to the tree-method: Use the Morton-Order to encode your data.
In three dimensions it goes like this: Take the coordinate components and interleave each bit with two zero bits. Here shown in binary: 11111b becomes 1001001001001b
A C-function to do this looks like this (shown for clarity and only for 11 bits):
int morton3 (int a)
{
int result = 0;
int i;
for (i=0; i<11; i++)
{
// check if the i'th bit is set.
int bit = a&(1<<i);
if (bit)
{
// if so set the 3*i'th bit in the result:
result |= 1<<(i*3);
}
}
return result;
}
You can use this function to combine your positions like this:
index = morton3 (position.x) +
morton3 (position.y)*2 +
morton3 (position.z)*4;
This turns your three dimensional index into a one dimensional one. Best part of it: Values that are close in 3D space are close in 1D space as well. If you access values close to each other frequently you will also get a very nice speed-up because the morton-order encoding is optimal in terms of cache locality.
For morton3 you better not use the code above. Use a small table to look up 4 or 8 bits at a time and combine them together.
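As an illustration of that table approach (my sketch, not the answer author's code; it assumes the 11-bit coordinates used above, so the result still fits in 32 bits):
static unsigned int morton3_table[256];     /* 8 input bits spread to every 3rd bit */

void morton3_init (void)
{
    int b;
    for (b = 0; b < 256; b++)
        morton3_table[b] = morton3 (b);     /* fill once with the loop version above */
}

unsigned int morton3_fast (unsigned int a)
{
    return  morton3_table[ a       & 0xff]           /* low 8 bits */
         | (morton3_table[(a >> 8) & 0xff] << 24);   /* remaining high bits, shifted past them */
}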
Hope it helps,
Nils
A:
The book Foundations of Multidimensional and Metric Data Structures can help you decide which data structure is fastest for range queries: octrees, kd-trees, R-trees, ...
It also describes data layouts for keeping points together in memory.
|
Computationally efficient three dimensional arrays in C
|
I am trying to solve numerically a set of partial differential equations in three dimensions. In each of the equations the next value of the unknown in a point depends on the current value of each unknown in the closest points.
To write efficient code I need to keep points that are close in the three dimensions close together in the (one-dimensional) memory space, so that each value is fetched from memory just once.
I was thinking of using octtrees, but I was wondering if someone knows a better method.
|
[
"Octtrees are the way to go. You subdivide the array into 8 octants:\n\n1 2\n3 4\n\n---\n\n5 6\n7 8\n\nAnd then lay them out in memory in the order 1, 2, 3, 4, 5, 6, 7, 8 as above. You repeat this recursively within each octant until you get down to some base size, probably around 128 bytes or so (this is just a guess -- make sure to profile to determine the optimal cutoff point). This has much, much better cache coherency and locality of reference than the naive layout.\n",
"One alternative to the tree-method: Use the Morton-Order to encode your data.\nIn three dimension it goes like this: Take the coordinate components and interleave each bit two zero bits. Here shown in binary: 11111b becomes 1001001001b\nA C-function to do this looks like this (shown for clarity and only for 11 bits):\nint morton3 (int a)\n{\n int result = 0;\n int i;\n for (i=0; i<11; i++)\n {\n // check if the i'th bit is set.\n int bit = a&(1<<i);\n if (bit)\n {\n // if so set the 3*i'th bit in the result:\n result |= 1<<(i*3);\n }\n }\n return result;\n}\n\nYou can use this function to combine your positions like this:\nindex = morton3 (position.x) + \n morton3 (position.y)*2 +\n morton3 (position.z)*4;\n\nThis turns your three dimensional index into a one dimensional one. Best part of it: Values that are close in 3D space are close in 1D space as well. If you access values close to each other frequently you will also get a very nice speed-up because the morton-order encoding is optimal in terms of cache locality.\nFor morton3 you better not use the code above. Use a small table to look up 4 or 8 bits at a time and combine them together. \nHope it helps,\n Nils\n",
"The book Foundations of Multidimensional and Metric Data Structures can help you decide which data structure is fastest for range queries: octrees, kd-trees, R-trees, ...\nIt also describes data layouts for keeping points together in memory.\n"
] |
[
5,
5,
3
] |
[] |
[] |
[
"arrays",
"c",
"multidimensional_array",
"numerical"
] |
stackoverflow_0000076076_arrays_c_multidimensional_array_numerical.txt
|
Q:
Removing static file cachebusting in rails
I have a rails application which is still showing the cachebusting numeric string at the end of the URL for static mode, even though I have put it into the production environment. Can someone tell me what config option I need to set to prevent this behaviour...
A:
That number isn't there to break the cache during day-to-day operations. At least in theory, proxy servers are allowed to cache HTTP GET requests (as long as the parameters remain the same).
Instead, that number is there to allow you to smoothly upgrade your CSS and JavaScript files from one version to the next. As I understand it, it's supposed to remain on in production mode. The numbers should only change when the timestamps on your files change.
Are you seeing common proxy servers that completely fail to cache any HTTP GET request with a single parameter?
A:
To disable the ?timestamp cache busting in production add this to your config/environments/production.rb
ENV['RAILS_ASSET_ID'] = ''
If you want to dig deeper into what this does, check out asset_tag_helper.rb in the ActionPack gem, line 527 (ish)
|
Removing static file cachebusting in rails
|
I have a rails application which is still showing the cachebusting numeric string at the end of the URL for static mode, even though I have put it into the production environment. Can someone tell me what config option I need to set to prevent this behaviour...
|
[
"That file isn't there to break the cache during day-to-day operations. At least in theory, proxy servers are allowed to cache HTTP GET requests (as long the parameters remain the same).\nInstead, that number is there to allow you to smoothly upgrade your CSS and JavaScript files from one version to the next. As I understand it, it's supposed to remain on in production mode. The numbers should only change when the timestamps on your files change.\nAre you seeing common proxy servers that completely fail to cache any HTTP GET request with a single parameter?\n",
"To disable the ?timestamp cache busting in production add this to your config/environments/production.rb\nENV['RAILS_ASSET_ID'] = ''\n\nIf you want to dig deeper into what this does, check out asset_tag_helper.rb in the ActionPack gem, line 527 (ish)\n"
] |
[
4,
2
] |
[] |
[] |
[
"caching",
"ruby_on_rails"
] |
stackoverflow_0000083058_caching_ruby_on_rails.txt
|
Q:
Which database table Schema is more efficient?
Which Database table Schema is more efficient and why?
"Users (UserID, UserName, CompamyId)"
"Companies (CompamyId, CompanyName)"
OR
"Users (UserID, UserName)"
"Companies (CompamyId, CompanyName)"
"UserCompanies (UserID, CompamyId)"
Given the fact that user and company have one-to-one relation.
A:
Well, that's a bit of an open-ended question and depends on your business rules. The first option you have only allows one company to be mapped to one user: you're defining a many-to-one relationship.
The second schema defines a many-to-many relationship which allows multiple users to be mapped to multiple companies.
They solve different problems and depending on what you're trying to solve will determine what schema you should use.
Strictly speaking from a "transactions" point of view, the first schema will be quicker because you only have to commit one row for a user object to be associated with a company, and retrieving the company that your user works for requires only one join; however, the second solution will scale better if your business requirements change and require you to have multiple companies assigned to a user.
A:
For sure, the earlier one is more efficient given that constraint. For getting the same information, you will have fewer joins in your queries.
A:
As always it depends. I would personally go with answer number one since it has fewer joins and would be easier to maintain. Fewer joins should mean fewer table and index scans.
SELECT u.userid, u.username, c.companyid, c.companyname
FROM companies c, users u
WHERE u.companyid = c.companyid
Is much better than...
SELECT u.userid, u.username, c.companyid, c.companyname
FROM companies c, users u, usercompanies uc
WHERE u.userid = uc.userid
AND c.companyid = uc.companyid
A:
The two schemas cannot be compared, as they have different relationships; you should probably look at what the spec is for the tables and then work out which one fits the relationship needed.
The first one implies that a User can only be a member of one company (a belongs_to relationship). Whereas the second schema implies that a User can be a member of many companies (a has_many relationship)
If you are looking for a schema that can (or will later) support a has_many relationship then you want to go with the second one. For the reason compare:
//select all users in company x with schema 1
select username, companyname from companies
inner join users on users.companyid = companies.companyid
where companies.companyid = __some_id__;
and
//select all users in company x with schema 2
select username, companyname from companies
inner join usercompanies on usercompanies.companyid = companies.companyid
inner join users on usercompanies.userid = users.userid
where companies.companyid = __some_id__;
You have an extra join on the select table. If you only want the belongs_to relationship then the second query does more work than it should - and so makes it less efficient.
A:
I think you mean "many to one" when it comes to users and companies - unless you plan on having a unique company for each user.
To answer your question, go with the first approach. One less table to store reduces space and will make your queries use less JOIN commands. Also, and more importantly, it correctly matches your desired input. The database schema should describe the format for all valid data - if it fits the format it should be considered valid. Since a user can only have one company it's possible to have incorrect data in your database if you use the second schema.
A:
If User and Company really have a one-to-one relationship, then you only need one table:
(ID, UserName, CompanyName)
But I suspect you really meant that there is a one-to-many relationship between user and company - one or more users per company but only one company per user. In that case the two-table solution is correct.
If there is a many-to-many relationship (a company can have several users and a user can be attached to several companies), then the three-table solution is correct.
Note that efficiency is not really the issue here. Its the nature of the data that dictates which solution you should use.
|
Which database table Schema is more efficient?
|
Which Database table Schema is more efficient and why?
"Users (UserID, UserName, CompamyId)"
"Companies (CompamyId, CompanyName)"
OR
"Users (UserID, UserName)"
"Companies (CompamyId, CompanyName)"
"UserCompanies (UserID, CompamyId)"
Given the fact that user and company have one-to-one relation.
|
[
"well that's a bit of an open ended question and depends on your business rules. The first option you have only allows one company to be mapped to one user. you're defining a many-to-one relationship.\nThe second schema defines a many-to-many relationship which allows multiple users to be mapped to multiple companies. \nThey solve different problems and depending on what you're trying to solve will determine what schema you should use.\nStrictly speaking from a \"transactions\" point of view, the first schema will be quicker because you only have to commit one row for a user object to be associated to a company and to retrieve the company that your user works for requires only one join, however the second solution will scale better if your business requirements change and require you to have multiple companies assigend to a user.\n",
"For sure, the earlier one is more efficient given that constraint. For getting the same information, you will have less number of joins in your queries.\n",
"As always it depends. I would personally go with answer number one since it would have less joins and would be easier to maintain. Less joins should mean that it requires less table and index scans.\nSELECT userid, username, companyid, companyname\nFROM companies c, users u\nWHERE userid = companyid\n\nIs much better than...\nSELECT userid, username, companyid, companyname\nFROM companies c, users u, usercompanies uc\nWHERE u.userid = uc.userid\nAND c.companyid = uc.companyid\n\n",
"The two schemas cannot be compared, as they have different relationships, you should proablly look at what the spec is for the tables and then work out which one fits the relationship needed.\nThe first one implies that a User can only be a member of one company (a belongs_to relationship). Whereas the second schema implies that a User can be a member of many companies (a has_many relationship)\nIf you are looking for a schema that can (or will later) support a has_many relationship then you want to go with the second one. For the reason compare:\n\n//select all users in company x with schema 1\nselect username, companyname from companies\ninner join users on users.companyid = companies.companyid\nwhere companies.companyid = __some_id__;\n\nand\n\n//select all users in company x with schema 2\nselect username, companyname from companies\ninner join usercompanies on usercompanies.companyid = companies.companyid\ninner join users on usercompanies.userid = users.userid\nwhere companies.companyid = __some_id__;\n\nYou have an extra join on the select table. If you only want the belongs_to relationship then the second query does more work than it should - and so makes it less efficient.\n",
"I think you mean \"many to one\" when it comes to users and companies - unless you plan on having a unique company for each user.\nTo answer your question, go with the first approach. One less table to store reduces space and will make your queries use less JOIN commands. Also, and more importantly, it correctly matches your desired input. The database schema should describe the format for all valid data - if it fits the format it should be considered valid. Since a user can only have one company it's possible to have incorrect data in your database if you use the second schema.\n",
"If User and Company really have a one-to-one relationship, then you only need one table:\n(ID, UserName, CompanyName)\n\nBut I suspect you really meant that there is a one-to-many relationship between user and company - one or more users pr company but only one company pr user. In that case the two-table solution is correct. \nIf there is a many-to-many relationship (a company can have several users and a user can be attached to several companies), then the three-table solution is correct.\nNote that efficiency is not really the issue here. Its the nature of the data that dictates which solution you should use.\n"
] |
[
7,
6,
1,
1,
0,
0
] |
[] |
[] |
[
"database_design"
] |
stackoverflow_0000038791_database_design.txt
|
Q:
How can I get the unique values of an array in .net?
Say I've got this array:
MyArray(0)="aaa"
MyArray(1)="bbb"
MyArray(2)="aaa"
Is there a .net function which can give me the unique values? I would like something like this as an output of the function:
OutputArray(0)="aaa"
OutputArray(1)="bbb"
A:
Assuming you have .Net 3.5/LINQ:
string[] OutputArray = MyArray.Distinct().ToArray();
A:
A solution could be to use LINQ as in the following example:
int[] test = { 1, 2, 1, 3, 3, 4, 5 };
var res = (from t in test select t).Distinct<int>();
foreach (var i in res)
{
Console.WriteLine(i);
}
That would print the expected:
1
2
3
4
5
A:
You could use a dictionary to add them with a key, and when you add them check if the key already exists.
string[] myarray = new string[] { "aaa", "bbb", "aaa" };
Dictionary<string, string> mydict = new Dictionary<string, string>();
foreach (string s in myarray) {
    if (!mydict.ContainsKey(s)) mydict.Add(s, s);
}
A:
Use the HashSet class included in .NET 3.5.
|
How can I get the unique values of an array in .net?
|
Say I've got this array:
MyArray(0)="aaa"
MyArray(1)="bbb"
MyArray(2)="aaa"
Is there a .net function which can give me the unique values? I would like something like this as an output of the function:
OutputArray(0)="aaa"
OutputArray(1)="bbb"
|
[
"Assuming you have .Net 3.5/LINQ:\nstring[] OutputArray = MyArray.Distinct().ToArray();\n\n",
"A solution could be to use LINQ as in the following example:\nint[] test = { 1, 2, 1, 3, 3, 4, 5 };\nvar res = (from t in test select t).Distinct<int>();\nforeach (var i in res)\n{\n Console.WriteLine(i);\n}\n\nThat would print the expected:\n1\n2\n3\n4\n5\n\n",
"You could use a dictionary to add them with a key, and when you add them check if the key already exists.\nstring[] myarray = new string[] { \"aaa\", \"bbb\", \"aaa\" };\n Dictionary mydict = new Dictionary();\n foreach (string s in myarray) {\n if (!mydict.ContainsKey(s)) mydict.Add(s, s);\n }\n",
"Use the HashSet class included in .NET 3.5.\n"
] |
[
9,
8,
2,
1
] |
[] |
[] |
[
".net",
"arrays",
"unique"
] |
stackoverflow_0000083260_.net_arrays_unique.txt
|
Q:
What is the best way to print screens from an ASP.NET page .NET1.1/.NET2.0
I have seen examples of printing from a windows application but I have not been able to find a good example of any way of doing this.
A:
I've used the print style sheet approach.
Here's an article that walks through how to set one up: http://alistapart.com/stories/goingtoprint/. It beats setting up a special page that would need to be maintained as well.
A:
If you just need to print your web page from the client-side use window.print(). Sample could be found here: http://www.javascriptkit.com/howto/newtech2.shtml. I would suggest preparing a special version of your page first with no dynamic content and with a layout which would look nice on print.
If you need to send something to printer on the server-side that would be a little bit more complicated. Check out this MSDN article on how to do the basic printing.
A:
The browser prints your pages. If you need to tweak the page so it looks better on the printer, use CSS @media selectors.
A:
Restating what others have said, you just need to call window.print() in javascript. That and build a separate css for print.
|
What is the best way to print screens from an ASP.NET page .NET1.1/.NET2.0
|
I have seen examples of printing from a windows application but I have not been able to find a good example of any way of doing this.
|
[
"I've used the print style sheet\nhere's and article http://alistapart.com/stories/goingtoprint/ that will go through the way to set that up. Rather than setting up a special page that would need to be maintained as well.\n",
"If you just need to print your web page from the client-side use window.print(). Sample could be found here: http://www.javascriptkit.com/howto/newtech2.shtml. I would suggest preparing a special version of your page first with no dynamic content and with a layout which would look nice on print.\nIf you need to send something to printer on the server-side that would be a little bit more complicated. Check out this MSDN article on how to do the basic printing.\n",
"The browser prints your pages. If you need to tweak the page so it looks better on the printer, use CSS @media selectors.\n",
"Restating what others have said, you just need to call window.print() in javascript. That and build a separate css for print.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net_1.1",
"asp.net_2.0",
"printing"
] |
stackoverflow_0000082654_asp.net_1.1_asp.net_2.0_printing.txt
|
Q:
Processing Javascript RegEx submatches
I am trying to write some JavaScript RegEx to replace user-entered tags with real HTML tags, so [b] will become <b> and so forth. The RegEx I am using looks like so
var exptags = /\[(b|u|i|s|center|code){1}]((.){1,}?)\[\/(\1){1}]/ig;
with the following JavaScript
s.replace(exptags,"<$1>$2</$1>");
this works fine for single, non-nested tags, for example:
[b]hello[/b] [u]world[/u]
but if the tags are nested inside each other it will only match the outer tags, for example
[b]foo [u]to the[/u] bar[/b]
this will only match the b tags. How can I fix this? Should I just loop until the starting string is the same as the outcome? I have a feeling that the ((.){1,}?) pattern is wrong also?
Thanks
A:
The easiest solution would be to replace all the tags, whether they are closed or not, and let .innerHTML work out if they are matched or not; it will be much more resilient that way.
var tagreg = /\[(\/?)(b|u|i|s|center|code)]/ig
div.innerHTML="[b][i]helloworld[/b]".replace(tagreg, "<$1$2>") //no closing i
//div.innerHTML=="<b><i>helloworld</i></b>"
A:
AFAIK you can't express recursion with regular expressions.
You can however do that with .NET's System.Text.RegularExpressions using balanced matching. See more here: http://blogs.msdn.com/bclteam/archive/2005/03/15/396452.aspx
If you're using .NET you can probably implement what you need with a callback.
If not, you may have to roll your own little javascript parser.
Then again, if you can afford to hit the server you can use the full parser. :)
What do you need this for, anyway? If it is for anything other than a preview I highly recommend doing the processing server-side.
A:
Yes, you will have to loop. Alternatively, since your tags look so much like HTML ones, you could replace [b] with <b> and [/b] with </b> separately. (.){1,}? is the same as (.*?) - that is, any symbols, least possible sequence length.
Updated: Thanks to MrP, (.){1,}? is (.)+?, my bad.
A:
You are right about the inner pattern being troublesome.
((.){1,}?)
That is doing a captured match at least once and then the whole thing is captured. Every character inside your tag will be captured as a group.
You are also capturing your closing element name when you don't need it and are using {1} when that is implied. Below is a cleanup up version:
/\[(b|u|i|s|center|code)](.+?)\[\/\1]/ig
Not sure about the other problem.
A:
You could just repeatedly apply the regexp until it no longer matches. That would do odd things like "[b][b]foo[/b][/b]" => "<b>[b]foo</b>[/b]" => "<b><b>foo</b></b>", but as far as I can see the end result will still be a sensible string with matching (though not necessarily properly nested) tags.
Or if you want to do it 'right', just write a simple recursive descent parser. Though people might expect "[b]foo[u]bar[/b]baz[/u]" to work, which is tricky to recognise with a parser.
A:
The reason the nested block doesn't get replaced is because the match, for [b], places the position after [/b]. Thus, everything that ((.){1,}?) matches is then ignored.
It is possible to write a recursive parser in server-side -- Perl uses qr// and Ruby probably has something similar.
Though, you don't necessarily need true recursive. You can use a relatively simple loop to handle the string equivalently:
var s = '[b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b]';
var exptags = /\[(b|u|i|s|center|code){1}]((.){1,}?)\[\/(\1){1}]/ig;
while (s.match(exptags)) {
s = s.replace(exptags, "<$1>$2</$1>");
}
document.writeln('<div>' + s + '</div>'); // after
In this case, it'll make 2 passes:
0: [b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b]
1: <b>hello</b> <u>world</u> <b>foo [u]to the[/u] bar</b>
2: <b>hello</b> <u>world</u> <b>foo <u>to the</u> bar</b>
Also, a few suggestions for cleaning up the RegEx:
var exptags = /\[(b|u|i|s|center|code)\](.+?)\[\/(\1)\]/ig;
{1} is assumed when no other count specifiers exist
{1,} can be shortened to +
A:
Agree with Richard Szalay, but his regex didn't get quoted right:
var exptags = /\[(b|u|i|s|center|code)](.*)\[\/\1]/ig;
is cleaner. Note that I also change .+? to .*. There are two problems with .+?:
you won't match [u][/u], since there isn't at least one character between them (+)
a non-greedy match won't deal as nicely with the same tag nested inside itself (?)
A:
How about:
tagreg=/\[(.?)?(b|u|i|s|center|code)\]/gi;
"[b][i]helloworld[/i][/b]".replace(tagreg, "<$1$2>");
"[b]helloworld[/b]".replace(tagreg, "<$1$2>");
For me the above produces:
<b><i>helloworld</i></b>
<b>helloworld</b>
This appears to do what you want, and has the advantage of needing only a single pass.
Disclaimer: I don't code often in JS, so if I made any mistakes please feel free to point them out :-)
|
Processing Javascript RegEx submatches
|
I am trying to write some JavaScript RegEx to replace user-entered tags with real HTML tags, so [b] will become <b> and so forth. The RegEx I am using looks like so
var exptags = /\[(b|u|i|s|center|code){1}]((.){1,}?)\[\/(\1){1}]/ig;
with the following JavaScript
s.replace(exptags,"<$1>$2</$1>");
this works fine for single, non-nested tags, for example:
[b]hello[/b] [u]world[/u]
but if the tags are nested inside each other it will only match the outer tags, for example
[b]foo [u]to the[/u] bar[/b]
this will only match the b tags. How can I fix this? Should I just loop until the starting string is the same as the outcome? I have a feeling that the ((.){1,}?) pattern is wrong also?
Thanks
|
[
"The easiest solution would be to to replace all the tags, whether they are closed or not and let .innerHTML work out if they are matched or not it will much more resilient that way..\nvar tagreg = /\\[(\\/?)(b|u|i|s|center|code)]/ig\ndiv.innerHTML=\"[b][i]helloworld[/b]\".replace(tagreg, \"<$1$2>\") //no closing i\n//div.inerHTML==\"<b><i>helloworld</i></b>\"\n\n",
"AFAIK you can't express recursion with regular expressions. \nYou can however do that with .NET's System.Text.RegularExpressions using balanced matching. See more here: http://blogs.msdn.com/bclteam/archive/2005/03/15/396452.aspx \nIf you're using .NET you can probably implement what you need with a callback. \nIf not, you may have to roll your own little javascript parser.\nThen again, if you can afford to hit the server you can use the full parser. :)\nWhat do you need this for, anyway? If it is for anything other than a preview I highly recommend doing the processing server-side.\n",
"Yes, you will have to loop. Alternatively since your tags looks so much like HTML ones you could replace [b] for <b> and [/b] for </b> separately. (.){1,}? is the same as (.*?) - that is, any symbols, least possible sequence length.\nUpdated: Thanks to MrP, (.){1,}? is (.)+?, my bad.\n",
"You are right about the inner pattern being troublesome.\n((.){1,}?)\n\nThat is doing a captured match at least once and then the whole thing is captured. Every character inside your tag will be captured as a group.\nYou are also capturing your closing element name when you don't need it and are using {1} when that is implied. Below is a cleanup up version:\n/\\[(b|u|i|s|center|code)](.+?)\\[\\/\\1]/ig\n\nNot sure about the other problem.\n",
"You could just repeatedly apply the regexp until it no longer matches. That would do odd things like \"[b][b]foo[/b][/b]\" => \"<b>[b]foo</b>[/b]\" => \"<b><b>foo</b></b>\", but as far as I can see the end result will still be a sensible string with matching (though not necessarily properly nested) tags.\nOr if you want to do it 'right', just write a simple recursive descent parser. Though people might expect \"[b]foo[u]bar[/b]baz[/u]\" to work, which is tricky to recognise with a parser.\n",
"The reason the nested block doesn't get replaced is because the match, for [b], places the position after [/b]. Thus, everything that ((.){1,}?) matches is then ignored.\nIt is possible to write a recursive parser in server-side -- Perl uses qr// and Ruby probably has something similar.\nThough, you don't necessarily need true recursive. You can use a relatively simple loop to handle the string equivalently:\nvar s = '[b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b]';\nvar exptags = /\\[(b|u|i|s|center|code){1}]((.){1,}?)\\[\\/(\\1){1}]/ig;\n\nwhile (s.match(exptags)) {\n s = s.replace(exptags, \"<$1>$2</$1>\");\n}\n\ndocument.writeln('<div>' + s + '</div>'); // after\n\nIn this case, it'll make 2 passes:\n0: [b]hello[/b] [u]world[/u] [b]foo [u]to the[/u] bar[/b]\n1: <b>hello</b> <u>world</u> <b>foo [u]to the[/u] bar</b>\n2: <b>hello</b> <u>world</u> <b>foo <u>to the</u> bar</b>\n\n\nAlso, a few suggestions for cleaning up the RegEx:\nvar exptags = /\\[(b|u|i|s|center|code)\\](.+?)\\[\\/(\\1)\\]/ig;\n\n\n{1} is assumed when no other count specifiers exist\n{1,} can be shortened to +\n\n",
"Agree with Richard Szalay, but his regex didn't get quoted right:\nvar exptags = /\\[(b|u|i|s|center|code)](.*)\\[\\/\\1]/ig;\n\nis cleaner. Note that I also change .+? to .*. There are two problems with .+?:\n\nyou won't match [u][/u], since there isn't at least one character between them (+)\na non-greedy match won't deal as nicely with the same tag nested inside itself (?)\n\n",
"How about:\ntagreg=/\\[(.?)?(b|u|i|s|center|code)\\]/gi;\n\"[b][i]helloworld[/i][/b]\".replace(tagreg, \"<$1$2>\");\n\"[b]helloworld[/b]\".replace(tagreg, \"<$1$2>\");\n\nFor me the above produces:\n<b><i>helloworld</i></b>\n<b>helloworld</b>\n\nThis appears to do what you want, and has the advantage of needing only a single pass.\nDisclaimer: I don't code often in JS, so if I made any mistakes please feel free to point them out :-)\n"
] |
[
3,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"markdown",
"regex"
] |
stackoverflow_0000080963_javascript_markdown_regex.txt
|
Q:
Execute program from within a C program
How should I run another program from within my C program? I need to be able to write data into STDIN of the launched program (and maybe read from its STDOUT).
I am not sure if this is a standard C function. I need a solution that works under Linux.
A:
You want to use popen. It gives you a unidirectional pipe with which you can access stdin and stdout of the program.
popen is standard on modern unix and unix-like OS, of which Linux is one :-)
Type
man popen
in a terminal to read more about it.
EDIT
Whether popen produces unidirectional or bidirectional pipes depends on the implementation. In Linux and OpenBSD, popen produces unidirectional pipes, which are read-only or write-only. On OS X, FreeBSD and NetBSD popen produces bidirectional pipes.
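A minimal sketch of that approach, assuming Linux and omitting most error handling (the pipe here is opened for reading the child's stdout; opening with "w" instead would let you write to its stdin, but a single popen call cannot do both):
#include <stdio.h>

int main(void)
{
    char buf[256];
    /* "r" reads the command's stdout; "w" would write to its stdin */
    FILE *p = popen("ls -l", "r");
    if (p == NULL)
        return 1;
    while (fgets(buf, sizeof buf, p) != NULL)
        printf("got: %s", buf);
    pclose(p);
    return 0;
}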
A:
I wrote some example C code for someone else a while back that shows how to do this. Here it is for you:
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>
void error(char *s);
char *data = "Some input data\n";
main()
{
int in[2], out[2], n, pid;
char buf[255];
/* In a pipe, xx[0] is for reading, xx[1] is for writing */
if (pipe(in) < 0) error("pipe in");
if (pipe(out) < 0) error("pipe out");
if ((pid=fork()) == 0) {
/* This is the child process */
/* Close stdin, stdout, stderr */
close(0);
close(1);
close(2);
/* make our pipes, our new stdin,stdout and stderr */
dup2(in[0],0);
dup2(out[1],1);
dup2(out[1],2);
/* Close the other ends of the pipes that the parent will use, because if
* we leave these open in the child, the child/parent will not get an EOF
* when the parent/child closes their end of the pipe.
*/
close(in[1]);
close(out[0]);
/* Over-write the child process with the hexdump binary */
execl("/usr/bin/hexdump", "hexdump", "-C", (char *)NULL);
error("Could not exec hexdump");
}
printf("Spawned 'hexdump -C' as a child process at pid %d\n", pid);
/* This is the parent process */
/* Close the pipe ends that the child uses to read from / write to so
* that when we close the others, an EOF will be transmitted properly.
*/
close(in[0]);
close(out[1]);
printf("<- %s", data);
/* Write some data to the child's input */
write(in[1], data, strlen(data));
/* Because of the small amount of data, the child may block unless we
* close its input stream. This sends an EOF to the child on its
* stdin.
*/
close(in[1]);
/* Read back any output */
n = read(out[0], buf, 250);
buf[n] = 0;
printf("-> %s",buf);
exit(0);
}
void error(char *s)
{
perror(s);
exit(1);
}
A:
Create two pipes with pipe(...), one for stdin, one for stdout.
fork(...) the process.
In the child process (the one where fork(...) returns 0) dup (...) the pipes to stdin/stdout.
exec[v][e] the program file to be started in the child process.
In the parent process (the one where fork(...) returns the PID of the child) do a loop that reads from the child's stdout (select(...) or poll(...), read(...)) into a buffer, until the
child terminates (waitpid(...)).
Eventually supply the child with input on stdin if it expects some.
When done close(...) the pipes.
A:
For simple unidirectional communication, popen() is a decent solution. It is no use for bi-directional communication, though.
IMO, imjorge (Jorge Ferreira) gave most of the answer (80%?) for bi-directional communication - but omitted a few key details.
It is crucial that the parent process close the read end of the pipe that is used to send messages to the child process.
It is crucial that the child process close the write end of the pipe that is used to send messages to the child process.
It is crucial that the parent process close the write end of the pipe that is used to send messages to the parent process.
It is crucial that the child process close the read end of the pipe that is used to send messages to the parent process.
If you do not close the unused ends of the pipes, you do not get sensible behaviour when one of the programs terminates; for example, the child might be reading from its standard input, but unless the write end of the pipe is closed in the child, it will never get EOF (zero bytes from read) because it still has the pipe open and the system thinks it might sometime get around to writing to that pipe, even though it is currently hung waiting for something to read from it.
The writing processes should consider whether to handle the SIGPIPE signal that is given when you write on a pipe where there is no reading process.
You have to be aware of pipe capacity (platform dependent, and might be as little as 4KB) and design the programs to avoid deadlock.
A:
You can use the system call, read manpage for system(3)
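A one-line illustration (note that system() just runs the command through /bin/sh and waits for it to finish; it does not by itself give you a handle on the child's stdin or stdout, so shell redirection is about the limit of what it offers here):
#include <stdlib.h>

int main(void)
{
    /* runs the command in a shell and blocks until it completes */
    int status = system("ls -l > listing.txt");
    return status == -1 ? 1 : 0;
}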
A:
I think you can use
freopen
for this .
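If that suggestion means redirecting the current process's standard streams, a sketch might look like the following (freopen ties a stream to a file rather than to another program's pipe, so on its own it does not launch anything; the file name is hypothetical):
#include <stdio.h>

int main(void)
{
    char line[128];
    /* from here on, reads from stdin come from input.txt */
    if (freopen("input.txt", "r", stdin) == NULL)
        return 1;
    if (fgets(line, sizeof line, stdin) != NULL)
        printf("first line: %s", line);
    return 0;
}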
|
Execute program from within a C program
|
How should I run another program from within my C program? I need to be able to write data into STDIN of the launched program (and maybe read from its STDOUT).
I am not sure if this is a standard C function. I need a solution that works under Linux.
|
[
"You want to use popen. It gives you a unidirectional pipe with which you can access stdin and stdout of the program.\npopen is standard on modern unix and unix-like OS, of which Linux is one :-)\nType\nman popen\n\nin a terminal to read more about it.\nEDIT\nWhether popen produces unidirectional or bidirectional pipes depends on the implementation. In Linux and OpenBSD, popen produces unidirectional pipes, which are read-only or write-only. On OS X, FreeBSD and NetBSD popen produces bidirectional pipes.\n",
"I wrote some example C code for someone else a while back that shows how to do this. Here it is for you:\n#include <sys/types.h>\n#include <unistd.h>\n#include <stdio.h>\n\nvoid error(char *s);\nchar *data = \"Some input data\\n\";\n\nmain()\n{\n int in[2], out[2], n, pid;\n char buf[255];\n\n /* In a pipe, xx[0] is for reading, xx[1] is for writing */\n if (pipe(in) < 0) error(\"pipe in\");\n if (pipe(out) < 0) error(\"pipe out\");\n\n if ((pid=fork()) == 0) {\n /* This is the child process */\n\n /* Close stdin, stdout, stderr */\n close(0);\n close(1);\n close(2);\n /* make our pipes, our new stdin,stdout and stderr */\n dup2(in[0],0);\n dup2(out[1],1);\n dup2(out[1],2);\n\n /* Close the other ends of the pipes that the parent will use, because if\n * we leave these open in the child, the child/parent will not get an EOF\n * when the parent/child closes their end of the pipe.\n */\n close(in[1]);\n close(out[0]);\n\n /* Over-write the child process with the hexdump binary */\n execl(\"/usr/bin/hexdump\", \"hexdump\", \"-C\", (char *)NULL);\n error(\"Could not exec hexdump\");\n }\n\n printf(\"Spawned 'hexdump -C' as a child process at pid %d\\n\", pid);\n\n /* This is the parent process */\n /* Close the pipe ends that the child uses to read from / write to so\n * the when we close the others, an EOF will be transmitted properly.\n */\n close(in[0]);\n close(out[1]);\n\n printf(\"<- %s\", data);\n /* Write some data to the childs input */\n write(in[1], data, strlen(data));\n\n /* Because of the small amount of data, the child may block unless we\n * close it's input stream. This sends an EOF to the child on it's\n * stdin.\n */\n close(in[1]);\n\n /* Read back any output */\n n = read(out[0], buf, 250);\n buf[n] = 0;\n printf(\"-> %s\",buf);\n exit(0);\n}\n\nvoid error(char *s)\n{\n perror(s);\n exit(1);\n}\n\n",
"\nCreate two pipes with pipe(...), one for stdin, one for stdout. \nfork(...) the process.\nIn the child process (the one where fork(...) returns 0) dup (...) the pipes to stdin/stdout.\nexec[v][e] the to be started programm file in the child process.\nIn the parent process (the one where fork) returns the PID of the child) do a loop that reads from the child's stdout (select(...) or poll(...), read(...) ) into a buffer, until the\nchild terminates (waitpid(...)). \nEventually supply the child with input on stdin if it expects some.\nWhen done close(...) the pipes.\n\n",
"For simple unidirectional communication, popen() is a decent solution. It is no use for bi-directional communication, though.\nIMO, imjorge (Jorge Ferreira) gave most of the answer (80%?) for bi-directional communication - but omitted a few key details.\n\nIt is crucial that the parent process close the read end of the pipe that is used to send messages to the child process.\nIt is crucial that the child process close the write end of the pipe that is used to send messages to the child process.\nIt is crucial that the parent process close the write end of the pipe that is used to send messages to the parent process.\nIt is crucial that the child process close the read end of the pipe that is used to send messages to the parent process.\n\nIf you do not close the unused ends of the pipes, you do not get sensible behaviour when one of the programs terminates; for example, the child might be reading from its standard input, but unless the write end of the pipe is closed in the child, it will never get EOF (zero bytes from read) because it still has the pipe open and the system thinks it might sometime get around to writing to that pipe, even though it is currently hung waiting for something to read from it.\nThe writing processes should consider whether to handle the SIGPIPE signal that is given when you write on a pipe where there is no reading process.\nYou have to be aware of pipe capacity (platform dependent, and might be as little as 4KB) and design the programs to avoid deadlock.\n",
"You can use the system call, read manpage for system(3)\n",
"I think you can use \n\nfreopen\n\nfor this .\n"
] |
[
17,
11,
8,
5,
1,
0
] |
[] |
[] |
[
"c",
"linux"
] |
stackoverflow_0000070842_c_linux.txt
|
Q:
Testing StarTeam operations
In a Java application I need to checkout files from Borland Starteam 2006 R2 using Starteam API by various parameters (date, label). Is there any framework that helps to write automatic tests for such functionality?
A:
I'm not aware of any; the approach I'd take is a project which has sample files you can check out by various criteria, and then verify everything you expected arrived and that it is the right file (the hash matches).
You're aware that they ship a command line client (stcmd) too, right? For a lot of things, you don't need to use the api at all.
|
Testing StarTeam operations
|
In a Java application I need to checkout files from Borland Starteam 2006 R2 using Starteam API by various parameters (date, label). Is there any framework that helps to write automatic tests for such functionality?
|
[
"I'm not aware of any; the approach i'd take is a project which has sample files you can checkout by various criteria, and then verify everything you expected arrived, and it is the right file (hash matches).\nYou're aware that they ship a command line client (stcmd) too, right? For a lot of things, you don't need to use the api at all.\n"
] |
[
2
] |
[] |
[] |
[
"java",
"starteam",
"unit_testing"
] |
stackoverflow_0000082245_java_starteam_unit_testing.txt
|
Q:
How do you convert 00:00:00 to hours, minutes, seconds in PHP?
I have video durations stored in HH:MM:SS format. I'd like to display it as HH hours, MM minutes, SS seconds. It shouldn't display hours if it's less than 1.
What would be the best approach?
A:
Something like this?
$vals = explode(':', $duration);
if ( $vals[0] == 0 )
$result = $vals[1] . ' minutes, ' . $vals[2] . ' seconds';
else
$result = $vals[0] . 'hours, ' . $vals[1] . ' minutes, ' . $vals[2] . ' seconds';
A:
try using split
list($hh,$mm,$ss)= split(':',$duration);
A:
One little change could be:
$vals = explode(':', $duration);
if ( $vals[0] == 0 )
$result = "{$vals[1]} minutes, {$vals[2]} seconds";
else
$result = "{$vals[0]} hours, {$vals[1]} minutes, {$vals[2]} seconds";
A:
Pretty simple:
list( $h, $m, $s) = explode(':', $hms);
echo ($h ? "$h hours, " : "").($m ? "$m minutes, " : "").(($h || $m) ? "and " : "")."$s seconds";
This will only display the hours or minutes if there are any, and inserts an "and" before the seconds if there are hours, minutes, or both to display. If you wanted to get really fancy, you could add some code to display "hour" vs. "hours" as appropriate, ditto for minutes and seconds.
A:
Why bother with regex or explodes when php handles time just fine?
$sTime = '04:20:00';
$oTime = new DateTime($sTime);
$aOutput = array();
if ($oTime->format('G') > 0) {
$aOutput[] = $oTime->format('G') . ' hours';
}
$aOutput[] = $oTime->format('i') . ' minutes';
$aOutput[] = $oTime->format('s') . ' seconds';
echo implode(', ', $aOutput);
The benefit is that you can reformat the time however you like (including am/pm, adjustments for timezone, addition / subtraction, etc).
A:
Here's a different way, with different functions, which is more open and more step-by-step for newbies. It also handles the 1 hour and many hours cases... you could try using the same logic to handle the 0 minutes and 0 seconds.
<?php
// your time
$var = "00:00:00";
if(substr($var, 0, 2) == 0){
$myTime = substr_replace(substr_replace($var, '', 0, 3), ' Minutes, ', 2, 1);
}
elseif(substr($var, 1, 1) == 1){
$myTime = substr_replace(substr_replace($var, ' Hour, ', 2, 1), ' Minutes, ', 11, 1);
}
else{
$myTime = substr_replace(substr_replace($var, ' Hours, ', 2, 1), ' Minutes, ', 12, 1);
}
// work with your variable
echo $myTime .' Seconds';
?>
A:
If you really want to use a built-in function, perhaps for robustness, you can try
date_default_timezone_set('UTC');
$date = strtotime($hms,0);
and use any of the date formatting functions (date(), strftime(), etc) to format the time in any way you wish. Or you can use the output of strptime($hms,'%T'). Either may be overkill for the simple scenario you have.
A:
I'll reply with a different approach of the problem. My approach is to store the lengths in seconds. Then depending the needs, it's easy to render these seconds as hh:mm:ss by using :
print gmdate($seconds >= 3600 ? 'H:i:s' : 'i:s', $seconds); (for your question)
or to search on the length in a database:
SELECT * FROM videos WHERE length > 300; for example, to search for video with a length higher than 5 minutes.
|
How do you convert 00:00:00 to hours, minutes, seconds in PHP?
|
I have video durations stored in HH:MM:SS format. I'd like to display it as HH hours, MM minutes, SS seconds. It shouldn't display hours if it's less than 1.
What would be the best approach?
|
[
"Something like this?\n$vals = explode(':', $duration);\n\nif ( $vals[0] == 0 )\n $result = $vals[1] . ' minutes, ' . $vals[2] . ' seconds';\nelse\n $result = $vals[0] . 'hours, ' . $vals[1] . ' minutes, ' . $vals[2] . ' seconds';\n\n",
"try using split \nlist($hh,$mm,$ss)= split(':',$duration);\n\n",
"One little change could be:\n$vals = explode(':', $duration);\n\nif ( $vals[0] == 0 )\n $result = \"{$vals[1]} minutes, {$vals[2]} seconds\";\nelse\n $result = \"{$vals[0]} hours, {$vals[1]} minutes, {$vals[2]} seconds\";\n\n",
"Pretty simple:\nlist( $h, $m, $s) = explode(':', $hms);\necho ($h ? \"$h hours, \" : \"\").($m ? \"$m minutes, \" : \"\").(($h || $m) ? \"and \" : \"\").\"$s seconds\";\n\nThis will only display the hours or minutes if there are any, and inserts an \"and\" before the seconds if there are hours, minutes, or both to display. If you wanted to get really fancy, you could add some code to display \"hour\" vs. \"hours\" as appropriate, ditto for minutes and seconds.\n",
"Why bother with regex or explodes when php handles time just fine?\n$sTime = '04:20:00';\n$oTime = new DateTime($sTime);\n$aOutput = array();\nif ($oTime->format('G') > 0) {\n $aOutput[] = $oTime->format('G') . ' hours';\n}\n$aOutput[] = $oTime->format('i') . ' minutes';\n$aOutput[] = $oTime->format('s') . ' seconds';\necho implode(', ', $aOutput);\n\nThe benefit is that you can reformat the time however you like (including am/pm, adjustments for timezone, addition / subtraction, etc).\n",
"Heres a different way, with different functions which is more open and a more step by step for newbies. it also handles the 1 hour and many hours... you could try use the same logic to handle the 0 minutes and 0 seconds.\n<?php\n// your time\n$var = \"00:00:00\";\n\nif(substr($var, 0, 2) == 0){\n $myTime = substr_replace(substr_replace($var, '', 0, 3), ' Minutes, ', 2, 1);\n}\nelseif(substr($var, 1, 1) == 1){\n$myTime = substr_replace(substr_replace($var, ' Hour, ', 2, 1), ' Minutes, ', 11, 1); \n }\nelse{\n$myTime = substr_replace(substr_replace($var, ' Hours, ', 2, 1), ' Minutes, ', 12, 1);\n}\n// work with your variable\necho $myTime .' Seconds';\n\n?>\n\n",
"If you really want to use a built-in function, perhaps for robustness, you can try \n date_default_timezone_set('UTC'); \n $date = strtotime($hms,0); \n\nand use any of the date formatting functions (date(), strftime(), etc) to format the time in any way you wish. Or you can use the output of strptime($hms,'%T'). Either may be overkill for the simple scenario you have.\n",
"I'll reply with a different approach of the problem. My approach is to store the lengths in seconds. Then depending the needs, it's easy to render these seconds as hh:mm:ss by using :\nprint gmdate($seconds >= 3600 ? 'H:i:s' : 'i:s', $seconds); (for your question)\nor to search on the length in a database:\nSELECT * FROM videos WHERE length > 300; for example, to search for video with a length higher than 5 minutes.\n"
] |
[
3,
2,
1,
1,
1,
0,
0,
0
] |
[
"explode() is for pansies. This is a job for regular expressions!\n<?php\npreg_match('/^(\\d\\d):(\\d\\d):(\\d\\d)$/', $video_duration, $parts);\nif ($parts[1] !== '00') {\n echo(\"{$parts[1]} hours, {$parts[2]} minutes, {$parts[3]} seconds\");\n}\nelse {\n echo(\"{$parts[2]} minutes, {$parts[3]} seconds\");\n}\n\nTotally untested, but something like that ought to work. Note that this code assumes that the hour fragment will always be two digits (eg, a three-hour video would be 03:00:00 instead of 3:00:00).\nEDIT: In retrospect, using regular expressions for this is probably a case of over-engineering; explode() will do the job just as well and probably even be faster in this case. But it was the first method to come to mind when I read the question.\n",
"Converting 00:00:00 to hours, minutes, and seconds in PHP is really easy.\n$hours = 0; \n$minutes = 0;\n$seconds = 0; \n"
] |
[
-1,
-1
] |
[
"date",
"php"
] |
stackoverflow_0000080319_date_php.txt
|
Q:
How to know whether a given client-side startup script is already registered in an asp.net page?
I have a asp.net page, and would like to know whether "script1" is already registered as a startup script or not?
A:
Me.ClientScript.IsStartupScriptRegistered("clientScript")
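The same check in C#, with a registration call added only to round the example out ("script1" is the key from the question; the script body is made up):
// inside a Page subclass, e.g. in Page_Load
if (!ClientScript.IsStartupScriptRegistered("script1"))
{
    ClientScript.RegisterStartupScript(this.GetType(), "script1",
        "alert('script1 loaded');", true);
}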
|
How to know whether a given client-side startup script is already registered in an asp.net page?
|
I have a asp.net page, and would like to know whether "script1" is already registered as a startup script or not?
|
[
"Me.ClientScript.IsStartupScriptRegistered(\"clientScript\")\n"
] |
[
7
] |
[] |
[] |
[
"asp.net",
"startupscript"
] |
stackoverflow_0000083558_asp.net_startupscript.txt
|
Q:
How to tie into a domain server's login for program access rights
I need to write a program used internally where different users will have different abilities within the program.
Rather than making users have a new username and password, how do I tie into an existing domain server's login system?
Assume .NET (C#, VB, ASP, etc)
-Adam
A:
For WinForms, use System.Threading.Thread.CurrentPrincipal with the IsInRole() method to check which groups they are a member of. You do need to set the principal policy of the AppDomain to WindowsPrincipal first.
Use this to get the current user name:
private string getWindowsUsername()
{
AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
return Thread.CurrentPrincipal.Identity.Name;
}
And then something like this to check a role:
if (Thread.CurrentPrincipal.IsInRole("Domain Users") == true)
{}
In ASP.NET, the thread will belong to IIS, so instead you should
Set the virtual folder or website to require authentication
Get the user name supplied by the browser with Request.ServerVariables("LOGON_USER")
Use the DirectorySearcher class to find the users groups
A:
I would use LDAP
and the DirectorySearcher Class:
http://msdn.microsoft.com/en-us/library/system.directoryservices.directorysearcher.aspx
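A rough sketch of looking up a user's groups that way (the LDAP path, filter and attribute names are assumptions for a typical Active Directory domain; add a reference to System.DirectoryServices):
using System;
using System.DirectoryServices;

static string[] GetGroups(string samAccountName)
{
    using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
    using (DirectorySearcher searcher = new DirectorySearcher(root))
    {
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + samAccountName + "))";
        searcher.PropertiesToLoad.Add("memberOf");
        SearchResult result = searcher.FindOne();
        if (result == null)
            return new string[0];

        // each memberOf value is the distinguished name of a group
        string[] groups = new string[result.Properties["memberOf"].Count];
        result.Properties["memberOf"].CopyTo(groups, 0);
        return groups;
    }
}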
A:
Assuming this is served through IIS, I would tell IIS to authenticate via the domain, but I would keep authorization (what roles a user is associated with, accessible functionality, etc) within the application itself.
You can retrieve the username used to authenticate via
Trim(Request.ServerVariables("LOGON_USER")).Replace("/", "\").Replace("'", "''")
OR
CStr(Session("User")).Substring(CStr(Session("User")).LastIndexOf("\") + 1)
|
How to tie into a domain server's login for program access rights
|
I need to write a program used internally where different users will have different abilities within the program.
Rather than making users have a new username and password, how do I tie into an existing domain server's login system?
Assume .NET (C#, VB, ASP, etc)
-Adam
|
[
"For WinForms, use System.Threading.Thread.CurrentPrincipal with the IsInRole() method to check which groups they are a member of. You do need to set the principal policy of the AppDomain to WindowsPrincipal first.\nUse this to get the current user name:\nprivate string getWindowsUsername()\n{\n AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);\n return Thread.CurrentPrincipal.Identity.Name;\n}\n\nAnd then something like this to check a role:\nif (Thread.CurrentPrincipal.IsInRole(\"Domain Users\") == true)\n{}\n\nIn ASP.NET, the thread will belong to IIS, so instead you should \n\nSet the virtual folder or website to require authentication\nGet the user name supplied by the browser with Request.ServerVariables(\"LOGON_USER\")\nUse the DirectorySearcher class to find the users groups\n\n",
"I would use LDAP\nand the DirectorySearcher Class:\nhttp://msdn.microsoft.com/en-us/library/system.directoryservices.directorysearcher.aspx\n",
"Assuming this is served through IIS, I would tell IIS to authenticate via the domain, but I would keep authorization (what roles a user is associated with, accessible functionality, etc) within the application itself. \nYou can retreive the username used to authenticate via \nTrim(Request.ServerVariables(\"LOGON_USER\")).Replace(\"/\", \"\\\").Replace(\"'\", \"''\")\n\nOR\nCStr(Session(\"User\")).Substring(CStr(Session(\"User\")).LastIndexOf(\"\\\") + 1)\n\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0000083553_c#.txt
|
Q:
strpos function issue in PHP not finding the needle
In php I have open a .php file and want to evaluate certain lines. Specifically when the $table_id and $line variables are assigned a value.
Within the text file I have:
...
$table_id = 'crs_class'; // table name
$screen = 'crs_class.detail.screen.inc'; // file identifying screen structure
...
amongst other lines. The if statement below never detects the occurrence of $table_id or $screen (even without the $ prepended). I can't understand why it won't work, as the strpos statement below looking for 'require' works fine.
So, why isn't this if statement getting a hit?
while ($line=fgets($fh)) {
//echo "Evaluating... $line <br>";
**if ((($pos = stripos($line, '$table_id')) === true) || (($pos = stripos($line, '$screen'))===true))**
{
// TODO: Not evaluating tableid and screen lines correctly fix.
// Set $table_id and $screen variables from task scripts
eval($line);
}
if (($pos=stripos($line, 'require')) === true) {
$controller = $line;
}
}
A:
use !==false instead of ===true
stripos returns the position as an integer if the needle is found. And that's never ===bool.
You might also be interested in PHP's tokenizer module or the lexer package in the pear repository.
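Applied to the loop from the question, only the comparisons change (single quotes are kept so the needles stay literal strings):
while ($line = fgets($fh)) {
    if (stripos($line, '$table_id') !== false || stripos($line, '$screen') !== false) {
        eval($line);
    }
    if (stripos($line, 'require') !== false) {
        $controller = $line;
    }
}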
A:
I think VolkerK already has the answer - stripos() does not return a boolean, it returns the position within the string, or false if it's not found - so you want to be checking that the return is not false using !== (not != as you want to check the type as well).
Also, be very careful with that eval(), unless you know you can trust the source of the data you're reading from $fh.
Otherwise, there could be anything else on that line that you unwittingly eval() - the line could be something like:
$table_id = 'foo'; exec('/bin/rm -rf /');
A:
According to the PHP docs, strpos() and stripos() will return an integer for the position, OR a boolean FALSE.
Since 0 (zero) is a valid, and very expect-able index, this function should be used with extreme caution.
Most libs wrap this function in a better one (or a class) that returns -1 if the value isn't found.
e.g. like Javascript's
String.indexOf(str)
A:
Variable interpolation is only performed on "strings", not 'strings' (note the quotes). i.e.
<?php
$foo = "bar";
print '$foo';
print "$foo";
?>
prints $foobar. Change your quotes, and all should be well.
|
strpos function issue in PHP not finding the needle
|
In php I have open a .php file and want to evaluate certain lines. Specifically when the $table_id and $line variables are assigned a value.
Within the text file I have:
...
$table_id = 'crs_class'; // table name
$screen = 'crs_class.detail.screen.inc'; // file identifying screen structure
...
amongst other lines. The if statement below never detects the occurrence of $table_id or $screen (even without the $ prepended). I can't understand why it won't work, as the strpos statement below looking for 'require' works fine.
So, why isn't this if statement getting a hit?
while ($line=fgets($fh)) {
//echo "Evaluating... $line <br>";
**if ((($pos = stripos($line, '$table_id')) === true) || (($pos = stripos($line, '$screen'))===true))**
{
// TODO: Not evaluating tableid and screen lines correctly fix.
// Set $table_id and $screen variables from task scripts
eval($line);
}
if (($pos=stripos($line, 'require')) === true) {
$controller = $line;
}
}
|
[
"use !==false instead of ===true\nstripos returns the position as an integer if the needle is found. And that's never ===bool.\n\nYou might also be interested in PHP's tokenizer module or the lexer package in the pear repository.\n",
"I think VolkerK already has the answer - stripos() does not return a boolean, it returns the position within the string, or false if it's not found - so you want to be checking that the return is not false using !== (not != as you want to check the type as well).\nAlso, be very careful with that eval(), unless you know you can trust the source of the data you're reading from $fh.\nOtherwise, there could be anything else on that line that you unwittingly eval() - the line could be something like:\n\n$table_id = 'foo'; exec('/bin/rm -rf /');\n\n",
"According to the PHP docs, strpos() and stripos() will return an integer for the position, OR a boolean FALSE.\nSince 0 (zero) is a valid, and very expect-able index, this function should be used with extreme caution.\nMost libs wrap this function in a better one (or a class) that returns -1 if the value isn't found.\ne.g. like Javascript's\nString.indexOf(str)\n\n",
"Variable interpolation is only performed on \"strings\", not 'strings' (note the quotes). i.e.\n<?php\n $foo = \"bar\";\n\n print '$foo';\n print \"$foo\";\n?>\n\nprints $foobar. Change your quotes, and all should be well.\n"
] |
[
8,
3,
3,
2
] |
[
"Why are you using the === Argument?\nIf it is anywhere in the line, it will be an integer. You're comparing the type also by using ====\nFrom my understand you're asking it \"If the position is equal and of the same type as true\" which will never work.\n"
] |
[
-1
] |
[
"php",
"string"
] |
stackoverflow_0000083397_php_string.txt
|
Q:
Trying to convert bunch of jpegs into a movie
I have 28,000 images I need to convert into a movie.
I tried
mencoder mf://*.jpg -mf w=640:h=480:fps=30:type=jpg -ovc lavc -lavcopts vcodec=msmpeg4v2 -nosound -o ../output-msmpeg4v2.avi
But it seems to crap out at 7500 frames.
The files are named
webcam_2007-04-16_070804.jpg
webcam_2007-04-16_071004.jpg
webcam_2007-04-16_071204.jpg
webcam_2007-04-16_071404.jpg
Up to March 2008 or so.
Is there another way I can pass the filenames to mencoder so it doesn't stop part way?
MEncoder 2:1.0~rc2-0ubuntu13 (C) 2000-2007 MPlayer Team
CPU: Intel(R) Pentium(R) 4 CPU 2.40GHz (Family: 15, Model: 2, Stepping: 7)
CPUflags: Type: 15 MMX: 1 MMX2: 1 3DNow: 0 3DNow2: 0 SSE: 1 SSE2: 1
Compiled with runtime CPU detection.
success: format: 16 data: 0x0 - 0x0
MF file format detected.
[mf] search expr: *.jpg
[mf] number of files: 28617 (114468)
VIDEO: [IJPG] 640x480 24bpp 30.000 fps 0.0 kbps ( 0.0 kbyte/s)
[V] filefmt:16 fourcc:0x47504A49 size:640x480 fps:30.00 ftime:=0.0333
Opening video filter: [expand osd=1]
Expand: -1 x -1, -1 ; -1, osd: 1, aspect: 0.000000, round: 1
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG decoder)
==========================================================================
VDec: vo config request - 640 x 480 (preferred colorspace: Planar YV12)
VDec: using Planar YV12 as output csp (no 3)
Movie-Aspect is 1.33:1 - prescaling to correct movie aspect.
videocodec: libavcodec (640x480 fourcc=3234504d [MP42])
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Pos: 251.3s 7539f ( 0%) 47.56fps Trem: 0min 0mb A-V:0.000 [1202:0]
Flushing video frames.
Writing index...
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Video stream: 1202.480 kbit/s (150310 B/s) size: 37772908 bytes 251.300 secs 7539 frames
A:
Shove the list of images in a file, one per line. Then use mf://@filename
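For example (the list file name is arbitrary; the encoding options are the ones from the question):
ls webcam_*.jpg > frames.txt
mencoder mf://@frames.txt -mf w=640:h=480:fps=30:type=jpg -ovc lavc -lavcopts vcodec=msmpeg4v2 -nosound -o ../output-msmpeg4v2.avi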
A:
You might be better off going to #mplayer or #ffmpeg on Freenode IRC for specific help with those programs.
A:
You can create a video from a sequence of images using LiVES:
http://lives.sourceforge.net
Simply place all of the images in a directory, and make sure they are in alphanumeric order.
Then in LiVES, just go to File/Open File or Directory, and double-click on the image directory.
Once the images have loaded you can edit the clip and save it in a variety of formats.
A:
another alternative is to bypass mencoder and use ffmpeg directly
A:
Kind of an odd answer... but I thought that Blender could construct videos from sequences of images. Just a thought.
|
Trying to convert bunch of jpegs into a movie
|
I have 28,000 images I need to convert into a movie.
I tried
mencoder mf://*.jpg -mf w=640:h=480:fps=30:type=jpg -ovc lavc -lavcopts vcodec=msmpeg4v2 -nosound -o ../output-msmpeg4v2.avi
But it seems to crap out at 7500 frames.
The files are named
webcam_2007-04-16_070804.jpg
webcam_2007-04-16_071004.jpg
webcam_2007-04-16_071204.jpg
webcam_2007-04-16_071404.jpg
Up to March 2008 or so.
Is there another way I can pass the filenames to mencoder so it doesn't stop part way?
MEncoder 2:1.0~rc2-0ubuntu13 (C) 2000-2007 MPlayer Team
CPU: Intel(R) Pentium(R) 4 CPU 2.40GHz (Family: 15, Model: 2, Stepping: 7)
CPUflags: Type: 15 MMX: 1 MMX2: 1 3DNow: 0 3DNow2: 0 SSE: 1 SSE2: 1
Compiled with runtime CPU detection.
success: format: 16 data: 0x0 - 0x0
MF file format detected.
[mf] search expr: *.jpg
[mf] number of files: 28617 (114468)
VIDEO: [IJPG] 640x480 24bpp 30.000 fps 0.0 kbps ( 0.0 kbyte/s)
[V] filefmt:16 fourcc:0x47504A49 size:640x480 fps:30.00 ftime:=0.0333
Opening video filter: [expand osd=1]
Expand: -1 x -1, -1 ; -1, osd: 1, aspect: 0.000000, round: 1
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG decoder)
==========================================================================
VDec: vo config request - 640 x 480 (preferred colorspace: Planar YV12)
VDec: using Planar YV12 as output csp (no 3)
Movie-Aspect is 1.33:1 - prescaling to correct movie aspect.
videocodec: libavcodec (640x480 fourcc=3234504d [MP42])
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Pos: 251.3s 7539f ( 0%) 47.56fps Trem: 0min 0mb A-V:0.000 [1202:0]
Flushing video frames.
Writing index...
Writing header...
ODML: Aspect information not (yet?) available or unspecified, not writing vprp header.
Video stream: 1202.480 kbit/s (150310 B/s) size: 37772908 bytes 251.300 secs 7539 frames
|
[
"Shove the list of images in a file, one per line. Then use mf://@filename\n",
"You might be better off going to #mplayer or #ffmpeg on Freenode IRC for specific help with those programs.\n",
"You can create a video from a sequence of images using LiVES:\nhttp://lives.sourceforge.net\nSimply place all of the images in a directory, and make sure they are in alphanumeric order.\nThen in LiVES, just go to File/Open File or Directory, and double-click on the image directory.\nOnce the images have loaded you can edit the clip and save it in a variety of formats.\n",
"another alternative is to bypass mencoder and use ffmpeg directly\n",
"Kinda of an odd answer... but I thought that Blender could construct videos from sequences of images. Just a thought.\n"
] |
[
2,
1,
1,
0,
0
] |
[] |
[] |
[
"encoding",
"video"
] |
stackoverflow_0000075508_encoding_video.txt
|
Q:
Initializing an array on arbitrary starting index in c#
Is it possible in C# to initialize an array at, for example, subindex 1?
I'm working with Office interop, and every property is an object array that starts at 1 (I assume it was originally programmed in VB.NET), and you cannot modify it; you have to set the entire array for it to accept the changes.
As a workaround I am cloning the original array, modifying that one, and setting it as a whole when I'm done.
But, I was wondering if it was possible to create a new non-zero based array
A:
It is possible to do as you request see the code below.
// Construct an array containing ints that has a length of 10 and a lower bound of 1
Array lowerBoundArray = Array.CreateInstance(typeof(int), new int[1] { 10 }, new int[1] { 1 });
// insert 1 into position 1
lowerBoundArray.SetValue(1, 1);
//insert 2 into position 2
lowerBoundArray.SetValue(2, 2);
// IndexOutOfRangeException the lower bound of the array
// is 1 and we are attempting to write into 0
lowerBoundArray.SetValue(1, 0);
A:
You can use Array.CreateInstance.
See Array Types in .NET
A:
Not simply. But you can certainly write your own class. It would have an array as a private variable, and the user would think his array starts at 1, but really it starts at zero and you're subtracting 1 from all of his array accesses.
A:
You can write your own array class
A:
I don't think it's possible to modify the starting index of arrays.
I would create my own array using generics and handle it inside.
A:
Just keep of const int named 'offset' with a value of one, and always add that to your subscripts in your code.
A:
I don't think you can create non-zero based arrays in C#, but you could easily write a wrapper class of your own around the built-in data structures. This wrapper class would hold a private instance of the array type you required; overloading the [] indexing operator is not allowed, but you can add an indexer to a class to make it behave like an indexable array, see here. The index function you write could then add (or subtract) 1 to all indexes passed in.
You could then use your object as follows, and it would behave correctly:
myArrayObject[1]; //would return the zeroth element.
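A bare-bones sketch of such a wrapper (the class and member names are made up; it simply shifts the index by one before touching the underlying zero-based array):
public class OneBasedArray<T>
{
    private readonly T[] _items;

    public OneBasedArray(int length)
    {
        _items = new T[length];
    }

    // callers use indices 1..Length; internally we store at 0..Length-1
    public T this[int index]
    {
        get { return _items[index - 1]; }
        set { _items[index - 1] = value; }
    }

    public int Length
    {
        get { return _items.Length; }
    }
}

Usage then reads naturally against the interop-style indexing: OneBasedArray<object> a = new OneBasedArray<object>(10); a[1] = "first";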
|
Initializing an array on arbitrary starting index in c#
|
Is it possible in C# to initialize an array at, for example, subindex 1?
I'm working with Office interop, and every property is an object array that starts at 1 (I assume it was originally programmed in VB.NET), and you cannot modify it; you have to set the entire array for it to accept the changes.
As a workaround I am cloning the original array, modifying that one, and setting it as a whole when I'm done.
But, I was wondering if it was possible to create a new non-zero based array
|
[
"It is possible to do as you request see the code below.\n// Construct an array containing ints that has a length of 10 and a lower bound of 1\nArray lowerBoundArray = Array.CreateInstance(typeof(int), new int[1] { 10 }, new int[1] { 1 });\n\n// insert 1 into position 1\nlowerBoundArray.SetValue(1, 1);\n\n//insert 2 into position 2\nlowerBoundArray.SetValue(2, 2);\n\n// IndexOutOfRangeException the lower bound of the array \n// is 1 and we are attempting to write into 0\nlowerBoundArray.SetValue(1, 0);\n\n",
"You can use Array.CreateInstance.\nSee Array Types in .NET\n",
"Not simply. But you can certainly write your own class. It would have an array as a private variable, and the user would think his array starts at 1, but really it starts at zero and you're subtracting 1 from all of his array accesses.\n",
"You can write your own array class\n",
"I don't think if it's possible to modify the starting index of arrays.\nI would create my own array using generics and handle it inside.\n",
"Just keep of const int named 'offset' with a value of one, and always add that to your subscripts in your code.\n",
"I don't think you can create non-zero based arrays in C#, but you could easily write a wrapper class of your own around the built in data structures.This wrapper class would hold a private instance of the array type you required; overloading the [] indexing operator is not allowed, but you can add an indexer to a class to make it behave like an indexable array, see here. The index function you write could then add (or subtract) 1, to all index's passed in.\nYou could then use your object as follows, and it would behave correctly:\nmyArrayObject[1]; //would return the zeroth element.\n\n"
] |
[
19,
8,
1,
0,
0,
0,
0
] |
[
"In VB6 you could change the array to start with 0 or 1, so I think VBScript can do the same. For C#, it's not possible but you can simply add NULL value in the first [0] and start real value at [1]. Of course, this is a little dangerous... \n"
] |
[
-1
] |
[
"arrays",
"c#",
"initialization",
"interop",
"ms_office"
] |
stackoverflow_0000082943_arrays_c#_initialization_interop_ms_office.txt
|
Q:
What is the easiest way to get total number for lines of code (LOC) in SQL Server?
I need to provide statistics on how many lines of code (LOC) are associated with a system. The application part is easy, but I need to also include any code residing within the SQL Server database. This would apply to stored procedures, functions, triggers, etc.
How can I easily get that info? Can it be done (accurately) with TSQL by querying the system tables\sprocs, etc?
A:
In Management Studio, right-click the database you want a line count for and select Tasks -> Generate Scripts. You can select script options in the Scripts Wizard to include or exclude objects; when you have it set the way you like, it can generate to a new query window.
A:
Just select all the text from syscomments and count how many lines you have. The text column is text, which you can't really see in Management Studio, so I would write a program or PowerShell script like this:
$conn = new-object System.Data.SqlClient.SqlConnection("Server=server;Database=database;Integrated Security=SSPI")
$cmd = new-object System.Data.SqlClient.SqlCommand("select text from syscomments", $conn)
$conn.Open()
$reader = $cmd.ExecuteReader()
$reader.Read() | out-null
$reader.GetString(0) | clip
$reader.Close()
$conn.Close()
Paste into an editor that has a line count, and you're done.
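If you would rather stay entirely in T-SQL, a rough count can be had by counting line breaks in the module definitions. This sketch assumes SQL Server 2005 or later, where sys.sql_modules exposes the full definition as nvarchar(max); note that it counts blank and comment lines too:
SELECT SUM(LEN(m.definition) - LEN(REPLACE(m.definition, CHAR(10), '')) + 1) AS total_loc
FROM sys.sql_modules AS m;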
A:
Personally, you might just script the objects to a file using the SQL Server Management tools; it will get a few extras in there for the checks to drop the object first in case it exists.
|
What is the easiest way to get total number for lines of code (LOC) in SQL Server?
|
I need to provide statistics on how many lines of code (LOC) are associated with a system. The application part is easy, but I need to also include any code residing within the SQL Server database. This would apply to stored procedures, functions, triggers, etc.
How can I easily get that info? Can it be done (accurately) with TSQL by querying the system tables\sprocs, etc?
|
[
"In Management Studio, right click the database you want a line count for... select Tasks -> Generate Scripts, you can select script options in the Scripts Wizard to include or exclude objects, when you have it set the way you like it can generate to a new query window \n",
"Just select all the text from syscomments and count how many lines you have. The text column is text, which you can't really see in Management studio, so I would write a program or power shell script like this:\n$conn = new-object System.Data.SqlClient.SqlConnection(\"Server=server;Database=database;Integrated Security=SSPI\")\n$cmd = new-object System.Data.SqlClient.SqlCommand(\"select text from syscomments\", $conn)\n$conn.Open()\n$reader = $cmd.ExecuteReader()\n\n$reader.Read() | out-null\n$reader.GetString(0) | clip\n$reader.Close()\n$conn.Close()\n\nPaste into an editor that has a line count, and you're done.\n",
"Personally you might just script the objects to file using SQL Server Management tools, it will get a few extras in there for the checks to do the drop first incase the object exists.\n"
] |
[
5,
2,
1
] |
[] |
[] |
[
"database",
"sql_server",
"statistics",
"tsql"
] |
stackoverflow_0000075159_database_sql_server_statistics_tsql.txt
|
Q:
Read Firefox 3 bookmarks
Firefox 3 stores the bookmarks in a sqlite database.
There are several hacked sqlite java libraries available.
Is there a way to hack the sqlite database in Java (not using libraries) to read bookmarks reliably?
Does someone know how the sqlite DB is stored and access programmatically (from java)?
A:
You need the SQLite JDBC driver (this page explains how to run queries on a SQLite database using that driver from within Java).
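A rough sketch with the SQLite JDBC driver; the moz_bookmarks/moz_places join reflects what the Firefox 3 places.sqlite schema looked like at the time, so treat the table and column names (and the file path) as assumptions to verify against your own profile:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadBookmarks {
    public static void main(String[] args) throws Exception {
        Class.forName("org.sqlite.JDBC");
        // path to the profile's places.sqlite is hypothetical
        Connection conn = DriverManager.getConnection("jdbc:sqlite:/path/to/places.sqlite");
        Statement st = conn.createStatement();
        // type = 1 rows are bookmarks; fk points at the moz_places entry holding the URL
        ResultSet rs = st.executeQuery(
            "SELECT b.title, p.url FROM moz_bookmarks b " +
            "JOIN moz_places p ON b.fk = p.id WHERE b.type = 1");
        while (rs.next()) {
            System.out.println(rs.getString("title") + " -> " + rs.getString("url"));
        }
        rs.close();
        st.close();
        conn.close();
    }
}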
A:
I don't know why you need NOT to use a JDBC driver, but there's another possible "solution" depending on your software requirements. In FF3, type in the address bar about:config
Alter the value of property: browser.bookmarks.autoExportHTML to true.
This will export your bookmarks in an HTML whenever you close FF. You can then read the HTML. It may or may not solve your problem....
|
Read Firefox 3 bookmarks
|
Firefox 3 stores the bookmarks in a sqlite database.
There are several hacked sqlite java libraries available.
Is there a way to hack the sqlite database in Java (not using libraries) to read bookmarks reliably?
Does someone know how the sqlite DB is stored and access programmatically (from java)?
|
[
"You need the SQLite JDBC driver (this page explains how to run queries on a SQLite database using that driver from within Java).\n",
"I don't know why you need NOT to use a JDBC driver, but there's another possible \"solution\" depending on your software requirements. In FF3, type in the address bar about:config\nAlter the value of property: browser.bookmarks.autoExportHTML to true.\nThis will export your bookmarks in an HTML whenever you close FF. You can then read the HTML. It may or may not solve your problem....\n"
] |
[
5,
2
] |
[] |
[] |
[
"bookmarks",
"firefox_3",
"java",
"parsing",
"sqlite"
] |
stackoverflow_0000081132_bookmarks_firefox_3_java_parsing_sqlite.txt
|
Q:
What is the best way to change the encoding of text in PHP
I want to run text through a filter to ensure it is all UTF-8 encoded. What is the recommended way to do this with PHP?
A:
Your question is unclear, are you trying to encode something? If so utf8_encode is your friend. Are you trying to determine if it doesn't need to be encoded? If so, utf8_encode is still your friend, because you can check that the result is the same as the input!
A:
Check the multi-byte string functions here
A:
You need to know in what character set your input string is encoded, or this will go nowhere fast.
If you want to do it correctly, this article may be helpful: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
A:
Given a stream of bytes, you have to know what encoding it is to begin with - email use mime headers to specify the encoding, http uses http headers to specify the encoding. Also, you can specify the encoding in a meta tag in a web page, but it is not always honored.
Anyway, once you know what encoding you want to convert from, use iconv to convert it to utf8. look at the iconv section of the php docs, there's lots of good info there.
Ah, Thomas posted the link I was looking for. A must read.
A:
The easiest way to check for UTF-8 validity:
If only one line allowed:
preg_match('/^.*$/Du', $value)
If multiple lines allowed:
preg_match('/^.*$/sDu', $value)
This works for PHP >= 4.3.5 and does not require any non-default PHP modules.
|
What is the best way to change the encoding of text in PHP
|
I want to run text through a filter to ensure it is all UTF-8 encoded. What is the recommended way to do this with PHP?
|
[
"Your question is unclear, are you trying to encode something? If so utf8_encode is your friend. Are you trying to determine if it doesn't need to be encoded? If so, utf8_encode is still your friend, because you can check that the result is the same as the input!\n",
"Check the multi-byte string functions here\n",
"You need to know in what character set your input string is encoded, or this will go nowhere fast.\nIf you want to do it correctly, this article may be helpful: The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)\n",
"Given a stream of bytes, you have to know what encoding it is to begin with - email use mime headers to specify the encoding, http uses http headers to specify the encoding. Also, you can specify the encoding in a meta tag in a web page, but it is not always honored.\nAnyway, once you know what encoding you want to convert from, use iconv to convert it to utf8. look at the iconv section of the php docs, there's lots of good info there.\nAh, Thomas posted the link I was looking for. A must read.\n",
"The easiest way to check for UTF-8 validity:\n\nIf only one line allowed:\npreg_match('/^.*$/Du', $value)\n\nIf multiple lines allowed:\npreg_match('/^.*$/sDu', $value)\n\n\nThis works for PHP >= 4.3.5 and does not require any non-default PHP modules.\n"
] |
[
2,
1,
0,
0,
0
] |
[] |
[] |
[
"encoding",
"php",
"utf_8"
] |
stackoverflow_0000079928_encoding_php_utf_8.txt
|
Q:
Clean implementation of the strategy pattern in Perl
How do I write a clean implementation of the strategy pattern in Perl? I want to do it in a way that leverages Perl's features.
A:
It really depends on what you mean by "clean implementation". As in any other language, you can use Perl's object system with polymorphism to do this for you. However, since Perl has first class functions, this pattern isn't normally coded explicitly. Leon Timmermans' example of
sort { lc($a) cmp lc($b) } @items
demonstrates this quite elegantly.
However, if you're looking for a "formal" implementation as you would do in C++, here's what it may look like using Perl+Moose. This is just a translation of the C++ code from Wikipedia -- Strategy pattern, except I'm using Moose's support for delegation.
package StrategyInterface;
use Moose::Role;
requires 'run';
package Context;
use Moose;
has 'strategy' => (
is => 'rw',
isa => 'StrategyInterface',
handles => [ 'run' ],
);
package SomeStrategy;
use Moose;
with 'StrategyInterface';
sub run { warn "applying SomeStrategy!\n"; }
package AnotherStrategy;
use Moose;
with 'StrategyInterface';
sub run { warn "applying AnotherStrategy!\n"; }
###############
package main;
my $contextOne = Context->new(
strategy => SomeStrategy->new()
);
my $contextTwo = Context->new(
strategy => AnotherStrategy->new()
);
$contextOne->run();
$contextTwo->run();
A:
Use sub references, and closures. A good perlish example of this
sort { lc($a) cmp lc($b) } @items
|
Clean implementation of the strategy pattern in Perl
|
How do I write a clean implementation of the strategy pattern in Perl? I want to do it in a way that leverages Perl's features.
|
[
"It really depends on what you mean by \"clean implementation\". As in any other language, you can use Perl's object system with polymorphism to do this for you. However, since Perl has first class functions, this pattern isn't normally coded explicitly. Leon Timmermans' example of\nsort { lc($a) cmp lc($b) } @items\n\ndemonstrates this quite elegantly.\nHowever, if you're looking for a \"formal\" implementation as you would do in C++, here's what it may look like using Perl+Moose. This is just a translation of the C++ code from Wikipedia -- Strategy pattern, except I'm using Moose's support for delegation.\npackage StrategyInterface;\nuse Moose::Role;\nrequires 'run';\n\n\npackage Context;\nuse Moose;\nhas 'strategy' => (\n is => 'rw',\n isa => 'StrategyInterface',\n handles => [ 'run' ],\n);\n\n\npackage SomeStrategy;\nuse Moose;\nwith 'StrategyInterface';\nsub run { warn \"applying SomeStrategy!\\n\"; }\n\n\npackage AnotherStrategy;\nuse Moose;\nwith 'StrategyInterface';\nsub run { warn \"applying AnotherStrategy!\\n\"; }\n\n\n###############\npackage main;\nmy $contextOne = Context->new(\n strategy => SomeStrategy->new()\n);\n\nmy $contextTwo = Context->new(\n strategy => AnotherStrategy->new()\n);\n\n$contextOne->run();\n$contextTwo->run();\n\n",
"Use sub references, and closures. A good perlish example of this\nsort { lc($a) cmp lc($b) } @items\n\n"
] |
[
5,
4
] |
[] |
[] |
[
"design_patterns",
"perl",
"strategy_pattern"
] |
stackoverflow_0000078278_design_patterns_perl_strategy_pattern.txt
|
Q:
what is the easiest way to lookup function names of a c binary in a cross-platform manner?
I want to write a small utility to call arbitrary functions from a C shared library. The user should be able to list all the exported functions, similar to what objdump or nm does. I checked these utilities' source but they are intimidating. I couldn't find enough information on Google about whether the dl library has this functionality either.
(Clarification edit: I don't want to just call a function which is known beforehand. I will appreciate an example fragment along your answer.)
A:
This might be near to what you're looking for:
http://python.net/crew/theller/ctypes/
A:
Well, I'll speak a little bit about Windows. The C functions exported from DLLs do not contain information about the types, names, or number of arguments -- nor do I believe you can determine what the calling convention is for a given function.
For comparison, take a look at National Instrument's LabVIEW programming environment. You can import functions from DLLs, but you have to manually type in the type and names of the arguments before you use a given function. If this limitation is OK, please edit your question to reflect that.
I don't know what is possible with *nix environments.
EDIT: Regarding your clarification. If you don't know what the function is ahead of time, you're pretty screwed on Windows because in general you won't be able to determine what the number and types of arguments the functions take.
A:
You could try ParaDyn's SymtabAPI. It lets you grab all the symbols in a shared library (or executable) and look at their types, offset, etc. It's all wrapped up in a reasonably nice C++ interface and runs on a lot of platforms. It also provides support for binary rewriting, which you could potentially use to do what you're talking about at runtime.
Webpage is here:
http://www.paradyn.org/html/symtab2.1-features.html
Documentation is here:
http://ftp.cs.wisc.edu/paradyn/releases/release5.2/doc/symtabProgGuide.21.pdf
A:
A standard-ish API is the dlopen/dlsym API; AFAIK it's implemented by GNU libc on Linux and Mac OS X's standard C library (libSystem), and it might be implemented on Windows by MinGW or other compatibility packages.
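Calling a function whose name is only known at run time looks roughly like this with that API; listing all exports still needs something else (nm, libbfd, or reading the ELF symbol table), since dlsym only resolves names you already have. The library and symbol below are just examples; link with -ldl on Linux:
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    /* look the symbol up by name and cast it to the signature we expect */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}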
A:
The only sensible solution (without reinventing the wheel) seems to be to use libbfd. Downsides are that its documentation is scarce and it is a bit bloated for my purposes.
A:
The source code for nm and objdump are available. If you want to start from specification then ELF is what you want to look into.
/Allan
A:
I've written something like this in Perl. On Win32 it runs dumpbin /exports, on POSIX it runs nm -gP. Then, since it's Perl, the results are interpreted using regular expressions: / _(\S+)@\d+/ for Win32 (stdcall functions) and /^(\S+) T/ for POSIX.
|
what is the easiest way to lookup function names of a c binary in a cross-platform manner?
|
I want to write a small utility to call arbitrary functions from a C shared library. The user should be able to list all the exported functions, similar to what objdump or nm does. I checked these utilities' source but they are intimidating. I couldn't find enough information on Google about whether the dl library has this functionality either.
(Clarification edit: I don't want to just call a function which is known beforehand. I will appreciate an example fragment along your answer.)
|
[
"This might be near to what you're looking for:\nhttp://python.net/crew/theller/ctypes/\n",
"Well, I'll speak a little bit about Windows. The C functions exported from DLLs do not contain information about the types, names, or number of arguments -- nor do I believe you can determine what the calling convention is for a given function.\nFor comparison, take a look at National Instrument's LabVIEW programming environment. You can import functions from DLLs, but you have to manually type in the type and names of the arguments before you use a given function. If this limitation is OK, please edit your question to reflect that.\nI don't know what is possible with *nix environments.\nEDIT: Regarding your clarification. If you don't know what the function is ahead of time, you're pretty screwed on Windows because in general you won't be able to determine what the number and types of arguments the functions take.\n",
"You could try ParaDyn's SymtabAPI. It lets you grab all the symbols in a shared library (or executable) and look at their types, offset, etc. It's all wrapped up in a reasonably nice C++ interface and runs on a lot of platforms. It also provides support for binary rewriting, which you could potentially use to do what you're talking about at runtime.\nWebpage is here:\nhttp://www.paradyn.org/html/symtab2.1-features.html\nDocumentation is here:\nhttp://ftp.cs.wisc.edu/paradyn/releases/release5.2/doc/symtabProgGuide.21.pdf\n",
"A standard-ish API is the dlopen/dlsym API; AFAIK it's implemented by GNU libc on Linux and Mac OS X's standard C library (libSystem), and it might be implemented on Windows by MinGW or other compatibility packages.\n",
"Only sensible solution (without reinventing the wheel) seems to use libbfd. Downsides are its documentation is scarce and it is a bit bloated for my purposes.\n",
"The source code for nm and objdump are available. If you want to start from specification then ELF is what you want to look into.\n/Allan\n",
"I've written something like this in Perl. On Win32 it runs dumpbin /exports, on POSIX it runs nm -gP. Then, since it's Perl, the results are interpreted using regular expressions: / _(\\S+)@\\d+/ for Win32 (stdcall functions) and /^(\\S+) T/ for POSIX.\n"
] |
[
2,
2,
1,
1,
1,
0,
0
] |
[
"Eek! You've touched on one of the very platform-dependent topics of programming. On windows, you have DLLs, on linux, you have ld.so, ld-linux.so, and mac os x's dyld.\n"
] |
[
-1
] |
[
"c",
"symbols"
] |
stackoverflow_0000066544_c_symbols.txt
|
Q:
In Javascript, why is the "this" operator inconsistent?
In JavaScript, the "this" operator can refer to different things under different scenarios.
Typically in a method within a JavaScript "object", it refers to the current object.
But when used as a callback, it becomes a reference to the calling object.
I have found that this causes problems in code, because if you use a method within a JavaScript "object" as a callback function you can't tell whether "this" refers to the current "object" or whether "this" refers to the calling object.
Can someone clarify usage and best practices regarding how to get around this problem?
function TestObject() {
TestObject.prototype.firstMethod = function(){
this.callback();
YAHOO.util.Connect.asyncRequest(method, uri, callBack);
}
TestObject.prototype.callBack = function(o){
// do something with "this"
//when method is called directly, "this" resolves to the current object
//when invoked by the asyncRequest callback, "this" is not the current object
//what design patterns can make this consistent?
this.secondMethod();
}
TestObject.prototype.secondMethod = function() {
alert('test');
}
}
A:
Quick advice on best practices before I babble on about the magic this variable. If you want Object-oriented programming (OOP) in Javascript that closely mirrors more traditional/classical inheritance patterns, pick a framework, learn its quirks, and don't try to get clever. If you want to get clever, learn javascript as a functional language, and avoid thinking about things like classes.
Which brings up one of the most important things to keep in mind about Javascript, and to repeat to yourself when it doesn't make sense. Javascript does not have classes. If something looks like a class, it's a clever trick. Javascript has objects (no derisive quotes needed) and functions. (that's not 100% accurate, functions are just objects, but it can sometimes be helpful to think of them as separate things)
The this variable is attached to functions. Whenever you invoke a function, this is given a certain value, depending on how you invoke the function. This is often called the invocation pattern.
There are four ways to invoke functions in javascript. You can invoke the function as a method, as a function, as a constructor, and with apply.
As a Method
A method is a function that's attached to an object
var foo = {};
foo.someMethod = function(){
alert(this);
}
When invoked as a method, this will be bound to the object the function/method is a part of. In this example, this will be bound to foo.
As A Function
If you have a stand alone function, the this variable will be bound to the "global" object, almost always the window object in the context of a browser.
var foo = function(){
alert(this);
}
foo();
This may be what's tripping you up, but don't feel bad. Many people consider this a bad design decision. Since a callback is invoked as a function and not as a method, that's why you're seeing what appears to be inconsistent behaviour.
Many people get around the problem by doing something like, um, this
var foo = {};
foo.someMethod = function (){
var that=this;
function bar(){
alert(that);
}
}
You define a variable that which points to this. Closure (a topic all its own) keeps that around, so if you call bar as a callback, it still has a reference.
As a Constructor
You can also invoke a function as a constructor. Based on the naming convention you're using (TestObject) this also may be what you're doing and is what's tripping you up.
You invoke a function as a Constructor with the new keyword.
function Foo(){
this.confusing = 'hell yeah';
}
var myObject = new Foo();
When invoked as a constructor, a new Object will be created, and this will be bound to that object. Again, if you have inner functions and they're used as callbacks, you'll be invoking them as functions, and this will be bound to the global object. Use that var that = this; trick/pattern.
Some people think the constructor/new keyword was a bone thrown to Java/traditional OOP programmers as a way to create something similar to classes.
With the Apply Method.
Finally, every function has a method (yes, functions are objects in Javascript) named apply. Apply lets you determine what the value of this will be, and also lets you pass in an array of arguments. Here's a useless example.
function foo(a,b){
alert(a);
alert(b);
alert(this);
}
var args = ['ah','be'];
foo.apply('omg',args);
A:
In JavaScript, this always refers to the object invoking the function that is being executed. So if the function is being used as an event handler, this will refer to the node that fired the event. But if you have an object and call a function on it like:
myObject.myFunction();
Then this inside myFunction will refer to myObject. Does it make sense?
To get around it you need to use closures. You can change your code as follows:
function TestObject() {
TestObject.prototype.firstMethod = function(){
this.callback();
YAHOO.util.Connect.asyncRequest(method, uri, callBack);
}
var that = this;
TestObject.prototype.callBack = function(o){
that.secondMethod();
}
TestObject.prototype.secondMethod = function() {
alert('test');
}
}
A:
this corresponds to the context for the function call. For functions not called as part of an object (no . operator), this is the global context (window in web pages). For functions called as object methods (via the . operator), it's the object.
But, you can make it whatever you want. All functions have .call() and .apply() methods that can be used to invoke them with a custom context. So if I set up an object Chile like so:
var Chile = { name: 'booga', stuff: function() { console.log(this.name); } };
...and invoke Chile.stuff(), it'll produce the obvious result:
booga
But if I want, I can really screw with it:
Chile.stuff.apply({ name: 'supercalifragilistic' });
This is actually quite useful...
A:
If you're using a javascript framework, there may be a handy method for dealing with this. In Prototype, for example, you can call a method and scope it to a particular "this" object:
var myObject = new TestObject();
myObject.firstMethod.bind(myObject);
Note: bind() returns a function, so you can also use it to pre-scope callbacks inside your class:
callBack.bind(this);
http://www.prototypejs.org/api/function/bind
A:
I believe this may be due to how the idea of [closures](http://en.wikipedia.org/wiki/Closure_(computer_science)) works in Javascript.
I am just getting to grips with closures myself. Have a read of the linked wikipedia article.
Here's another article with more information.
Anyone out there able to confirm this?
A:
When callback methods are called from another context, I usually use something that I call a callback context:
var ctx = function CallbackContext()
{
_callbackSender
...
}
function DoCallback(_sender, delegate, callbackFunc)
{
ctx = _callbackSender = _sender;
delegate();
}
function TestObject()
{
test = function()
{
DoCallback(otherFunc, callbackHandler);
}
callbackHandler = function()
{
ctx._callbackSender;
//or this = ctx._callbacjHandler;
}
}
A:
You can also use function.apply(thisArg, argsArray)... where thisArg determines the value of this inside your function... the second parameter is an optional arguments array that you can also pass to your function.
If you don't plan on using the second argument, don't pass anything to it. Internet Explorer will throw a TypeError at you if you pass null (or anything that is not an array) to function.apply()'s second argument...
With the example code you gave it would look something like:
YAHOO.util.Connect.asyncRequest(method, uri, callBack.apply(this));
A:
If you're using Prototype you can use bind() and bindAsEventListener() to get around that problem.
|
In Javascript, why is the "this" operator inconsistent?
|
In JavaScript, the "this" operator can refer to different things under different scenarios.
Typically in a method within a JavaScript "object", it refers to the current object.
But when used as a callback, it becomes a reference to the calling object.
I have found that this causes problems in code, because if you use a method within a JavaScript "object" as a callback function you can't tell whether "this" refers to the current "object" or whether "this" refers to the calling object.
Can someone clarify usage and best practices regarding how to get around this problem?
function TestObject() {
TestObject.prototype.firstMethod = function(){
this.callback();
YAHOO.util.Connect.asyncRequest(method, uri, callBack);
}
TestObject.prototype.callBack = function(o){
// do something with "this"
//when method is called directly, "this" resolves to the current object
//when invoked by the asyncRequest callback, "this" is not the current object
//what design patterns can make this consistent?
this.secondMethod();
}
TestObject.prototype.secondMethod = function() {
alert('test');
}
}
|
[
"Quick advice on best practices before I babble on about the magic this variable. If you want Object-oriented programming (OOP) in Javascript that closely mirrors more traditional/classical inheritance patterns, pick a framework, learn its quirks, and don't try to get clever. If you want to get clever, learn javascript as a functional language, and avoid thinking about things like classes.\nWhich brings up one of the most important things to keep in mind about Javascript, and to repeat to yourself when it doesn't make sense. Javascript does not have classes. If something looks like a class, it's a clever trick. Javascript has objects (no derisive quotes needed) and functions. (that's not 100% accurate, functions are just objects, but it can sometimes be helpful to think of them as separate things)\nThe this variable is attached to functions. Whenever you invoke a function, this is given a certain value, depending on how you invoke the function. This is often called the invocation pattern.\nThere are four ways to invoke functions in javascript. You can invoke the function as a method, as a function, as a constructor, and with apply.\nAs a Method\nA method is a function that's attached to an object\nvar foo = {};\nfoo.someMethod = function(){\n alert(this);\n}\n\nWhen invoked as a method, this will be bound to the object the function/method is a part of. In this example, this will be bound to foo.\nAs A Function\nIf you have a stand alone function, the this variable will be bound to the \"global\" object, almost always the window object in the context of a browser.\n var foo = function(){\n alert(this);\n }\n foo();\n\nThis may be what's tripping you up, but don't feel bad. Many people consider this a bad design decision. Since a callback is invoked as a function and not as a method, that's why you're seeing what appears to be inconsistent behaviour.\nMany people get around the problem by doing something like, um, this\nvar foo = {};\nfoo.someMethod = function (){\n var that=this;\n function bar(){\n alert(that);\n }\n}\n\nYou define a variable that which points to this. Closure (a topic all it's own) keeps that around, so if you call bar as a callback, it still has a reference.\nAs a Constructor\nYou can also invoke a function as a constructor. Based on the naming convention you're using (TestObject) this also may be what you're doing and is what's tripping you up.\nYou invoke a function as a Constructor with the new keyword.\nfunction Foo(){\n this.confusing = 'hell yeah';\n}\nvar myObject = new Foo();\n\nWhen invoked as a constructor, a new Object will be created, and this will be bound to that object. Again, if you have inner functions and they're used as callbacks, you'll be invoking them as functions, and this will be bound to the global object. Use that var that = this; trick/pattern.\nSome people think the constructor/new keyword was a bone thrown to Java/traditional OOP programmers as a way to create something similar to classes.\nWith the Apply Method.\nFinally, every function has a method (yes, functions are objects in Javascript) named apply. Apply lets you determine what the value of this will be, and also lets you pass in an array of arguments. Here's a useless example.\nfunction foo(a,b){\n alert(a);\n alert(b);\n alert(this);\n}\nvar args = ['ah','be'];\nfoo.apply('omg',args);\n\n",
"In JavaScript, this always refers to the object invoking the function that is being executed. So if the function is being used as an event handler, this will refer to the node that fired the event. But if you have an object and call a function on it like:\nmyObject.myFunction();\n\nThen this inside myFunction will refer to myObject. Does it make sense?\nTo get around it you need to use closures. You can change your code as follows:\nfunction TestObject() {\n TestObject.prototype.firstMethod = function(){\n this.callback();\n YAHOO.util.Connect.asyncRequest(method, uri, callBack);\n } \n\n var that = this;\n TestObject.prototype.callBack = function(o){\n that.secondMethod();\n }\n\n TestObject.prototype.secondMethod = function() {\n alert('test');\n }\n}\n\n",
"this corresponds to the context for the function call. For functions not called as part of an object (no . operator), this is the global context (window in web pages). For functions called as object methods (via the . operator), it's the object.\nBut, you can make it whatever you want. All functions have .call() and .apply() methods that can be used to invoke them with a custom context. So if i set up an object Chile like so:\nvar Chile = { name: 'booga', stuff: function() { console.log(this.name); } };\n\n...and invoke Chile.stuff(), it'll produce the obvious result:\nbooga\n\nBut if i want, i can take and really screw with it:\nChile.stuff.apply({ name: 'supercalifragilistic' });\n\nThis is actually quite useful...\n",
"If you're using a javascript framework, there may be a handy method for dealing with this. In Prototype, for example, you can call a method and scope it to a particular \"this\" object:\nvar myObject = new TestObject();\nmyObject.firstMethod.bind(myObject);\n\nNote: bind() returns a function, so you can also use it to pre-scope callbacks inside your class:\ncallBack.bind(this);\n\nhttp://www.prototypejs.org/api/function/bind\n",
"I believe this may be due to how the idea of [closures](http://en.wikipedia.org/wiki/Closure_(computer_science) work in Javascript.\nI am just getting to grips with closures myself. Have a read of the linked wikipedia article.\nHere's another article with more information.\nAnyone out there able to confirm this?\n",
"As soon as callback methods are called from other context I'm usually using something that I'm call callback context:\nvar ctx = function CallbackContext()\n{\n_callbackSender\n...\n}\n\nfunction DoCallback(_sender, delegate, callbackFunc)\n{\n ctx = _callbackSender = _sender;\n delegate();\n}\n\nfunction TestObject()\n{\n test = function()\n {\n DoCallback(otherFunc, callbackHandler);\n }\n\n callbackHandler = function()\n{\n ctx._callbackSender;\n //or this = ctx._callbacjHandler;\n}\n}\n\n",
"You can also use Function.Apply(thisArg, argsArray)... Where thisArg determines the value of this inside your function...the second parameter is an optional arguments array that you can also pass to your function. \nIf you don't plan on using the second argument, don't pass anything to it. Internet Explorer will throw a TypeError at you if you pass null (or anything that is not an array) to function.apply()'s second argument...\nWith the example code you gave it would look something like:\nYAHOO.util.Connect.asyncRequest(method, uri, callBack.Apply(this));\n\n",
"If you're using Prototype you can use bind() and bindAsEventListener() to get around that problem.\n"
] |
[
87,
12,
3,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0000080084_javascript.txt
|
Q:
Whose responsibility is it, anyway?
In the application I am writing I have a Policy class. There are 4 different types of Policy. Each Policy is weighted against the other Policies such that PolicyA > PolicyB > PolicyC > PolicyD.
Whose responsibility is it to implement the logic to determine whether one Policy is greater than another? My initial thought is to overload the > and < operators and implement the logic in the Policy type itself.
Does that violate the SRP?
A:
I would think that a PolicyComparer class should do the evaluation.
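For illustration (an editor's sketch, not part of the original answer), such a comparer might look like the following, assuming the four policy types from the question derive from a common Policy base class:
using System.Collections.Generic;

public class PolicyComparer : IComparer<Policy>
{
    public int Compare(Policy x, Policy y)
    {
        return Weight(x).CompareTo(Weight(y));
    }

    // PolicyA > PolicyB > PolicyC > PolicyD, per the question.
    private static int Weight(Policy p)
    {
        if (p is PolicyA) return 4;
        if (p is PolicyB) return 3;
        if (p is PolicyC) return 2;
        return 1; // PolicyD
    }
}

It can then be used anywhere an IComparer<Policy> is accepted, for example policies.Sort(new PolicyComparer()).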
A:
I think you are on the right track with the overload; however, the extension of this is obviously a lot longer:
if (A > B || B > C || C > D)
...
A:
You could also store a PolicyWeight attribute in your class, that being a simple built-in type ( int, unsigned int, ... ) which can then be easily compared.
A:
Certainly a dedicated comparer class. If you ever need to provide additional logic (e.g. have two or three different ways of comparing policies), this approach allows you more flexibility (not achievable through operator overloads).
A:
You want a PolicyComparator class. If you want to override < and >, that's fine, but do that overriding in the Policy base class, and have those implementations utilize PolicyComparator to do it.
|
Whose responsibility is it, anyway?
|
In the application I am writing I have a Policy class. There are 4 different types of Policy. Each Policy is weighted against the other Policies such that PolicyA > PolicyB > PolicyC > PolicyD.
Whose responsibility is it to implement the logic to determine whether one Policy is greater than another? My initial thought is to overload the > and < operators and implement the logic in the Policy type itself.
Does that violate the SRP?
|
[
"I would think that a PolicyComparer class should do the evaluation.\n",
"I think you are on the right track with the overload however the extension of this is obviously a lot longer\nif (A > B || B > C || C > D)\n ...\n",
"You could also store a PolicyWeight attribute in your class, that being a simple built-in type ( int, unsigned int, ... ) which can then be easily compared.\n",
"Certianly a dedicated comparer class. If you ever need to provide additional logic (e.g. have two or three different ways of comparing policies), this approach allows you more flexibility (not achievable through operator overloads).\n",
"You want a PolicyComparator class. If you want to override < and >, that's fine, but do that overriding in the Policy base class, and have those implementations utilize PolicyComparator to do it.\n"
] |
[
6,
0,
0,
0,
0
] |
[] |
[] |
[
"single_responsibility_principle",
"solid_principles"
] |
stackoverflow_0000083709_single_responsibility_principle_solid_principles.txt
|
Q:
How to get date picture created in java
I would like to extract the date a jpg file was created. Java has the lastModified method for the File object, but appears to provide no support for extracting the created date from the file. I believe the information is stored within the file as the date I see when I hover the mouse pointer over the file in Win XP is different than what I can get by using JNI with "dir /TC" on the file in DOS.
A:
The information is stored within the image in a format called EXIF (Exchangeable Image File Format). There are several libraries out there capable of reading this format, like this one.
A:
The date is stored in the EXIF data in the jpeg. There's a java library and a viewer in java that might be helpful.
A:
I use this metadata library: http://www.drewnoakes.com/code/exif/
Seems to work pretty well, although bear in mind that not all JPEG images have this information, so it can't be 100% fool-proof.
If the EXIF metadata doesn't contain the created date, then you'll probably have to make do with Java's lastUpdated - unless you want to resort to Runtime.exec(...) and using system functions to find out (I wouldn't recommend this, though!)
A:
You probably need something to access the exif data. Google suggests this library.
A:
The code example below asks the user for a file path and then outputs the creation date and time:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
public class Main {
public static void main(final String[] args) {
try {
// get runtime environment and execute child process
Runtime systemShell = Runtime.getRuntime();
BufferedReader br1=new BufferedReader(new InputStreamReader(System.in));
System.out.println("Enter filename: ");
String fname=(String)br1.readLine();
Process output = systemShell.exec("cmd /c dir /a "+fname);
// open reader to get output from process
BufferedReader br = new BufferedReader (new InputStreamReader(output.getInputStream()));
String out="";
String line = null;
int step=1;
while((line = br.readLine()) != null )
{
if(step==6)
{
out=line;
}
step++;
} // display process output
try{
out=out.replaceAll(" ","");
System.out.println("CreationDate: "+out.substring(0,10));
System.out.println("CreationTime: "+out.substring(10,15));
}
catch(StringIndexOutOfBoundsException se)
{
System.out.println("File not found");
}
}
catch (IOException ioe){ System.err.println(ioe); }
catch (Throwable t) { t.printStackTrace();}
}
}
|
How to get date picture created in java
|
I would like to extract the date a jpg file was created. Java has the lastModified method for the File object, but appears to provide no support for extracting the created date from the file. I believe the information is stored within the file as the date I see when I hover the mouse pointer over the file in Win XP is different than what I can get by using JNI with "dir /TC" on the file in DOS.
|
[
"The information is stored within the image in a format called EXIF or link text. There several libraries out there capable of reading this format, like this one\n",
"The date is stored in the EXIF data in the jpeg. There's a java library and a viewer in java that might be helpful.\n",
"I use this metadata library: http://www.drewnoakes.com/code/exif/\nSeems to work pretty well, although bear in mind that not all JPEG images have this information, so it can't be 100% fool-proof.\nIf the EXIF metadata doesn't contain the created date, then you'll probably have to make do with Java's lastUpdated - unless you want to resort to Runtime.exec(...) and using system functions to find out (I wouldn't recommend this, though!)\n",
"You probably need something to access the exif data. Google suggests this library.\n",
"The code example below asks the user for a file path and then outputs the creation date and time:\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\n\npublic class Main {\n\n public static void main(final String[] args) {\n try {\n // get runtime environment and execute child process\n Runtime systemShell = Runtime.getRuntime();\n BufferedReader br1=new BufferedReader(new InputStreamReader(System.in));\n System.out.println(\"Enter filename: \");\n String fname=(String)br1.readLine();\n Process output = systemShell.exec(\"cmd /c dir /a \"+fname);\n // open reader to get output from process\n BufferedReader br = new BufferedReader (new InputStreamReader(output.getInputStream()));\n\n String out=\"\";\n String line = null;\n\n int step=1;\n while((line = br.readLine()) != null ) \n {\n if(step==6)\n {\n out=line;\n }\n step++;\n } // display process output\n\n try{\n out=out.replaceAll(\" \",\"\");\n System.out.println(\"CreationDate: \"+out.substring(0,10));\n System.out.println(\"CreationTime: \"+out.substring(10,15));\n }\n catch(StringIndexOutOfBoundsException se)\n {\n System.out.println(\"File not found\");\n }\n }\n catch (IOException ioe){ System.err.println(ioe); }\n catch (Throwable t) { t.printStackTrace();}\n }\n}\n\n"
] |
[
12,
6,
4,
0,
0
] |
[] |
[] |
[
"date",
"java"
] |
stackoverflow_0000083787_date_java.txt
|
Q:
Replacing plain text password for app
We are currently storing plain text passwords for a web app that we have.
I keep advocating moving to a password hash but another developer said that this would be less secure -- more passwords could match the hash and a dictionary/hash attack would be faster.
Is there any truth to this argument?
A:
Absolutely none. But it doesn't matter. I've posted a similar response before:
It's unfortunate, but people, even programmers, are just too emotional to be easily swayed by argument. Once he's invested in his position (and, if you're posting here, he is) you're not likely to convince him with facts alone. What you need to do is switch the burden of proof. You need to get him out looking for data that he hopes will convince you, and in so doing learn the truth. Unfortunately, he has the benefit of the status quo, so you've got a tough road there.
A:
From Wikipedia
Some computer systems store user passwords, against which to compare user log on attempts, as cleartext. If an attacker gains access to such an internal password store, all passwords and so all user accounts will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible.
A common approach stores only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, usually, another value known as a salt. The salt prevents attackers from building a list of hash values for common passwords. MD5 and SHA1 are frequently used cryptographic hash functions.
There is much more that you can read on the subject on that page. In my opinion, and in everything I've read and worked with, hashing is a better scenario unless you use a very small (< 256 bit) algorithm.
A:
There is absolutely no excuse for keeping plain text passwords in the web app. Use a standard hashing algorithm (SHA-1, not MD5!) with a salt value, so that rainbow table attacks are impossible.
A:
If you do not salt your password, you're susceptible to Rainbow Table attacks (precompiled dictionaries that have valid inputs for a given hash).
The other developer should stop talking about security if you're storing passwords in plaintext and start reading about security.
Collisions are possible, but not a big problem for password apps usually (they are mainly a problem in areas where hashes are used as a way to verify the integrity of files).
So: salt your passwords (by adding the salt to the right side of the password*) and use a good hashing algorithm like SHA-1 or preferably SHA-256 or SHA-512.
PS: A bit more detail about Hashes here.
*I'm a bit unsure whether the salt should go at the beginning or at the end of the string. The problem is that if you have a collision (two inputs with the same hash), adding the salt to the "wrong" side will not change the resulting hash. Either way, you won't have big problems with Rainbow Tables, only with collisions.
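As an editor's illustration of the salt-then-hash approach described above (a sketch only: the class and method names are made up, the question does not name a language, and C# with SHA-256 is assumed here):
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
    // Hashes password + salt with SHA-256 and returns the digest as a hex string.
    public static string Hash(string password, string salt)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] bytes = Encoding.UTF8.GetBytes(password + salt);
            byte[] digest = sha.ComputeHash(bytes);

            StringBuilder hex = new StringBuilder(digest.Length * 2);
            foreach (byte b in digest)
                hex.Append(b.ToString("x2"));
            return hex.ToString();
        }
    }
}

At login you recompute Hash(enteredPassword, storedSalt) and compare it with the stored hash; the per-user salt is stored alongside the hash rather than kept secret.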
A:
I don't understand how your other developer thinks 'more passwords could match the hash'.
There is an argument that a 'hash attack would be faster', but only if you're not salting the passwords as they're hashed. Normally, hashing functions allow you to provide a salt, which makes the use of known hash tables a waste of time.
Personally, I'd say 'no'. Based on the above, as well as the fact that if you do somehow get clear-text exposure, a salted, hashed value is of little value to someone trying to get in. Hashing also provides the benefit of making all passwords 'look' the same length.
ie, if hashing any string always results in a 20 character hash, then if you have only the hash to look at, you can't tell whether the original password was eight characters or sixteen for example.
A:
I encountered this exact same issue in my workplace. What I did to convince him that hashing was more secure was to write a SQL injection that returned the list of users and passwords from the public section of our site. It was escalated right away as a major security issue :)
To prevent against dictionary/hash attacks be sure to hash against a token that's unique to each user and static (username/join date/userguid works well)
A:
There is an old saying about programmers pretending to be cryptographers :)
Jeff Atwood has a good post on the subject: You're Probably Storing Passwords Incorrectly
To reply more extensively, I agree with all of the above; the hash makes it easier in theory to get the user's password since multiple passwords match the same hash. However,
this is much less likely to happen than someone getting access to your database.
A:
There is truth in that if you hash something, yes, there will be collisions so it would be possible for two different passwords to unlock the same account.
From a practical standpoint though, that's a poor argument - a good hashing function (md5 or sha1 would be fine) can pretty much guarantee that for all meaningful strings, especially short ones, there will be no collisions. Even if there were, having two passwords match for one account isn't a huge problem - if someone is in a position to randomly guess passwords fast enough that they are likely to be able to get in, you've got bigger problems.
I would argue that storing the passwords in plain text represents a much greater security risk than hash collisions in the password matching.
A:
I'm not a security expert but I have a feeling that if plain text were more secure, hashing wouldn't exist in the first place.
A:
In theory, yes. Passwords can be longer (more information) than a hash, so there is a possibility of hash collisions. However, most attacks are dictionary-based, and the probability of collisions is infinitely smaller than a successful direct match.
A:
It depends on what you're defending against. If it's an attacker pulling down your database (or tricking your application into displaying the database), then plaintext passwords are useless. There are many attacks that rely on convincing the application to disgorge its private data - SQL injection, session hijack, etc. It's often better not to keep the data at all, but to keep the hashed version so bad guys can't easily use it.
As your co-worker suggests, this can be trivially defeated by running the same hash algorithm against a dictionary and using rainbow tables to pull the info out. The usual solution is to use a secret salt plus additional user information to make the hashed results unique - something like:
String hashedPass=CryptUtils.MD5("alsdl;ksahglhkjfsdkjhkjhkfsdlsdf" + user.getCreateDate().toString() + user.getPassword());
As long as your salt is secret, or your attacker doesn't know the precise creation date of the user's record, a dictionary attack will fail- even in the event that they are able to pull down the password field.
A:
Nothing is less secure than storing plain-text passwords. If you're using a decent hashing algorithm (at least SHA-256, but even SHA-1 is better than nothing) then yes, collisions are possible, but it doesn't matter because given a hash, it's impossible* to calculate what strings hash to it. If you hash the username WITH the password, then that possibility goes out the window as well.
* - technically not impossible, but "computationally infeasible"
If the username is "graeme" and the password is "stackoverflow", then create a string "graeme-stackoverflow-1234" where 1234 is a random number, then hash it and store "hashoutput1234" in the database. When it comes to validating a password, take the username, the supplied password and the number from the end of the stored value (the hash has a fixed length so you can always do this) and hash them together, and compare it with the hash part of the stored value.
A:
more passwords could match the hash and a dictionary/hash attack would be faster.
Yes and no. Use a modern hashing algorithm, like an SHA variant, and that argument gets very, very weak. Do you really need to be worried if that brute force attack is going to take only 352 years instead of 467 years? (Anecdotal joke there.) The value to be gained (not having the password stored in plain text on the system) far outstrips your colleague's concern.
A:
Hope you forgive me for plugging a solution I wrote on this, using client side JavaScript to hash the password before it's transmitted: http://blog.asgeirnilsen.com/2005/11/password-authentication-without.html
|
Replacing plain text password for app
|
We are currently storing plain text passwords for a web app that we have.
I keep advocating moving to a password hash but another developer said that this would be less secure -- more passwords could match the hash and a dictionary/hash attack would be faster.
Is there any truth to this argument?
|
[
"Absolutely none. But it doesn't matter. I've posted a similar response before:\nIt's unfortunate, but people, even programmers, are just too emotional to be easily be swayed by argument. Once he's invested in his position (and, if you're posting here, he is) you're not likely to convince him with facts alone. What you need to do is switch the burden of proof. You need to get him out looking for data that he hopes will convince you, and in so doing learn the truth. Unfortunately, he has the benefit of the status quo, so you've got a tough road there.\n",
"From Wikipedia\n\nSome computer systems store user\n passwords, against which to compare\n user log on attempts, as cleartext. If\n an attacker gains access to such an\n internal password store, all passwords\n and so all user accounts will be\n compromised. If some users employ the\n same password for accounts on\n different systems, those will be\n compromised as well.\nMore secure systems store each\n password in a cryptographically\n protected form, so access to the\n actual password will still be\n difficult for a snooper who gains\n internal access to the system, while\n validation of user access attempts\n remains possible.\nA common approache stores only a\n \"hashed\" form of the plaintext\n password. When a user types in a\n password on such a system, the\n password handling software runs\n through a cryptographic hash\n algorithm, and if the hash value\n generated from the user's entry\n matches the hash stored in the\n password database, the user is\n permitted access. The hash value is\n created by applying a cryptographic\n hash function to a string consisting\n of the submitted password and,\n usually, another value known as a\n salt. The salt prevents attackers from\n building a list of hash values for\n common passwords. MD5 and SHA1 are\n frequently used cryptographic hash\n functions.\n\nThere is much more that you can read on the subject on that page. In my opinion, and in everything I've read and worked with, hashing is a better scenario unless you use a very small (< 256 bit) algorithm.\n",
"There is absolutely no excuse to keeping plain text passwords on the web app. Use a standard hashing algorithm (SHA-1, not MD5!) with a salt value, so that rainbow attacks are impossible.\n",
"If you do not salt your Password, you're suspect to Rainbow Table attacks (precompiled Dictionaries that have valid inputs for a given hash)\nThe other developer should stop talking about security if you're storing passwords in plaintext and start reading about security.\nCollisions are possible, but not a big problem for password apps usually (they are mainly a problem in areas where hashes are used as a way to verify the integrity of files).\nSo: Salt your passwords (by adding the Salt to the right side of the password*) and use a good hashing algorhithm like SHA-1 or preferably SHA-256 or SHA-512.\nPS: A bit more detail about Hashes here.\n*i'm a bit unsure whether or not the Salt should to to the beginning or to the end of the string. The problem is that if you have a collisions (two inputs with the same hash), adding the Salt to the \"wrong\" side will not change the resulting hash. In any way, you won't have big problems with Rainbow Tables, only with collisions\n",
"I don't understand how your other developer things 'more passwords could match the hash'.\nThere is argument to a 'hash attack would be faster', but only if you're not salting the passwords as they're hashed. Normally, hashing functions allow you to provide a salt which makes the use of known hash table a waste of time. \nPersonally, I'd say 'no'. Based on the above, as well as the fact that if you do somehow get clear-text expose, a salted, hashed value is of little value to someone trying to get in. Hashing also provides the benefit of making all passwords 'look' the same length.\nie, if hashing any string always results in a 20 character hash, then if you have only the hash to look at, you can't tell whether the original password was eight characters or sixteen for example.\n",
"I encountered this exact same issue in my workplace. What I did to convince him that hashing was more secure was to write a SQL injection that returned the list of users and passwords from the public section of our site. It was escalated right away as a major security issue :)\nTo prevent against dictionary/hash attacks be sure to hash against a token that's unique to each user and static (username/join date/userguid works well)\n",
"There is an old saying about programmers pretending to be cryptographers :)\nJeff Atwood has a good post on the subject: You're Probably Storing Passwords Incorrectly\nTo reply more extensively, I agree with all of the above, the hash makes it easier in theory to get the user's password since multiple passwords match the same hash. However,\nthis is much less likely to happen than someone getting access to your database.\n",
"There is truth in that if you hash something, yes, there will be collisions so it would be possible for two different passwords to unlock the same account.\nFrom a practical standpoint though, that's a poor argument - A good hashing function (md5 or sha1 would be fine) can pretty much guarantee that for all meaningfully strings, especially short ones, there will be no collisions. Even if there were, having two passwords match for one account isn't a huge problem - If someone is in a position to randomly guess passwords fast enough that they are likely to be able to get in, you've got bigger problems.\nI would argue that storing the passwords in plain text represents a much greater security risk than hash collisions in the password matching.\n",
"I'm not a security expert but I have a feeling that if plain text were more secure, hashing wouldnt exist in the first place. \n",
"In theory, yes. Passwords can be longer (more information) than a hash, so there is a possibility of hash collisions. However, most attacks are dictionary-based, and the probability of collisions is infinitely smaller than a successful direct match. \n",
"It depends on what you're defending against. If it's an attacker pulling down your database (or tricking your application into displaying the database), then plaintext passwords are useless. There are many attacks that rely on convincing the application to disgorge it's private data- SQL injection, session hijack, etc. It's often better not to keep the data at all, but to keep the hashed version so bad guys can't easily use it.\nAs your co-worker suggests, this can be trivially defeated by running the same hash algorithm against a dictionary and using rainbow tables to pull the info out. The usual solution is to use a secret salt plus additional user information to make the hashed results unique- something like:\nString hashedPass=CryptUtils.MD5(\"alsdl;ksahglhkjfsdkjhkjhkfsdlsdf\" + user.getCreateDate().toString() + user.getPassword);\n\nAs long as your salt is secret, or your attacker doesn't know the precise creation date of the user's record, a dictionary attack will fail- even in the event that they are able to pull down the password field.\n",
"Nothing is less secure than storing plain-text passwords. If you're using a decent hashing algorithm (at least SHA-256, but even SHA-1 is better than nothing) then yes, collisions are possible, but it doesn't matter because given a hash, it's impossible* to calculate what strings hash to it. If you hash the username WITH the password, then that possibility goes out the window as well. \n* - technically not impossible, but \"computationally infeasible\"\nIf the username is \"graeme\" and the password is \"stackoverflow\", then create a string \"graeme-stackoverflow-1234\" where 1234 is a random number, then hash it and store \"hashoutput1234\" in the database. When it comes to validating a password, take the username, the supplied password and the number from the end of the stored value (the hash has a fixed length so you can always do this) and hash them together, and compare it with the hash part of the stored value.\n",
"\nmore passwords could match the hash and a dictionary/hash attack would be faster.\n\nYes and no. Use a modern hashing algorithm, like an SHA variant, and that argument gets very, very week. Do you really need to be worried if that brute force attack is going to take only 352 years instead of 467 years? (Anecdotal joke there.) The value to be gained (not having the password stored in plain text on the system) far outstrips your colleague's concern. \n",
"Hope you forgive me for plugging a solution I wrote on this, using client side JavaScript to hash the password before it's transmitted: http://blog.asgeirnilsen.com/2005/11/password-authentication-without.html\n"
] |
[
15,
6,
5,
3,
3,
3,
3,
2,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"passwords"
] |
stackoverflow_0000059022_passwords.txt
|
Q:
Shorthand if + nullable types (C#)
The following returns
Type of conditional expression cannot be determined because there is no implicit conversion between 'double' and '<null>'
aNullableDouble = (double.TryParse(aString, out aDouble) ? aDouble : null)
The reason I can't just use aNullableDouble instead of the roundtrip with aDouble is that aNullableDouble is a property of a generated EntityFramework class, and a property cannot be used as an out parameter.
A:
aNullableDouble = double.TryParse(aString, out aDouble) ? (double?)aDouble : null;
A:
Just blow the syntax out into the full syntax instead of the shorthand ... it'll be easier to read:
aNullableDouble = null;
if (double.TryParse(aString, out aDouble))
{
aNullableDouble = aDouble;
}
A:
The interesting side-effect of using nullable types is that you can't really use a shorthand IF. Shorthand IF has to return the same Type from both conditions, and it can't be null in either case. So, cast or write it out :)
A:
aNullableDouble = (double.TryParse(aString, out aDouble)?new Nullable<double>(aDouble):null)
A:
.NET supports nullable types, but by declaring them as such you have to treat them a bit differently (as, understandably, something which is normally a value type now is sort of reference-ish).
This also might not help much if you end up having to do too much converting between nullable doubles and regular doubles... as might easily be the case with an auto-generated set of classes.
|
Shorthand if + nullable types (C#)
|
The following returns
Type of conditional expression cannot be determined because there is no implicit conversion between 'double' and '<null>'
aNullableDouble = (double.TryParse(aString, out aDouble) ? aDouble : null)
The reason I can't just use aNullableDouble instead of the roundtrip with aDouble is that aNullableDouble is a property of a generated EntityFramework class, and a property cannot be used as an out parameter.
|
[
"aNullableDouble = double.TryParse(aString, out aDouble) ? (double?)aDouble : null;\n\n",
"Just blow the syntax out into the full syntax instead of the shorthand ... it'll be easier to read:\naNullableDouble = null;\nif (double.TryParse(aString, out aDouble))\n{\n aNullableDouble = aDouble;\n}\n\n",
"The interesting side-effect of using nullable types is that you can't really use a shorthand IF. Shorthand IF has to return the same Type from both conditions, and it can't be null in either case. So, cast or write it out :) \n",
"aNullableDouble = (double.TryParse(aString, out aDouble)?new Nullable<double>(aDouble):null)\n\n",
".NET supports nullable types, but by declaring them as such you have to treat them a bit differently (as, understandably, something which is normally a value type now is sort of reference-ish).\nThis also might not help much if you end up having to do too much converting between nullable doubles and regular doubles... as might easily be the case with an auto-generated set of classes.\n"
] |
[
9,
7,
2,
1,
0
] |
[] |
[] |
[
"c#",
"conditional_operator"
] |
stackoverflow_0000083653_c#_conditional_operator.txt
|
Q:
Adobe Reader Error Codes
I am programmatically creating PDFs, and a recent change to my generator is creating documents that crash both Mac Preview and Adobe Reader on my Mac. Before Adobe Reader crashes, it reports:
There was an error processing a page.
There was a problem reading this document (18).
I suspect that that "18" might give me some information on what is wrong with the PDF I've created. Is there a document explaining the meaning of these status codes?
A:
Hold down the Ctrl key while pressing OK and you should be able to load past this point in the document and possibly get more details.
What tool are you using to create the PDF (Aspose)?
A:
I wasn't able to locate any info on the Adobe error code, so I ended up installing xpdf via Darwinports. Loading my PDF with xpdf spit out much more useful error information and I was able to track down the problem. (I was creating a circular reference in a form when I copied content from one document to another.)
|
Adobe Reader Error Codes
|
I am programmatically creating PDFs, and a recent change to my generator is creating documents that crash both Mac Preview and Adobe Reader on my Mac. Before Adobe Reader crashes, it reports:
There was an error processing a page.
There was a problem reading this document (18).
I suspect that that "18" might give me some information on what is wrong with the PDF I've created. Is there a document explaining the meaning of these status codes?
|
[
"Hold down the Ctrl key while pressing OK and you should be able to load past this point in the document and possibly get more details.\nWhat tool are you using to create the PDF (Aspose)?\n",
"I wasn't able to locate any info on the Adobe error code, so I ended up installing xpdf via Darwinports. Loading my PDF with xpdf spit out much more useful error information and I was able to track down the problem. (I was creating a circular reference in a form when I copied content from one document to another.) \n"
] |
[
5,
2
] |
[] |
[] |
[
"pdf",
"pdf_generation"
] |
stackoverflow_0000077102_pdf_pdf_generation.txt
|
Q:
Dense pixelwise reverse projection
I saw a question on reverse projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a kind of more general version of the same problem:
Given either a focal length (which can be solved to produce arcseconds / pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used - it's directly related to focal length), compute the camera ray that goes through each pixel.
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose from each frame (given a sufficiently large sample, of course)... All of that is really just massively-parallel implementations of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having the problem with...
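(Editor's sketch, not part of the original question: the per-pixel back-projection under the standard pinhole model, assuming a 3x3 intrinsic matrix with focal lengths fx, fy in pixels and principal point (cx, cy); C# is used for illustration.)
using System;

public static class PinholeCamera
{
    // Back-projects pixel (u, v) to a unit direction in camera coordinates,
    // i.e. the normalized K^-1 * [u, v, 1]^T for an intrinsic matrix K.
    public static double[] PixelToRay(double u, double v,
                                      double fx, double fy,
                                      double cx, double cy)
    {
        double x = (u - cx) / fx;
        double y = (v - cy) / fy;
        double z = 1.0;

        double len = Math.Sqrt(x * x + y * y + z * z);
        return new[] { x / len, y / len, z / len };
    }
}

Rotating that direction by the camera's orientation and anchoring it at the camera centre (the extrinsic parameters mentioned in a later answer) gives the corresponding world-space ray for the pixel.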
A:
A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.
A:
That's a good suggestion... and I will definitely look into it (photosynth kind of resparked my interest in this subject - but I've been working on it for months for robochamps) - but it's a sparse implementation - it looks for "good" features (points in the image that should be easily identifiable in other views of the same image), and while I certainly plan to score each match based on how good the feature it's matching is, I want the full dense algorithm to derive every pixel... or should I say voxel lol?
A:
After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?
I worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)
|
Dense pixelwise reverse projection
|
I saw a question on reverse projecting 4 2D points to derive the corners of a rectangle in 3D space. I have a kind of more general version of the same problem:
Given either a focal length (which can be solved to produce arcseconds / pixel) or the intrinsic camera matrix (a 3x3 matrix that defines the properties of the pinhole camera model being used - it's directly related to focal length), compute the camera ray that goes through each pixel.
I'd like to take a series of frames, derive the candidate light rays from each frame, and use some sort of iterative solving approach to derive the camera pose from each frame (given a sufficiently large sample, of course)... All of that is really just massively-parallel implementations of a generalized Hough algorithm... it's getting the candidate rays in the first place that I'm having the problem with...
|
[
"A friend of mine found the source code from a university for the camera matching in PhotoSynth. I'd Google around for it, if I were you.\n",
"That's a good suggestion... and I will definitely look into it (photosynth kind of resparked my interest in this subject - but I've been working on it for months for robochamps) - but it's a sparse implementation - it looks for \"good\" features (points in the image that should be easily identifiable in other views of the same image), and while I certainly plan to score each match based on how good the feature it's matching is, I want the full dense algorithm to derive every pixel... or should I say voxel lol?\n",
"After a little poking around, isn't it the extrinsic matrix that tells you where the camera actually is in 3-space?\nI worked at a company that did a lot of this, but I always used the tools that the algorithm guys wrote. :)\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"3d",
"computer_vision",
"image",
"photogrammetry"
] |
stackoverflow_0000076718_3d_computer_vision_image_photogrammetry.txt
|
Q:
How to update a field with random data?
I've got a new varchar(10) field in a database with 1000+ records. I'd like to update the table so I can have random data in the field. I'm looking for a SQL solution.
I know I can use a cursor, but that seems inelegant.
MS-SQL 2000,BTW
A:
update MyTable Set RandomFld = CONVERT(varchar(10), NEWID())
A:
You might be able to adapt something like this to load a test dataset of values, depending on what you are looking for
A:
Additionally, if you are just doing this for testing or one-time use, I would say that an elegant solution is not really necessary.
A:
Why not use the first 10 characters of an md5 checksum of the current timestamp and a random number?
A:
Something like (untested code):
UPDATE yourtable
SET yourfield= CHAR(32+ROUND(RAND()*95,0));
Obviously, concatenate more random characters if you want up to ten chars.
It's possible that the query optimizer might set all fields to the same value; in that case, I would try
SET yourfield=LEFT(yourfield,0)+CHAR…
to trick the optimizer into recalculating the expression each time.
A:
If this is a one-time thing just to get data into the system, I really see no issue with using a cursor; as much as I hate cursors, they do have their place.
A:
How about this:
UPDATE TBL SET Field = LEFT( CONVERT(varchar(255), @myid),10)
A:
if you are in SQL Server you can use
CAST(RAND() as varchar(10))
EDIT: This will only work inside an iteration. As part of a multi-row insert it will use the same RAND() result for each row.
|
How to update a field with random data?
|
I've got a new varchar(10) field in a database with 1000+ records. I'd like to update the table so I can have random data in the field. I'm looking for a SQL solution.
I know I can use a cursor, but that seems inelegant.
MS-SQL 2000,BTW
|
[
"update MyTable Set RandomFld = CONVERT(varchar(10), NEWID())\n\n",
"You might be able to adapt something like this to load a test dataset of values, depending on what you are looking for\n",
"Additionally, if you are just doing this for testing or one time use I would say that an elegant solution is not really necessary.\n",
"Why not use the first 10 characters of an md5 checksum of the current timestamp and a random number?\n",
"Something like (untested code):\nUPDATE yourtable\nSET yourfield= CHAR(32+ROUND(RAND()*95,0));\n\nObviously, concatenate more random characters if you want up to ten chars.\nIt's possible that the query optimizer might set all fields to the same value; in that case, I would try\nSET yourfield=LEFT(yourfield,0)+CHAR…\n\nto trick the optimizer into recalculating each time the expression.\n",
"If this is a one time thing just to get data into the system I really see no issue with using a cursor as much as I hate cursors they do have their place. \n",
"How about this:\nUPDATE TBL SET Field = LEFT( CONVERT(varchar(255), @myid),10)\n\n",
"if you are in SQL Server you can use \nCAST(RAND() as varchar(10))\n\nEDIT: This will only work inside an iteration. As part of a multi-row insert it will use the same RAND() result for each row.\n"
] |
[
5,
1,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"sql",
"sql_server"
] |
stackoverflow_0000083914_sql_sql_server.txt
|
Q:
Common lisp idiom - is there a better way?
I find myself doing this sort of thing all the time. I've been considering writing a macro/function to make this sort of thing easier, but it occurs to me that I'm probably reinventing the wheel.
Is there an existing function that will let me accomplish this same sort of thing more succinctly?
(defun remove-low-words (word-list)
"Return a list with words of insufficient score removed."
(let ((result nil))
(dolist (word word-list)
(when (good-enough-score-p word) (push word result)))
result))
A:
There are several built-in ways of doing this. One way would be:
(remove-if-not 'good-enough-score-p word-list)
And another:
(loop for word in word-list
when (good-enough-score-p word)
collect word)
And yet another:
(mapcan (lambda (word)
(when (good-enough-score-p word)
(list word)))
word-list)
Etc... There's also SERIES and Iterate. The Iterate version is identical to the LOOP version, but the SERIES version is interesting:
(collect (choose-if 'good-enough-score-p (scan word-list))))
So, yes, you're very likely to reinvent some wheel. :-)
A:
The function you want is remove-if-not, which is built-in.
(defun remove-low-words (word-list)
(remove-if-not #'good-enough-score-p word-list))
If you feel like you are re-inventing something to do with lists, you probably are. Check the Hyperspec to see.
The Hyperspec documentation on remove-if-not
All sequence functions
All list-specific functions
|
Common lisp idiom - is there a better way?
|
I find myself doing this sort of thing all the time. I've been considering writing a macro/function to make this sort of thing easier, but it occurs to me that I'm probably reinventing the wheel.
Is there an existing function that will let me accomplish this same sort of thing more succinctly?
(defun remove-low-words (word-list)
"Return a list with words of insufficient score removed."
(let ((result nil))
(dolist (word word-list)
(when (good-enough-score-p word) (push word result)))
result))
|
[
"There are several built-in ways of doing this. One way would be:\n(remove-if-not 'good-enough-score-p word-list)\n\nAnd another:\n(loop for word in word-list \n when (good-enough-score-p word)\n collect word)\n\nAnd yet another:\n(mapcan (lambda (word)\n (when (good-enough-score-p word)\n (list word)))\n word-list)\n\nEtc... There's also SERIES and Iterate. The Iterate version is identical to the LOOP version, but the SERIES version is interesting:\n(collect (choose-if 'good-enough-score-p (scan word-list))))\n\nSo, yes, you're very likely to reinvent some wheel. :-)\n",
"The function you want is remove-if-not, which is built-in.\n(defun remove-low-words (word-list)\n (remove-if-not #'good-enough-score-p word-list))\n\nIf you feel like you are re-inventing something to do with lists, you probably are. Check the Hyperspec to see.\n\nThe Hyperspec documentation on remove-if-not\nAll sequence functions\nAll list-specific functions\n\n"
] |
[
22,
6
] |
[
"There are a couple ways you can do this. First, and probably most easily, you can do it recursively.\n(defun remove-low-words (word-list)\n (if (good-enough-score-p (car word-list))\n (list word (remove-low-words (cdr word-list)))\n (remove-low-words (cdr word-list))))\n\nYou could also do it with mapcar and reduce, where the former can construct you a list with failing elements replaced by nil and the latter can be used to filter out the nil.\nEither would be a good candidate for a \"filter\" macro or function that takes a list and returns the list filtered by some predicate.\n"
] |
[
-2
] |
[
"common_lisp"
] |
stackoverflow_0000038571_common_lisp.txt
|
Q:
How often do you use System.ComponentModel.BackgroundWorker in your UIs? (if ever)
I am sure a responsive UI is something that everyone strives for, and the recommended way to get it is to use the BackgroundWorker for this.
Do you find it easy to work with? Do you use it often? Or do you have your own frameworks for lengthy tasks and progress reporting?
I have found that I am using it quite a lot and even using its delegates wherever I need some sort of progress reporting.
A:
BackgroundWorker makes things a lot easier. One thing I found out the hard way is that BackgroundWorker itself has thread affinity, even though it is supposed to hide the thread-switching problem. It does not automatically switch to the UI thread in every case: it needs to be created and run from the UI thread for thread switching to happen properly.
A:
Multithreaded programming is hard to grasp in the beginning (and veterans still fail sometimes) and BackgroundWorker makes it a bit easier to use. I like the fact that BackgroundWorker has functionality which is easy to implement but even easier to wrongly implement in a subtle way, like cancellation.
I use it if I have and need a progress update, so I can display a meaningful progress bar.
If not, I use a Thread (or borrow from the ThreadPool), because I don't need all the functionality of BackgroundWorker and am proficient enough with threads to start a Thread and wait for it to stop.
As for delegates for non-related tasks, I use those of the Thread classes, like plain void ThreadStart(), or I create my own.
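To make the cancellation point above concrete, here is a minimal console sketch of cooperative cancellation; the sleeps and loop bound only stand in for real work:
using System;
using System.ComponentModel;
using System.Threading;

class CancellationSketch
{
    static void Main()
    {
        var worker = new BackgroundWorker { WorkerSupportsCancellation = true };

        worker.DoWork += (s, e) =>
        {
            var w = (BackgroundWorker)s;
            for (int i = 0; i < 100; i++)
            {
                if (w.CancellationPending) { e.Cancel = true; return; } // cooperative check
                Thread.Sleep(50); // stands in for one chunk of real work
            }
        };

        worker.RunWorkerCompleted += (s, e) =>
            Console.WriteLine(e.Cancelled ? "Stopped early" : "Ran to completion");

        worker.RunWorkerAsync();
        Thread.Sleep(200);    // let a few chunks run...
        worker.CancelAsync(); // ...then ask the worker to stop
        Thread.Sleep(500);    // crude wait so the completion handler can print before exit
    }
}
The subtle part is that cancellation is purely cooperative: if DoWork never checks CancellationPending and never sets e.Cancel, CancelAsync does nothing useful.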
A:
I use it quite often for tasks such as progress indication and background data loading\processing.
Recently I found a use case that is not supported out of the box: "Overridable Task". However, Patrick Smacchia came up with a nice solution.
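For the plain progress-indication case, a minimal Windows Forms sketch might look like this; the Sleep stands in for real loading work and the bare form is only for illustration:
using System;
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public class ProgressSketchForm : Form
{
    private readonly BackgroundWorker worker =
        new BackgroundWorker { WorkerReportsProgress = true };
    private readonly ProgressBar bar = new ProgressBar { Dock = DockStyle.Top };

    public ProgressSketchForm()
    {
        Controls.Add(bar);

        worker.DoWork += (s, e) =>
        {
            for (int i = 1; i <= 100; i++)
            {
                Thread.Sleep(20);                 // stands in for one batch of work
                ((BackgroundWorker)s).ReportProgress(i);
            }
        };

        // Because the worker is started from the UI thread, ProgressChanged is
        // raised back on the UI thread, so touching the ProgressBar here is safe.
        worker.ProgressChanged += (s, e) => bar.Value = e.ProgressPercentage;

        Load += (s, e) => worker.RunWorkerAsync();
    }

    [STAThread]
    static void Main() { Application.Run(new ProgressSketchForm()); }
}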
A:
I've used it once and was quite happy with it. Often, there is no need for "big" multithreading, but only for 2 Threads (UI and Worker), and it works really well without having to worry too much about the underlying Threading Logic.
A:
@Gulzar Thank you for this piece of info : It needs to be created and run from the UI thread for thread switching to happen properly.
One thing I have found to watch for when using a BackgroundWorker is exception handling.
If an exception is thrown in the async process, it will not be rethrown on the main thread; the process will finish and the BackgroundWorker's RunWorkerCompleted event will fire with the error tucked away in RunWorkerCompletedEventArgs.Error.
I like the fact that BackgroundWorker has functionality which is easy to implement but even easier to wrongly implement in a subtle way, like cancellation.
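A minimal sketch of the error behaviour described above; the exception and the crude sleep are only for illustration:
using System;
using System.ComponentModel;
using System.Threading;

class ErrorSketch
{
    static void Main()
    {
        var worker = new BackgroundWorker();

        worker.DoWork += (s, e) =>
        {
            throw new InvalidOperationException("boom"); // thrown on the worker thread
        };

        worker.RunWorkerCompleted += (s, e) =>
        {
            // The exception never reaches the thread that called RunWorkerAsync;
            // BackgroundWorker catches it and hands it to you here instead.
            if (e.Error != null)
                Console.WriteLine("Background work failed: " + e.Error.Message);
        };

        worker.RunWorkerAsync();
        Thread.Sleep(500); // crude wait so the completion handler can fire before exit
    }
}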
|
How often do you use System.ComponentModel.BackgroundWorker in your UIs? (if ever)
|
I am sure a responsive UI is something that everyone strives for, and the recommended way to get it is to use the BackgroundWorker for this.
Do you find it easy to work with? Do you use it often? Or do you have your own frameworks for lengthy tasks and progress reporting?
I have found that I am using it quite a lot and even using its delegates wherever I need some sort of progress reporting.
|
[
"BackgroundWorker makes things a lot easier. One thing I found the hard way is Backgroundworker itself has thread affinity even though it is supposed to hide the thread switching problem. It does not automatically switch to the UI thread in every case. It needs to be created and run from the UI thread for thread switching to happen properly.\n",
"Multithreaded programming is hard to grasp in the beginning (and veterans still fail sometimes) and BackgroundWorker makes it a bit easier to use. I like the fact that BackgroundWorker has functionality which is easy to implement but even easier to wrongly implement in a subtle way, like cancellation.\nI use it if I have and need a progress update, so I can display a meaningful progress bar. \nIf not, I use a Thread (or borrow from the ThreadPool), because I don't need all the functionality of BackgroundWorker and am proficient enough with threads to start a Thread and wait for it to stop.\nAs for delegates for non-related tasks, I use those of the Thread classes, like plain void ThreadStart(), or I create my own.\n",
"I use it quite often for tasks such as progress indication and background data loading\\processing.\nRecently i found use case that is not supported out of box. It's \"Overridable Task\". However Patric Smacchia come up with nice solution.\n",
"I've used it once and was quite happy with it. Often, there is no need for \"big\" multithreading, but only for 2 Threads (UI and Worker), and it works really well without having to worry too much about the underlying Threading Logic.\n",
"@Gulzar Thank you for this piece of info : It needs to be created and run from the UI thread for thread switching to happen properly.\nOne thing to watch for when using a background worker that I have found is exception handlings.\nIf an exception is thrown on the async process it will not throw an exception to the main thread, the process will finish and BackgroundWorker RunWorkerCompleted event will fire with the error being hidden in the RunWorkerCompletedEventArgs.Error.\nI like the fact that BackgroundWorker has functionality which is easy to implement but even easier to wrongly implement in a subtle way, like cancellation.\n"
] |
[
3,
3,
1,
1,
0
] |
[
"My biggest issue with the background worker class is that there really is no way to know when the worker has finished due to cancellation. The BackgroundWorker does not expose the thread it uses so you can't use the standard techniques for synchronizing thread termination (join, etc.). You also can't just wait in a loop on the UI thread for it to end because the RunWorkerCompleted event will never end up firing. The hack I've always had to use is to simply set a flag and then start a timer that will continue checking for the background worker to end. But it's very messy and complicates the business logic.\nSo it is great as long as you don't need to support deterministic cancellation.\n"
] |
[
-1
] |
[
".net",
"multithreading",
"user_interface",
"winforms"
] |
stackoverflow_0000049799_.net_multithreading_user_interface_winforms.txt
|
Q:
How can I provide dynamic CSS styles or custom theme for web site?
There are plenty of ways to provide a dynamic style/theme for a web site, but I am looking for some help on some best practices or techniques that have worked well for others.
I am creating a web site that needs to provide the ability for customers to create or specify their own colors, style, theme, or layout. I'm not sure yet how much flexibility I need, but basically I need to provide branding capabilities.
I will be using ASP.NET, and am open to any ideas that will fit within the ASP.NET framework.
A:
Using Themes for ASP.NET 2 and greater will provide you everything you need for this.
A:
Best way to handle it would be to make a nice CSS document that will specify all the areas that you would like to offer customization, such as header background image, background and text colors, etc. Then build application code to allow specification of which theme to load, and bring up that CSS file.
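As a rough Web Forms sketch of that last step - GetCustomerTheme() and the folder layout are hypothetical stand-ins for however you store the branding choice:
using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

public partial class BrandedPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string theme = GetCustomerTheme(); // hypothetical lookup, e.g. "acme" or "default"

        var css = new HtmlLink { Href = ResolveUrl("~/Branding/" + theme + "/site.css") };
        css.Attributes["rel"] = "stylesheet";
        css.Attributes["type"] = "text/css";

        // Requires <head runat="server"> in the page or master page.
        Page.Header.Controls.Add(css);
    }

    private string GetCustomerTheme()
    {
        // Stand-in: pull the branding choice from wherever you keep it
        // (profile, database, host name, ...); Session is used here only as an example.
        return (Session["CustomerTheme"] as string) ?? "default";
    }
}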
A:
I'd personally go for a CSS-based solution.
You could define the elements' IDs and CSS classes for each page in the web application, so that customers can provide their own set of CSS files.
This approach is platform-agnostic, so that the developer who creates the custom themes is not forced to fit into the ASP.NET themes model - she might as well be a web designer with no programming knowledge.
A:
Themes might be a good solution but having re-read your question I think you might be asking for a method for allowing customers to submit their own branding dynamically, i.e. without you having to modify any files, a hands-off approach? How about having an admin interface consisting of web forms where the customer can upload images and CSS themselves? You could then retrieve that content using a HttpHandler or similar.
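A hedged sketch of such a handler, assuming the admin upload form saved each customer's stylesheet under App_Data; the query-string parameter and paths are invented for illustration:
using System.IO;
using System.Web;

public class BrandingCssHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. requested as /branding.axd?customer=acme
        // Path.GetFileName strips any directory parts so the value cannot walk the tree.
        string customer = Path.GetFileName(context.Request.QueryString["customer"] ?? "default");

        string path = context.Server.MapPath("~/App_Data/Branding/" + customer + ".css");
        if (!File.Exists(path))
            path = context.Server.MapPath("~/App_Data/Branding/default.css");

        context.Response.ContentType = "text/css";
        context.Response.WriteFile(path);
    }
}
The handler still needs the usual registration under httpHandlers in web.config (or an .ashx wrapper) before the page can reference it.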
|
How can I provide dynamic CSS styles or custom theme for web site?
|
There are plenty of ways to provide a dynamic style/theme for a web site, but I am looking for some help on some best practices or techniques that have worked well for others.
I am creating a web site that needs to provide the ability for customers to create or specify their own colors, style, theme, or layout. I'm not sure yet how much flexibility I need, but basically I need to provide branding capabilities.
I will be using ASP.NET, and am open to any ideas that will fit within the ASP.NET framework.
|
[
"Using Themes for ASP.NET 2 and greater will provide you everything you need for this.\n",
"Best way to handle it would be to make a nice CSS document that will specify all the areas that you would like to offer customization, such as header background image, background and text colors, etc. Then build application code to allow specification of which theme to load, and bring up that CSS file.\n",
"I'd personally go for a CSS-based solution.\nYou could define the elements' IDs and CSS classes for each page in the web application, so that customers can provide their own set of CSS files.\nThis approach is platform-agnostic, so that the developer who creates the custom themes is not forced to fit into the ASP.NET themes model - she might as well be a web designer with no programming knowledge.\n",
"Themes might be a good solution but having re-read your question I think you might be asking for a method for allowing customers to submit their own branding dynamically, i.e. without you having to modify any files, a hands-off approach? How about having an admin interface consisting of web forms where the customer can upload images and CSS themselves? You could then retrieve that content using a HttpHandler or similar.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"branding",
"themes"
] |
stackoverflow_0000083982_asp.net_branding_themes.txt
|
Q:
Programmable, secure FTP replacement
We need to move off traditional FTP for security purposes (it transmits its passwords unencrypted). I am hearing SSH touted as the obvious alternative. However I have been driving FTP from an ASP.NET program interface to automate my web-site development, which is now a highly web-enabled process.
Can anyone recommend a secure way to transfer files around which has a program interface that I can drive from ASP.NET?
A:
sharpssh implements sending files via scp.
A:
The question has three subquestions:
1) choosing the secure transfer protocol
The secure version of old FTP exists - it's called FTP/SSL (plain old FTP over SSL encrypted channel). Maybe you can still use your old deployment infrastructure - just check whether it supports the FTPS or FTP/SSL.
You can check details about FTP, FTP/SSL and SFTP differences at http://www.rebex.net/secure-ftp.net/ page.
2) SFTP or FTP/SSL server for Windows
When you choose whether to use SFTP or FTPS you have to deploy the proper server. For FTP/SSL we use Gene6 (http://www.g6ftpserver.com/) on several servers without problems. There are plenty of FTP/SSL Windows servers, so use whatever you want. The situation is a bit more complicated with SFTP servers for Windows - there are only a few working implementations. The Bitvise WinSSHD looks quite promising (http://www.bitvise.com/winsshd).
3) Internet File Transfer Component for ASP.NET
The last part of the solution is secure file transfer from ASP.NET. There are several components on the market. I would recommend the Rebex File Transfer Pack - it supports both FTP (and FTP/SSL) and SFTP (SSH File Transfer).
The following code shows how to upload a file to the server via SFTP. The code is taken from our Rebex SFTP tutorial page.
// create client, connect and log in
Sftp client = new Sftp();
client.Connect(hostname);
client.Login(username, password);
// upload the 'test.zip' file to the current directory at the server
client.PutFile(@"c:\data\test.zip", "test.zip");
// upload the 'index.html' file to the specified directory at the server
client.PutFile(@"c:\data\index.html", "/wwwroot/index.html");
// download the 'test.zip' file from the current directory at the server
client.GetFile("test.zip", @"c:\data\test.zip");
// download the 'index.html' file from the specified directory at the server
client.GetFile("/wwwroot/index.html", @"c:\data\index.html");
// upload a text using a MemoryStream
string message = "Hello from Rebex SFTP for .NET!";
byte[] data = System.Text.Encoding.Default.GetBytes(message);
System.IO.MemoryStream ms = new System.IO.MemoryStream(data);
client.PutFile(ms, "message.txt");
Martin
A:
We have used a variation of this solution in the past which uses the SSH Factory for .NET
A:
The traditional secure replacement for FTP is SFTP, but if you have enough control over both endpoints, you might consider rsync instead: it is highly configurable, secure just by telling it to use ssh, and far more efficient for keeping two locations in sync.
A:
G'day,
You might like to look at ProFTPD.
Heavily customisable. Based on Apache module structure.
From their web site:
ProFTPD grew out of the desire to have a secure and configurable FTP server, and out of a significant admiration of the Apache web server.
We use our adapted version for large scale transfer of web content. Typically 300,000 updates per day.
HTH
cheers,
Rob
|
Programmable, secure FTP replacement
|
We need to move off traditional FTP for security purposes (it transmits its passwords unencrypted). I am hearing SSH touted as the obvious alternative. However I have been driving FTP from an ASP.NET program interface to automate my web-site development, which is now a highly web-enabled process.
Can anyone recommend a secure way to transfer files around which has a program interface that I can drive from ASP.NET?
|
[
"sharpssh implements sending files via scp.\n",
"the question has three subquestions:\n1) choosing the secure transfer protocol\nThe secure version of old FTP exists - it's called FTP/SSL (plain old FTP over SSL encrypted channel). Maybe you can still use your old deployment infrastructure - just check whether it supports the FTPS or FTP/SSL.\nYou can check details about FTP, FTP/SSL and SFTP differences at http://www.rebex.net/secure-ftp.net/ page.\n2) SFTP or FTP/SSL server for Windows\nWhen you choose whether to use SFTP or FTPS you have to deploy the proper server. For FTP/SSL we use the Gene6 (http://www.g6ftpserver.com/) on several servers without problems. There is plenty of FTP/SSL Windows servers so use whatever you want. The situation is a bit more complicated with SFTP server for Windows - there is only a few working implementations. The Bitvise WinHTTPD looks quite promising (http://www.bitvise.com/winsshd).\n3) Internet File Transfer Component for ASP.NET\nLast part of the solution is secure file transfer from asp.net. There is several components on the market. I would recommend the Rebex File Transfer Pack - it supports both FTP (and FTP/SSL) and SFTP (SSH File Transfer).\nFollowing code shows how to upload a file to the server via SFTP. The code is taken from our Rebex SFTP tutorial page.\n// create client, connect and log in \nSftp client = new Sftp();\nclient.Connect(hostname);\nclient.Login(username, password);\n\n// upload the 'test.zip' file to the current directory at the server \nclient.PutFile(@\"c:\\data\\test.zip\", \"test.zip\");\n\n// upload the 'index.html' file to the specified directory at the server \nclient.PutFile(@\"c:\\data\\index.html\", \"/wwwroot/index.html\");\n\n// download the 'test.zip' file from the current directory at the server \nclient.GetFile(\"test.zip\", @\"c:\\data\\test.zip\");\n\n// download the 'index.html' file from the specified directory at the server \nclient.GetFile(\"/wwwroot/index.html\", @\"c:\\data\\index.html\");\n\n// upload a text using a MemoryStream \nstring message = \"Hello from Rebex SFTP for .NET!\";\nbyte[] data = System.Text.Encoding.Default.GetBytes(message);\nSystem.IO.MemoryStream ms = new System.IO.MemoryStream(data);\nclient.PutFile(ms, \"message.txt\");\n\nMartin\n",
"We have used a variation of this solution in the past which uses the SSH Factory for .NET \n",
"The traditional secure replacement for FTP is SFTP, but if you have enough control over both endpoints, you might consider rsync instead: it is highly configurable, secure just by telling it to use ssh, and far more efficient for keeping two locations in sync.\n",
"G'day,\nYou might like to look at ProFPD.\nHeavily customisable. Based on Apache module structure.\nFrom their web site:\n\nProFTPD grew out of the desire to have a secure and configurable FTP server, and out of a significant admiration of the Apache web server.\n\nWe use our adapted version for large scale transfer of web content. Typically 300,000 updates per day.\nHTH\ncheers,\nRob\n"
] |
[
4,
3,
1,
1,
0
] |
[] |
[] |
[
"asp.net",
"ftp"
] |
stackoverflow_0000039070_asp.net_ftp.txt
|
Q:
.NET 3.5 Linq Datasource and Joins
I have been trying out the new Dynamic Data site creation tool that shipped with .NET 3.5. The tool uses LINQ data sources to get the data from the database, using a .dbml context file for a reference. I am interested in customizing a data grid, but I need to show data from more than one table. Does anyone know how to do this using the LinqDataSource object?
A:
(EDIT misunderstood the question, revising my answer to the following)
Your LinqDataSource could point to a view, which allows you to overcome the problem of not being able to express a Join in the actual element. From "How to: Create LINQ to SQL Classes Mapped to Tables and Views (O/R Designer)":
The O/R Designer is a simple object relational mapper because it supports only 1:1 mapping relationships. In other words, an entity class can have only a 1:1 mapping relationship with a database table or view. Complex mapping, such as mapping an entity class to multiple tables, is not supported. However, you can map an entity class to a view that joins multiple related tables.
A:
If the tables are connected by a foreign key, you can easily reference both tables as they will be joined by linq automatically (you can see easily if you look in your dbml and there is an arrow connecting the tables) - if not, see if you can add one.
To do that, you can just use something like this:
<%# Bind("unit1.unit_name") %>
Here the table's foreign key to the other table is exposed as the 'unit1' association, and you pull that unit's 'unit_name' property.
I hope that makes sense.
A:
You cannot put more than one object/datasource on a datagrid. You will have to build a single ConceptObject that combines the exposed properties of the part entities. Try to use DB -> L2S Entities -> ConceptObject. It would be a very contrived case for the DB model to match the ConceptObject field-for-field.
A:
You are best off using an ObjectDataSource when you want to do more complex LINQ, and binding your grid to the ObjectDataSource.
You do, however, need to watch out for anonymous types, which could give you some trouble, but anything is possible...
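Tying the last two answers together, here is a hedged sketch of projecting a LINQ to SQL join into a single flat object that a grid or ObjectDataSource can bind to; the data context, table and column names are invented for illustration:
using System.Collections.Generic;
using System.Linq;

// Flat "concept object" combining columns from two related tables, for grid binding.
public class OrderRow
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderRepository
{
    public static List<OrderRow> GetOrderRows()
    {
        using (var db = new ShopDataContext()) // hypothetical LINQ to SQL DataContext from a .dbml
        {
            return (from o in db.Orders
                    join c in db.Customers on o.CustomerId equals c.CustomerId
                    select new OrderRow
                    {
                        OrderId = o.OrderId,
                        CustomerName = c.Name,
                        Total = o.Total
                    }).ToList();
        }
    }
}
An ObjectDataSource can then point at GetOrderRows via its TypeName and SelectMethod properties, which avoids binding the grid to an anonymous type.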
|
.NET 3.5 Linq Datasource and Joins
|
I have been trying out the new Dynamic Data site creation tool that shipped with .NET 3.5. The tool uses LINQ data sources to get the data from the database, using a .dbml context file for a reference. I am interested in customizing a data grid, but I need to show data from more than one table. Does anyone know how to do this using the LinqDataSource object?
|
[
"(EDIT misunderstood the question, revising my answer to the following)\nYour LinqDataSource could point to a view, which allows you to overcome the problem of not being able to express a Join in the actual element. From \"How to: Create LINQ to SQL Classes Mapped to Tables and Views (O/R Designer)\":\n\nThe O/R Designer is a simple object relational mapper because it supports only 1:1 mapping relationships. In other words, an entity class can have only a 1:1 mapping relationship with a database table or view. Complex mapping, such as mapping an entity class to multiple tables, is not supported. However, you can map an entity class to a view that joins multiple related tables.\n\n",
"If the tables are connected by a foreign key, you can easily reference both tables as they will be joined by linq automatically (you can see easily if you look in your dbml and there is an arrow connecting the tables) - if not, see if you can add one.\nTo do that, you can just use something like this:\n<%# Bind(\"unit1.unit_name\") %>\n\nWhere in the table, 'unit' has a foreign key that references another table and you pull that 'unit's property of 'unit_name'\nI hope that makes sense.\n",
"You cannot put more than one object/datasource on a datagrid. You will have to build a single ConceptObject that combines the exposed properties of the part Entities. Try to use DB -> L2S Entities -> ConceptObject. You must be very contrived if the DB model matches the ConceptObject field-for-field.\n",
"You are best using a ObjectDataSource when you wnt to do more complex Linq and bind your Grid to the ObjectDataSource.\nYou do however need to watch out for Anonymous types that could give you some trouble, but anything is posible...\n"
] |
[
2,
2,
0,
0
] |
[] |
[] |
[
".net_3.5",
"dynamic_data",
"linq",
"linq_to_sql"
] |
stackoverflow_0000066094_.net_3.5_dynamic_data_linq_linq_to_sql.txt
|
Q:
C++ does begin/end/rbegin/rend execute in constant time for std::set, std::map, etc?
For data types such as std::set and std::map where lookup occurs in logarithmic time, is the implementation required to maintain the begin and end iterators? Does accessing begin and end imply a lookup that could occur in logarithmic time?
I have always assumed that begin and end always occur in constant time, however I can't find any confirmation of this in Josuttis. Now that I'm working on something where I need to be anal about performance, I want to make sure to cover my bases.
Thanks
A:
They happen in constant time. I'm looking at page 466 of the ISO/IEC 14882:2003 standard:
Table 65 - Container Requirements
a.begin(); (constant complexity)
a.end(); (constant complexity)
Table 66 - Reversible Container Requirements
a.rbegin(); (constant complexity)
a.rend(); (constant complexity)
A:
Yes, according to http://www.cplusplus.com/reference/stl/, begin(), end() etc are all O(1).
A:
In the C++ standard, Table 65 in 23.1 (Container Requirements) lists begin() and end() as requiring constant time. If your implementation violates this, it isn't conforming.
A:
Just look at the code, here you can see the iterators in the std::map in the GNU libstdc++
std::map
you'll see that all end rend cend ... are all implemented in constant time.
A:
Be careful with hash_map though. begin() is not constant.
A:
For std::set
begin: constant, end: constant,
rbegin: constant,
rend: constant,
For std::map
they are also constant (all of them)
if you have any doubt, just check www.cplusplus.com
|
C++ does begin/end/rbegin/rend execute in constant time for std::set, std::map, etc?
|
For data types such as std::set and std::map where lookup occurs in logarithmic time, is the implementation required to maintain the begin and end iterators? Does accessing begin and end imply a lookup that could occur in logarithmic time?
I have always assumed that begin and end always occur in constant time, however I can't find any confirmation of this in Josuttis. Now that I'm working on something where I need to be anal about performance, I want to make sure to cover my bases.
Thanks
|
[
"They happen in constant time. I'm looking at page 466 of the ISO/IEC 14882:2003 standard:\nTable 65 - Container Requiments\na.begin(); (constant complexity)\na.end(); (constant complexity)\nTable 66 - Reversible Container Requirements\na.rbegin(); (constant complexity)\na.rend(); (constant complexity)\n",
"Yes, according to http://www.cplusplus.com/reference/stl/, begin(), end() etc are all O(1).\n",
"In the C++ standard, Table 65 in 23.1 (Container Requirements) lists begin() and end() as requiring constant time. If your implementation violates this, it isn't conforming.\n",
"Just look at the code, here you can see the iterators in the std::map in the GNU libstdc++\nstd::map\nyou'll see that all end rend cend ... are all implemented in constant time.\n",
"Be careful with hash_map though. begin() is not constant.\n",
"For std::set\nbegin: constant, end: constant, \nrbegin: constant, \nrend: constant, \nFor std::map\nthey are also constant (all of them)\nif you have any doubt, just check www.cplusplus.com\n"
] |
[
9,
5,
4,
2,
1,
0
] |
[] |
[] |
[
"c++",
"stl"
] |
stackoverflow_0000083640_c++_stl.txt
|
Q:
WCF Oracle adaptor and UDT
Is there any way to work with Oracle UDT's with current WCF adaptor?
A:
If I understand your question correctly, you want to express Oracle user defined types in WCF services? This will really depend on the protocol to be used. For example, if you are using one of the SOAP protocols like the WS* protocols, then you are stuck with those data types that are defined in SOAP. Going from any data type, whether it be a built-in type in your database, a custom type in C#, or a user defined type in SQL Server, Oracle, whatever, you will have this limitation. Your simple types will probably map to something less complex like a numeric or a string. If you have a complex type you may opt to write your own serialization for the type.
|
WCF Oracle adaptor and UDT
|
Is there any way to work with Oracle UDT's with current WCF adaptor?
|
[
"If I understand you question correctly, you want to express Oracle user defined types in WCF services? This will really depend on the protocol to be used. For example, if you are using one of the SOAP protocols like the WS* protocols, then you are stuck with those data types that are defined in SOAP. Going from any data type, whether it be a built in type in your database, a custom type in C#, or a user defined type in SQL Server, Oracle, whatever, you will have this limitation. Your simple types will prolly map to something less complex like a numeric or a string. If you have a complex type you may opt to write your own serialization for the type.\n"
] |
[
1
] |
[] |
[] |
[
"biztalk",
"oracle",
"wcf"
] |
stackoverflow_0000083122_biztalk_oracle_wcf.txt
|
Q:
ChartFx Lite LicenseException on build server
I downloaded ChartFx Lite and am using it successfully in my windows forms application on my development machine. I have added the ChartFX.Lite.dll assembly to my source repository and am trying to build the project on my build server that does not have ChartFx Lite installed. I get the error:
Exception occurred creating type 'SoftwareFX.ChartFX.Lite.Chart, ChartFX.Lite, Version=6.0.839.0, Culture=neutral, PublicKeyToken=a1878e2052c08dce' System.ComponentModel.LicenseException: Couldn't get Design Time license for 'SoftwareFX.ChartFX.Lite.Chart'
What do I need to do to get this working without installing ChartFx Lite on my build server?
A:
If you just want to test the build, you can suppress the lines concerning ChartFx from the .licx file created by Visual Studio. It should build this way, but probably will not execute correctly, as the license will not be included.
The .licx file contains instructions to include binary license resource during build. I'm afraid that if you want a real build you have to install ChartFx on the build server.
|
ChartFx Lite LicenseException on build server
|
I downloaded ChartFx Lite and am using it successfully in my windows forms application on my development machine. I have added the ChartFX.Lite.dll assembly to my source repository and am trying to build the project on my build server that does not have ChartFx Lite installed. I get the error:
Exception occurred creating type 'SoftwareFX.ChartFX.Lite.Chart, ChartFX.Lite, Version=6.0.839.0, Culture=neutral, PublicKeyToken=a1878e2052c08dce' System.ComponentModel.LicenseException: Couldn't get Design Time license for 'SoftwareFX.ChartFX.Lite.Chart'
What do I need to do to get this working without installing ChartFx Lite on my build server?
|
[
"If you just want to test the build, you can suppress the lines concerning ChartFx from the .licx file created by Visual Studio. It should build this way, but probably will not execute correctly, as the license will not be included.\nThe .licx file contains instructions to include binary license resource during build. I'm afraid that if you want a real build you have to install ChartFx on the build server.\n"
] |
[
1
] |
[] |
[] |
[
"chartfx"
] |
stackoverflow_0000083104_chartfx.txt
|
Q:
Should a Log4J logger be declared as transient?
I am using Java 1.4 with Log4J.
Some of my code involves serializing and deserializing value objects (POJOs).
Each of my POJOs declares a logger with
private final Logger log = Logger.getLogger(getClass());
The serializer complains of org.apache.log4j.Logger not being Serializable.
Should I use
private final transient Logger log = Logger.getLogger(getClass());
instead?
A:
How about using a static logger? Or do you need a different logger reference for each instance of the class? Static fields are not serialized by default; you can explicitly declare fields to serialize with a private, static, final array of ObjectStreamField named serialPersistentFields. See Oracle documentation
Added content:
As you use getLogger(getClass()), you will use the same logger in each instance. If you want to use separate logger for each instance you have to differentiate on the name of the logger in the getLogger() -method. e.g. getLogger(getClass().getName() + hashCode()). You should then use the transient attribute to make sure that the logger is not serialized.
A:
Make the logger static; static fields are not serialized, which avoids the problem.
There's no reason to make the logger non-static unless you have a strong reason to do so.
A:
If you really want to go the transient approach you will need to reset the log when your object is deserialized. The way to do that is to implement the method:
private void readObject(java.io.ObjectInputStream in)
throws IOException, ClassNotFoundException;
The javadocs for Serializable has information on this method.
Your implementation of it will look something like:
private void readObject(java.io.ObjectInputStream in)
throws IOException, ClassNotFoundException {
log = Logger.getLogger(...);
in.defaultReadObject();
}
If you do not do this then log will be null after deserializing your object.
A:
Either declare your logger field as static or as transient.
Both ways ensure the writeObject() method will not attempt to write the field to the output stream during serialization.
Usually logger fields are declared static, but if you need it to be an instance field just declare it transient, as its usually done for any non-serializable field. Upon deserialization the logger field will be null, though, so you have to implement a readObject() method to initialize it properly.
A:
Try making the Logger static instead. Then you don't have to care about serialization because it is handled by the class loader.
A:
These kinds of cases, particularly in EJB, are generally best handled via thread local state. Usually the use case is something like you have a particular transaction which is encountering a problem and you need to elevate logging to debug for that operation so you can generate detailed logging on the problem operation. Carry some thread local state across the transaction and use that to select the correct logger. Frankly I don't know where it would be beneficial to set the level on an INSTANCE in this environment because the mapping of instances into the transaction should be a container level function, you won't actually have control of which instance is used in a given transaction anyway.
Even in cases where you're dealing with a DTO it is not generally a good idea to design your system in such a way that a given specific instance is required because the design can easily evolve in ways that make that a bad choice. You could come along a month from now and decide that efficiency considerations (caching or some other life cycle changing optimization) will break your assumption about the mapping of instances into units of work.
A:
If you want the Logger to be per-instance then yes, you would want to make it transient if you're going to serialize your objects. Log4J Loggers aren't serializable, not in the version of Log4J that I'm using anyway, so if you don't make your Logger fields transient you'll get exceptions on serialization.
A:
Loggers are not serializable, so you must mark them transient when storing them in instance fields.
If you want to restore the logger after deserialization, you can store the Level (as a String) inside your object, which does get serialized.
A:
There are good reasons to use an instance logger. One very good use case is so you can declare the logger in a super-class and use it in all sub-classes (the only downside is that logs from the super-class are attributed to the sub-class but it is usually easy to see that).
(Like others have mentioned use static or transient).
|
Should a Log4J logger be declared as transient?
|
I am using Java 1.4 with Log4J.
Some of my code involves serializing and deserializing value objects (POJOs).
Each of my POJOs declares a logger with
private final Logger log = Logger.getLogger(getClass());
The serializer complains of org.apache.log4j.Logger not being Serializable.
Should I use
private final transient Logger log = Logger.getLogger(getClass());
instead?
|
[
"How about using a static logger? Or do you need a different logger reference for each instance of the class? Static fields are not serialized by default; you can explicitly declare fields to serialize with a private, static, final array of ObjectStreamField named serialPersistentFields. See Oracle documentation\nAdded content: \nAs you use getLogger(getClass()), you will use the same logger in each instance. If you want to use separate logger for each instance you have to differentiate on the name of the logger in the getLogger() -method. e.g. getLogger(getClass().getName() + hashCode()). You should then use the transient attribute to make sure that the logger is not serialized. \n",
"The logger must be static; this would make it non-serializable.\nThere's no reason to make logger non-static, unless you have a strong reason to do it so.\n",
"If you really want to go the transient approach you will need to reset the log when your object is deserialized. The way to do that is to implement the method:\n private void readObject(java.io.ObjectInputStream in) \n throws IOException, ClassNotFoundException;\n\nThe javadocs for Serializable has information on this method. \nYour implementation of it will look something like:\n private void readObject(java.io.ObjectInputStream in) \n throws IOException, ClassNotFoundException {\n log = Logger.getLogger(...);\n in.defaultReadObject();\n }\n\nIf you do not do this then log will be null after deserializing your object.\n",
"Either declare your logger field as static or as transient. \nBoth ways ensure the writeObject() method will not attempt to write the field to the output stream during serialization.\nUsually logger fields are declared static, but if you need it to be an instance field just declare it transient, as its usually done for any non-serializable field. Upon deserialization the logger field will be null, though, so you have to implement a readObject() method to initialize it properly.\n",
"Try making the Logger static instead. Than you don't have to care about serialization because it is handled by the class loader.\n",
"These kinds of cases, particularly in EJB, are generally best handled via thread local state. Usually the use case is something like you have a particular transaction which is encountering a problem and you need to elevate logging to debug for that operation so you can generate detailed logging on the problem operation. Carry some thread local state across the transaction and use that to select the correct logger. Frankly I don't know where it would be beneficial to set the level on an INSTANCE in this environment because the mapping of instances into the transaction should be a container level function, you won't actually have control of which instance is used in a given transaction anyway.\nEven in cases where you're dealing with a DTO it is not generally a good idea to design your system in such a way that a given specific instance is required because the design can easily evolve in ways that make that a bad choice. You could come along a month from now and decide that efficiency considerations (caching or some other life cycle changing optimization) will break your assumption about the mapping of instances into units of work. \n",
"If you want the Logger to be per-instance then yes, you would want to make it transient if you're going to serialize your objects. Log4J Loggers aren't serializable, not in the version of Log4J that I'm using anyway, so if you don't make your Logger fields transient you'll get exceptions on serialization.\n",
"Loggers are not serializable so you must use transient when storing them in instance fields.\nIf you want to restore the logger after deserialization you can store the Level (String) indide your object which does get serialized.\n",
"There are good reasons to use an instance logger. One very good use case is so you can declare the logger in a super-class and use it in all sub-classes (the only downside is that logs from the super-class are attributed to the sub-class but it is usually easy to see that).\n(Like others have mentioned use static or transient).\n"
] |
[
27,
11,
9,
5,
2,
2,
0,
0,
0
] |
[] |
[] |
[
"java",
"log4j",
"logging",
"serialization"
] |
stackoverflow_0000082109_java_log4j_logging_serialization.txt
|
Q:
VC++ and MapPoint OCX control dialog issue
I am writing a VC++ MFC dialog based app which requires Microsoft MapPoint embedding in it. I'm using MS VC++ .NET 2003 and MapPoint Europe 2006 to do this, but am having problems: when I select "Insert ActiveX Control" no MapPoint control appears in the list of options. I have tried manually registering mappointcontrol.ocx with regsvr32, which appears to succeed, but still the control doesn't appear on the list.
Can anyone suggest what I am doing wrong here, and any possible solutions.
Thanks
Ian
A:
Have you tried using the ActiveX control test container? Is it in the list of controls? How about using the register button in the test container?
Also check the registry to see if it is registered. You should have an entry in HKEY_CLASSES_ROOT\controlName that has a CLSID element pointing to a UUID. That UUID should also be in HKEY_CLASSES_ROOT\CLSID\uuid and have a LocalServer32 (or, for an in-process .ocx/.dll, InprocServer32) entry that points to the DLL and a ProgID that points back to controlName.
A:
I have now got the Mappoint control working but in a slightly different way. The control does appear on the list of controls the test container can use. I have tried reregistering it and unregistering it but still it doesn't appear on the list of controls when I try a "Insert ActiveX Control". However if I use "Add/Remove Toolbox Items" I can add it to the toolbox and then drag it into my app where it works fine. I'm not sure why this method works but it does and I can get on with my coding.
Many thanks for all your help with this.
|
VC++ and MapPoint OCX control dialog issue
|
I am writing a VC++ MFC dialog based app which requires Microsoft MapPoint embedding in it. I'm using MS VC++ .NET 2003 and MapPoint Europe 2006 to do this, but am having problems: when I select "Insert ActiveX Control" no MapPoint control appears in the list of options. I have tried manually registering mappointcontrol.ocx with regsvr32, which appears to succeed, but still the control doesn't appear on the list.
Can anyone suggest what I am doing wrong here, and any possible solutions.
Thanks
Ian
|
[
"Have you tried using the ActiveX control test container? Is it in the list of controls? How about using the register button in the test container?\nAlso check the registry to see if it is registered. You should have an entry in HKEY-CLASSES-ROOT\\controlName that has a CLSID element that points to a UUID. That UUID should also be in HKEY-CLASSES-ROOT\\CLSID\\uuid and have a LocalServer32 entry that points to the DLL and ProgID that points back to controlName.\n",
"I have now got the Mappoint control working but in a slightly different way. The control does appear on the list of controls the test container can use. I have tried reregistering it and unregistering it but still it doesn't appear on the list of controls when I try a \"Insert ActiveX Control\". However if I use \"Add/Remove Toolbox Items\" I can add it to the toolbox and then drag it into my app where it works fine. I'm not sure why this method works but it does and I can get on with my coding.\nMany thanks for all your help with this.\n"
] |
[
1,
1
] |
[] |
[] |
[
"mappoint",
"visual_c++"
] |
stackoverflow_0000051866_mappoint_visual_c++.txt
|
Q:
Consequences of changing USERPostMessageLimit
One of our legacy applications relies heavily on PostThreadMessage() for inter-thread communication, so we increased USERPostMessageLimit in the registry (way) beyond the normal 10,000.
However, documentation on MSDN states that "This limit should be sufficiently large. If your application exceeds the limit, it should be redesigned to avoid consuming so many system resources." [1]
Can anyone enlighten me as to how exactly consuming too many system resources manifests itself? What exactly are system resources? Can I somehow monitor an application's usage of system resources? Any information would be very helpful in deciding whether it is worth the time and effort to redesign this application.
A:
The resources it is referring to are those used by the threads for receiving/handling the messages. You can monitor the thread pool size and other resources using Task Manager (look at View -> Select Columns). It may help you identify the specific resource if the consumer is resource-locked; look for a resource count that tops out even while your threads are increasing.
However, if you need to increase USERPostMessageLimit then the message producer is simply overloading the message consumer; by increasing this limit you are compounding your problem, not fixing it. Reduce USERPostMessageLimit back to the default, and if your message producer cannot post the message, try sleeping before retrying, allowing the consuming thread to clear some messages.
|
Consequences of changing USERPostMessageLimit
|
One of our legacy applications relies heavily on PostThreadMessage() for inter-thread communication, so we increased USERPostMessageLimit in the registry (way) beyond the normal 10,000.
However, documentation on MSDN states that "This limit should be sufficiently large. If your application exceeds the limit, it should be redesigned to avoid consuming so many system resources." [1]
Can anyone enlighten me as to how exactly consuming too many system resources manifests itself? What exactly are system resources? Can I somehow monitor an application's usage of system resources? Any information would be very helpful in deciding whether it is worth the time and effort to redesign this application.
|
[
"The resources it is refering to are those used by the threads for receiving/handling the messages. You can monitor the thread pool size & other resources using the Taskmanager (look at View->Select Columns). It it may help you identify the specific resource if the consumer is resource locked, look for a resource count that tops out even while your threads are increasing.\nHowever; if you need to increase USERPostMessageLimit then message producer is simply overloading the message consumer; by increasing this limit you are compounding your problem not fixing it. Reducing USERPostMessageLimit back to the default, and if your message producer cannot post the message try sleeping before retrying, allowing the consuming thread to clear some messages. \n"
] |
[
1
] |
[] |
[] |
[
"winapi"
] |
stackoverflow_0000083126_winapi.txt
|
Q:
PythonWin's python interactive shell calling constructors twice?
While answering Static class variables in Python
I noticed that the PythonWin PyWin32 build 209.2 interpreter seems to evaluate the constructor twice:
PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1, 1]
>>>
while the python interpreter does the right thing
C:\>python
ActivePython 2.5.0.0 (ActiveState Software Inc.) based on
Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1]
>>>
A:
My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type myobject. it offers a little popup of all the available method names. So I think when you type X(). it's creating an instance of X in the background and doing a dir or similar to find out the attributes of the object.
So the constructor is only being run once for each object but to give you the interactivity it's creating objects silently in the background without telling you about it.
A:
Dave Webb is correct, and you can see this by adding a print statement:
>>> class X:
... l = []
... def __init__(self):
... print 'inited'
... self.__class__.l.append(1)
...
Then as soon as you type the period in X(). it prints inited prior to offering you the completion popup.
A:
Two small additional points.
First, self.__class__.l.append(1) isn't really sensible.
Just say self.l.append(1). Python searches the instance before it searches the class for the reference.
More importantly, class-level variables are rarely useful. Class-level constants are sometimes sensible, but even then, they're hard to justify.
In C++ and Java, class-level ('static') variables seem handy, but don't do much of value. They're hard to teach to n00bz -- often wasting lots of classroom time on minutia -- and they aren't very practical. If you want to know all instances of an X that was created, it's probably better to create an XFactory class that doesn't rely on class variables.
class XFactory( object ):
def __init__( self ):
self.listOfX= []
def makeX( self, *args, **kw ):
newX= X(*args,**kw)
self.listOfX.append(newX)
return newX
No class-level variable anomalies. And, it doesn't conflate the X's with the collection of X's. In the long run, I find it confusing when a class is both some thing and also some collection of things.
Simpler is better than Complex.
|
PythonWin's python interactive shell calling constructors twice?
|
While answering Static class variables in Python
I noticed that the PythonWin PyWin32 build 209.2 interpreter seems to evaluate the constructor twice:
PythonWin 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32.
Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1, 1]
>>>
while the python interpreter does the right thing
C:\>python
ActivePython 2.5.0.0 (ActiveState Software Inc.) based on
Python 2.5 (r25:51908, Mar 9 2007, 17:40:28) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> class X:
... l = []
... def __init__(self):
... self.__class__.l.append(1)
...
>>> X().l
[1]
>>>
|
[
"My guess is as follows. The PythonWin editor offers autocomplete for an object, i.e. when you type myobject. it offers a little popup of all the availble method names. So I think when you type X(). it's creating an instance of X in the background and doing a dir or similar to find out the attributes of the object.\nSo the constructor is only being run once for each object but to give you the interactivity it's creating objects silently in the background without telling you about it.\n",
"Dave Webb is correct, and you can see this by adding a print statement:\n>>> class X:\n... l = []\n... def __init__(self):\n... print 'inited'\n... self.__class__.l.append(1)\n... \n\nThen as soon as you type the period in X(). it prints inited prior to offering you the completion popup.\n",
"Two small additional points.\nFirst, self.__class__.l.append(1) isn't really sensible.\nJust say self.l.append(1). Python searches the instance before it searches the class for the reference.\nMore importantly, class-level variables are rarely useful. Class-level constants are sometimes sensible, but even then, they're hard to justify. \nIn C++ and Java, class-level ('static') variables seem handy, but don't do much of value. They're hard to teach to n00bz -- often wasting lots of classroom time on minutia -- and they aren't very practical. If you want to know all instances of an X that was created, it's probably better to create an XFactory class that doesn't rely on class variables.\nclass XFactory( object ):\n def __init__( self ):\n self.listOfX= []\n def makeX( self, *args, **kw ):\n newX= X(*args,**kw)\n self.listOfX.append(newX)\n return newX\n\nNo class-level variable anomalies. And, it doesn't conflate the X's with the collection of X's. In the long run, I find it confusing when a class is both some thing and also some collection of things.\nSimpler is better than Complex.\n"
] |
[
3,
2,
1
] |
[] |
[] |
[
"activestate",
"python",
"python_2.x"
] |
stackoverflow_0000081191_activestate_python_python_2.x.txt
|
Q:
Does an IIS worker process clear session variables when it recycles?
We're writing an asp.net web app on IIS 6 and are planning on storing our user login variables in a session. Will this be removed when the worker process recycles?
A:
If session state is stored in-proc then yes, a worker process recycle will remove it. Use the out-of-proc (state server) model or SQL Server session storage if you want the values to survive a recycle.
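One practical consequence of going out-of-proc: whatever you put in Session is serialized, so the login object has to allow it. A minimal sketch, with an illustrative class and keys:
using System;

// With out-of-proc session state (StateServer or SQLServer mode in web.config),
// objects placed in Session are serialized, so mark the login info accordingly.
[Serializable]
public class LoginInfo
{
    public string UserName { get; set; }
    public DateTime LoggedInAt { get; set; }
}

// Hypothetical usage from a login page:
//   Session["Login"] = new LoginInfo { UserName = user, LoggedInAt = DateTime.UtcNow };
//   var login = (LoginInfo)Session["Login"];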
A:
If you are using the default in-memory session management, the session variables will be cleared when worker process recycles
A:
yes, unless you are using out of process session state.
|
Does an IIS worker process clear session variables when it recycles?
|
We're writing an asp.net web app on IIS 6 and are planning on storing our user login variables in a session. Will this be removed when the worker process recycles?
|
[
"If session is stored in-proc then YES worker process recycle will remove it. Use Out-of-proc model or sql server to store session value if you want to keep it stored.\n",
"If you are using the default in-memory session management, the session variables will be cleared when worker process recycles\n",
"yes, unless you are using out of process session state.\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"iis",
"session_variables",
"worker_process"
] |
stackoverflow_0000084276_iis_session_variables_worker_process.txt
|
Q:
Is there a single resource which explains windows memory thoroughly?
Seriously, I've trawled MSDN and only got half answers - what do the columns on the Task Manager mean? Why can't I calculate the VM Usage by enumerating threads, modules, heaps &c.? How can I be sure I am accurately reporting to clients of my memory manager how much address space is left? Are there myriad collisions in the memory glossary namespace?
An online resource would be most useful in the short term, although books would be acceptable in the medium term.
A:
Try the book "Windows Internals" by Mark Russinovich and I think some other guy too. It's pretty good on getting down to the nitty gritty.
A:
Mark Russinovich has written the excellent book Windows Internals. A new edition that covers the Vista and Server 2008 operating systems is currently in the works with David Solomon, so you may want to pre-order that if your questions are about the new Windows operating systems instead of the old ones.
A:
Here is a quick article on Windows Memory Management, which goes into sufficient depth to interpret what you're actually seeing in Task Manager or Process Explorer.
|
Is there a single resource which explains windows memory thoroughly?
|
Seriously, I've trawled MSDN and only got half answers - what do the columns on the Task Manager mean? Why can't I calculate the VM Usage by enumerating threads, modules, heaps &c.? How can I be sure I am accurately reporting to clients of my memory manager how much address space is left? Are there myriad collisions in the memory glossary namespace?
An online resource would be most useful in the short term, although books would be acceptable in the medium term.
|
[
"Try the book \"Windows Internals\" by Mark Russinovich and I think some other guy too. It's pretty good on getting down to the nitty gritty.\n",
"Mark Russinovich has written the excellent book Windows Internals. A new edition that covers the Vista and Server 2008 operating systems is currently in the works with David Solomon, so you may want to pre-order that if your questions are about the new Windows operating systems instead of the old ones.\n",
"Here is a quick article on Windows Memory Management, which goes into sufficient depth to interpret what you're actually seeing in Task Manager or Process Explorer.\n"
] |
[
4,
2,
1
] |
[] |
[] |
[
"memory",
"windows"
] |
stackoverflow_0000084234_memory_windows.txt
|
Q:
Why isn't the 'len' function inherited by dictionaries and lists in Python
example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus, I keep trying the wrong solution since it appears to be the logical one to me.
A:
Guido's explanation is here:
First of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:
(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.
(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.
Saying the same thing in another way, I see ‘len‘ as a built-in operation. I’d hate to lose that. /…/
A:
The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.
The idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make x + y work for your own class, you write a __add__ method. To make sure that int(spam) properly converts your custom class, write a __int__ method. To make sure that len(foo) does something sensible, write a __len__ method.
This is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling spam.to_i directly instead of saying int(spam).
You're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, len(silly_walks) isn't any more onerous than silly_walks.len(), and Guido has said that he actually prefers it (http://mail.python.org/pipermail/python-3000/2006-November/004643.html).
A:
It just isn't.
You can, however, do:
>>> [1,2,3].__len__()
3
Adding a __len__() method to a class is what makes the len() magic work.
A:
This way fits in better with the rest of the language. The convention in python is that you add __foo__ special methods to objects to make them have certain capabilities (rather than e.g. deriving from a specific base class). For example, an object is
callable if it has a __call__ method
iterable if it has an __iter__ method,
supports access with [] if it has __getitem__ and __setitem__.
...
One of these special methods is __len__ which makes it have a length accessible with len().
A:
Maybe you're looking for __len__. If that method exists, then len(a) calls it:
>>> class Spam:
... def __len__(self): return 3
...
>>> s = Spam()
>>> len(s)
3
A:
Well, there actually is a length method, it is just hidden:
>>> a_list = [1, 2, 3]
>>> a_list.__len__()
3
The len() built-in function appears to be simply a wrapper for a call to the object's hidden __len__() method.
Not sure why they made the decision to implement things this way though.
A:
There is some good info below on why certain things are functions and others are methods. It does indeed cause some inconsistencies in the language.
http://mail.python.org/pipermail/python-dev/2008-January/076612.html
|
Why isn't the 'len' function inherited by dictionaries and lists in Python
|
example:
a_list = [1, 2, 3]
a_list.len() # doesn't work
len(a_list) # works
Python being (very) object oriented, I don't understand why the 'len' function isn't inherited by the object.
Plus I keep trying the wrong solution since it appears as the logical one to me
|
[
"Guido's explanation is here:\n\nFirst of all, I chose len(x) over x.len() for HCI reasons (def __len__() came much later). There are two intertwined reasons actually, both HCI:\n(a) For some operations, prefix notation just reads better than postfix — prefix (and infix!) operations have a long tradition in mathematics which likes notations where the visuals help the mathematician thinking about a problem. Compare the easy with which we rewrite a formula like x*(a+b) into x*a + x*b to the clumsiness of doing the same thing using a raw OO notation.\n(b) When I read code that says len(x) I know that it is asking for the length of something. This tells me two things: the result is an integer, and the argument is some kind of container. To the contrary, when I read x.len(), I have to already know that x is some kind of container implementing an interface or inheriting from a class that has a standard len(). Witness the confusion we occasionally have when a class that is not implementing a mapping has a get() or keys() method, or something that isn’t a file has a write() method.\nSaying the same thing in another way, I see ‘len‘ as a built-in operation. I’d hate to lose that. /…/\n\n",
"The short answer: 1) backwards compatibility and 2) there's not enough of a difference for it to really matter. For a more detailed explanation, read on.\nThe idiomatic Python approach to such operations is special methods which aren't intended to be called directly. For example, to make x + y work for your own class, you write a __add__ method. To make sure that int(spam) properly converts your custom class, write a __int__ method. To make sure that len(foo) does something sensible, write a __len__ method.\nThis is how things have always been with Python, and I think it makes a lot of sense for some things. In particular, this seems like a sensible way to implement operator overloading. As for the rest, different languages disagree; in Ruby you'd convert something to an integer by calling spam.to_i directly instead of saying int(spam).\nYou're right that Python is an extremely object-oriented language and that having to call an external function on an object to get its length seems odd. On the other hand, len(silly_walks) isn't any more onerous than silly_walks.len(), and Guido has said that he actually prefers it (http://mail.python.org/pipermail/python-3000/2006-November/004643.html).\n",
"It just isn't.\nYou can, however, do:\n>>> [1,2,3].__len__()\n\n3\n\nAdding a __len__() method to a class is what makes the len() magic work.\n",
"This way fits in better with the rest of the language. The convention in python is that you add __foo__ special methods to objects to make them have certain capabilities (rather than e.g. deriving from a specific base class). For example, an object is \n\ncallable if it has a __call__ method \niterable if it has an __iter__ method, \nsupports access with [] if it has __getitem__ and __setitem__. \n...\n\nOne of these special methods is __len__ which makes it have a length accessible with len().\n",
"Maybe you're looking for __len__. If that method exists, then len(a) calls it:\n>>> class Spam:\n... def __len__(self): return 3\n... \n>>> s = Spam()\n>>> len(s)\n3\n\n",
"Well, there actually is a length method, it is just hidden:\n>>> a_list = [1, 2, 3]\n>>> a_list.__len__()\n3\n\nThe len() built-in function appears to be simply a wrapper for a call to the hidden len() method of the object.\nNot sure why they made the decision to implement things this way though.\n",
"there is some good info below on why certain things are functions and other are methods. It does indeed cause some inconsistencies in the language.\nhttp://mail.python.org/pipermail/python-dev/2008-January/076612.html\n"
] |
[
45,
13,
11,
6,
2,
2,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0000083983_python.txt
|
Q:
How to force browser to reload updated XML file?
I am developing a website that relies heavily on XML data. The web site has an interface where users can update data. The data provided by the user is written to the respective XML file. However, the changes are not reflected until after 1 or 2 minutes.
Does anyone know how to force the browser to load the latest XML file immediately?
A:
This isn't a browser issue, it's an HTTP issue. You appear to be serving dynamic files without specifying that they shouldn't be cached. Use the Cache-Control: no-cache HTTP header to indicate this. Pragma: no-cache is the ancient HTTP 1.0 way; you can include it, but alone it is unlikely to be 100% effective.
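For example, if the XML happens to be served from an ASP.NET page or handler (an assumption; the question doesn't say what serves it), a minimal sketch of setting those headers looks like this:
// Sketch only: 'Response' is the standard System.Web.HttpResponse of a page/handler.
Response.ContentType = "text/xml";
Response.Cache.SetCacheability(HttpCacheability.NoCache); // emits Cache-Control: no-cache
Response.AppendHeader("Pragma", "no-cache");              // legacy HTTP 1.0 hint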
A:
You can add a random (or sequential) number to the url that you change with every update.
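A minimal sketch of that idea, assuming the page that links to the XML is generated in C# (the file name is just illustrative):
// Any value that changes per update works; ticks are an easy stand-in.
string xmlUrl = "data.xml?v=" + DateTime.UtcNow.Ticks;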
A:
Use "Pragma: no-cache" header in HTTP response.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9
|
How to force browser to reload updated XML file?
|
I am developing a website that relies much on XML data. The web site has an interface where user can update data. The data provided by user will be updated to the respective XML file. However, the changes is not reflected until after 1 or 2 minutes.
Anyone knows how to force the browser to load the latest XML file immediately?
|
[
"This isn't a browser issue, it's an HTTP issue. You appear to be serving dynamic files without specifying that they shouldn't be cached. Use the Cache-Control: no-cache HTTP header to indicate this. Pragma: no-cache is the ancient HTTP 1.0 way, you can include it, but alone it is unlikely to be 100% effective.\n",
"You can add a random (or sequential) number to the url that you change with every update.\n",
"Use \"Pragma: no-cache\" header in HTTP response.\nhttp://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"xml"
] |
stackoverflow_0000084243_xml.txt
|
Q:
sizeof(bitfield_type) legal in ANSI C?
struct foo { unsigned x:1; } f;
printf("%d\n", (int)sizeof(f.x = 1));
What is the expected output and why? Taking the size of a bitfield lvalue directly isn't allowed. But by using the assignment operator, it seems we can still take the size of a bitfield type.
What is the "size of a bitfield in bytes"? Is it the size of the storage unit holding the bitfield? Is it the number of bits taken up by the bitfield rounded up to the nearest byte count?
Or is the construct undefined behavior because there is nothing in the standard that answers the above questions? Multiple compilers on the same platform are giving me inconsistent results.
A:
You are right, integer promotions aren't applied to the operand of sizeof:
The integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses.
The real question is whether bitfields have their own types.
Joseph Myers told me:
The conclusion
from C90 DRs was that bit-fields have their own types, and from C99 DRs
was to leave whether they have their own types implementation-defined, and
GCC follows the C90 DRs and so the assignment has type int:1 and is not
promoted as an operand of sizeof.
This was discussed in Defect Report #315.
To summarize: your code is legal but implementation-defined.
A:
The C99 Standard (PDF of latest draft) says in section 6.5.3.4 about sizeof constraints:
The sizeof operator shall not be applied to an expression that has function type or an
incomplete type, to the parenthesized name of such a type, or to an expression that
designates a bit-field member.
This means that applying sizeof to an assignment expression is allowed.
6.5.16.3 says:
The type of an assignment expression is the type of the left operand ...
6.3.1.1.2 says regarding integer promotions:
The following may be used in an expression wherever an int or unsigned int may be used:
...
A bit-field of type _Bool, int, signed int, or unsigned int.
If an int can represent all values of the original type,
the value is converted to an int;
otherwise, it is converted to an unsigned int.
So, your test program should output the size of an int, i.e.,
sizeof(int).
Is there any compiler that does not do this?
A:
Trying to get the size of a bitfield isn't legal, as you have seen. (sizeof returns the size in bytes, which wouldn't make much sense for a bitfield.)
sizeof(f.x = 1) will return the size of the type of the expression. Since C doesn't have a real "bitfield type", the expression (here: an assignment expression) usually gets the type of the bit field's base type, in your example unsigned int, but it is possible for a compiler to use a smaller type internally (in this case probably unsigned char because it's big enough for one bit).
A:
Wouldn't
(f.x = 1)
be an expression evaluating to true (technically it evaluates to the result of the assignment, which is 1/true in this case), and thus,
sizeof( f.x = 1)
is asking for the size of true in terms of how many chars it would take to store it?
I should also add that the Wikipedia article on sizeof is nice. In particular, they say "sizeof is a compile-time operator that returns the size, in multiples of the size of char, of the variable or parenthesized type-specifier that it precedes."
The article also explains that sizeof works on expressions.
A:
sizeof( f.x = 1)
returns 1 as its answer. The sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes.
A:
No, you must be thinking of the == operator, which yields a "boolean" expression of type int in C and indeed bool in C++.
I think the expression will convert the value 1 to the corresponding bitfield type and assign it to the bitfield. The result should also be a bitfield type because there are no hidden promotions or conversions that I can see.
Thus we are effectively getting access to the bitfield type.
No compiler diagnostic is required because "f.x = 1" isn't an lvalue, i.e. it does not designate the bitfield directly. It's just a value of type "unsigned :1".
I'm specifically using "f.x = 1" because "sizeof f.x" takes the size of a bitfield lvalue, which is clearly not allowed.
A:
The
(f.x = 1)
is not an expression, it is an assignment and thus returns the assigned value. In this case, the size of that value depends on the variable it has been assigned to.
unsigned x:1
has 1 bit and its sizeof returns 1 byte (8-bit alignment).
If you would use
unsigned x:12
then the sizeof(f.x = 1) would return 2 bytes (again because of the 8-bit alignment).
A:
The sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes.
Note that I'm NOT taking sizeof(1), which is effectively sizeof(int). Look close, I'm taking sizeof(f.x = 1), which should effectively be sizeof(bitfield_type).
I'd like to see a reference to something that tells me whether the construct is legal. As an added bonus, it would be nice if it told me what sort of result is expected.
gcc certainly disagrees with the assertion that sizeof(bitfield_type) should be the same as sizeof(int), but only on some platforms.
A:
Trying to get the size of a bitfield isn't legal, as you have seen. (sizeof returns the size in bytes, which wouldn't make much sense for a bitfield.)
So are you stating that the behavior is undefined, i.e. it has the same degree of legality as "*(int *)0 = 0;", and compilers can choose to fail to handle this sensibly?
That's what I'm trying to find out. Do you assume that it's undefined by omission, or is there something that explicitly declares it as illegal?
A:
is not an expression, it is an assignment and thus returns the assigned value. In this case, the size of that value depends on the variable, it has been assigned to.
First, it IS an expression containing the assignment operator.
Second, I'm quite aware of what's happening in my example :)
then the sizeof(f.x = 1) would return 2 byte (again because of the 8 bit alignment)
Where did you get this? Is this what happens on a particular compiler that you have tried, or are these semantics stated in the standard? Because I haven't found any such statements. I want to know whether the construct is guaranteed to work at all.
A:
in this second example, if you would define your struct as a
struct foo { unsigned x:12} f;
and then write a value like 1 into f.x, it uses 2 bytes because of the alignment. If you do an assignment like
f.x = 1;
it returns the assigned value. This is quite similar to
int a, b, c;
a = b = c = 1;
where the assignment is evaluated from right to left. c = 1 assigns 1 to the variable c, and this assignment returns the assigned value and assigns it to b (and so forth) until 1 is assigned to a.
it is equal to
a = ( b = ( c = 1 ) )
In your case, the sizeof gets the size of your assignment, which is NOT a bitfield, but the variable assigned to it.
sizeof ( f.x = 1)
does not return the bitfield's size, but the variable assignment, which is a 12-bit representation of the 1 (in my case), and therefore sizeof() returns 2 bytes (because of the 8-bit alignment).
A:
Look, I understand full well what I'm doing with the assignment trick.
You are telling me that the size of a bitfield type is rounded up to the closest byte count, which is one option I listed in the initial question. But you didn't back it up with references.
In particular, I have tried various compilers which give me sizeof(int) instead of sizeof(char) EVEN if I apply this to a bitfield that only has a single bit.
I wouldn't even mind if multiple compilers randomly get to choose their own interpretation of this construct. Certainly bitfield storage allocation is quite implementation-defined.
However, I really do want to know whether the construct is GUARANTEED to work and yield SOME value.
A:
CL, I've seen your citations before, and agree they're totally relevant, but even after having read them I wasn't sure whether the code is defined.
6.3.1.1.2 says regarding integer promotions:
Yes, but integer promotion rules only apply if a promotion is in fact carried out. I do not think that my example requires a promotion. Likewise if you do
char ch;
sizeof ch;
... then ch also isn't promoted.
I think we are dealing directly with the bitfield type here.
I've also seen gcc output 1 while many other compilers (and even other gcc versions) don't. This doesn't convince me that the code is illegal because the size could just as well be implementation-defined enough to make the result inconsistent across multiple compilers.
However, I'm confused as to whether the code may be undefined because nothing in the standard seems to state how the sizeof bitfield case is handled.
|
sizeof(bitfield_type) legal in ANSI C?
|
struct foo { unsigned x:1; } f;
printf("%d\n", (int)sizeof(f.x = 1));
What is the expected output and why? Taking the size of a bitfield lvalue directly isn't allowed. But by using the assignment operator, it seems we can still take the size of a bitfield type.
What is the "size of a bitfield in bytes"? Is it the size of the storage unit holding the bitfield? Is it the number of bits taken up by the bf rounded up to the nearest byte count?
Or is the construct undefined behavior because there is nothing in the standard that answers the above questions? Multiple compilers on the same platform are giving me inconsistent results.
|
[
"You are right, integer promotions aren't applied to the operand of sizeof:\n\nThe integer promotions are applied only: as part of the usual arithmetic conversions, to certain argument expressions, to the operands of the unary +, -, and ~ operators, and to both operands of the shift operators, as specified by their respective subclauses.\n\nThe real question is whether bitfields have their own types.\nJoseph Myers told me:\n\nThe conclusion\n from C90 DRs was that bit-fields have their own types, and from C99 DRs \n was to leave whether they have their own types implementation-defined, and \n GCC follows the C90 DRs and so the assignment has type int:1 and is not \n promoted as an operand of sizeof.\n\nThis was discussed in Defect Report #315.\nTo summarize: your code is legal but implementation-defined.\n",
"The C99 Standard (PDF of latest draft) says in section 6.5.3.4 about sizeof constraints:\n\nThe sizeof operator shall not be applied to an expression that has function type or an\n incomplete type, to the parenthesized name of such a type, or to an expression that\n designates a bit-field member.\n\nThis means that applying sizeof to an assignment expression is allowed.\n6.5.16.3 says:\n\nThe type of an assignment expression is the type of the left operand ...\n\n6.3.1.1.2 says regarding integer promotions:\n\nThe following may be used in an expression wherever an int or unsigned int may be used:\n\n...\nA bit-field of type _Bool, int, signed int, or unsigned int.\n\nIf an int can represent all values of the original type,\n the value is converted to an int;\n otherwise, it is converted to an unsigned int.\n\nSo, your test program should output the size of an int, i.e.,\nsizeof(int).\nIs there any compiler that does not to this?\n",
"Trying to get the size of a bitfield isn't legal, as you have seen. (sizeof returns the size in bytes, which wouldn't make much sense for a bitfield.)\nsizeof(f.x = 1) will return the size of the type of the expression. Since C doesn't have a real \"bitfield type\", the expression (here: an assignment expression) usually gets the type of the bit field's base type, in your example unsigned int, but it is possible for a compiler to use a smaller type internally (in this case probably unsigned char because it's big enough for one bit).\n",
"Wouldn't\n(f.x = 1)\n\nbe an expression evaluating to true (technically in evaluates to result of the assignment, which is 1/true in this case), and thus,\nsizeof( f.x = 1)\n\nis asking for the size of true in terms of how many chars it would take to store it?\nI should also add that the Wikipedia article on sizeof is nice. In particular, they say \"sizeof is a compile-time operator that returns the size, in multiples of the size of char, of the variable or parenthesized type-specifier that it precedes.\"\nThe article also explains that sizeof works on expressions.\n",
"sizeof( f.x = 1)\n\nreturns 1 as its answer. The sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes.\n",
"No, you must be thinking of the == operator, which yields a \"boolean\" expression of type int in C and indeed bool in C++.\nI think the expression will convert the value 1 to the correspondin bitfield type and assign it to the bitfield. The result should also be a bitfield type because there are no hidden promotions or conversions that I can see.\nThus we are effectively getting access to the bitfield type.\nNo compiler diagnostic is required because \"f.x = 1\" isn't an lvalue, i.e. it does not designate the bitfield directly. It's just a value of type \"unsigned :1\".\nI'm specifically using \"f.x = 1\" because \"sizeof f.x\" takes the size of a bitfield lvalue, which is clearly not allowed.\n",
"The \n(f.x = 1)\n\nis not an expression, it is an assignment and thus returns the assigned value. In this case, the size of that value depends on the variable, it has been assigned to. \nunsigned x:1\n\nhas 1 Bit and its sizeof returns 1 byte (8 bit alignment)\nIf you would use \nunsigned x:12\n\nthen the sizeof(f.x = 1) would return 2 byte (again because of the 8 bit alignment)\n",
"\nThe sizeof(1) is presumably the size of an integer on the platform you are compiling on, probably either 4 or 8 bytes.\n\nNote that I'm NOT taking sizeof(1), which is effectively sizeof(int). Look close, I'm taking sizeof(f.x = 1), which should effectively be sizeof(bitfield_type).\nI'd like to see a reference to something that tells me whether the construct is legal. As an added bonus, it would be nice if it told me what sort of result is expected.\ngcc certainly disagrees with the assertion that sizeof(bitfield_type) should be the same as sizeof(int), but only on some platforms.\n",
"\nTrying to get the size of a bitfield isn't legal, as you have seen. (sizeof returns the size in bytes, which wouldn't make much sense for a bitfield.)\n\nSo are you stating that the behavior is undefined, i.e. it has the same degree of legality as \"*(int *)0 = 0;\", and compilers can choose to fail to handle this sensibly?\nThat's what I'm trying to find out. Do you assume that it's undefined by omission, or is there something that explicitly declares it as illegal?\n",
"\nis not an expression, it is an assignment and thus returns the assigned value. In this case, the size of that value depends on the variable, it has been assigned to. \n\nFirst, it IS an expression containing the assignment operator.\nSecond, I'm quite aware of what's happening in my example :)\n\nthen the sizeof(f.x = 1) would return 2 byte (again because of the 8 bit alignment)\n\nWhere did you get this? Is this what happens on a particular compiler that you have tried, or are these semantics stated in the standard? Because I haven't found any such statements. I want to know whether the construct is guaranteed to work at all.\n",
"in this second example, if you would define your struct as a\nstruct foo { unsigned x:12} f;\n\nand then write a value like 1 into f.x - it uses 2 Bytes because of the alignment. If you do an assignment like\nf.x = 1;\n\nand this returns the assigned value. This is quite similar to \nint a, b, c;\na = b = c = 1;\n\nwhere the asignment is evaluated from right to left. c = 1 assigns 1 to the variable c and this asignment returns the assigned value and assigns it to b (and so forth) until 1 is assigned to a\nit is equal to\na = ( b = ( c = 1 ) )\n\nin your case, the sizeof gets the size of your asignment, wich is NOT a bitfield, but the variable assigned to it.\nsizeof ( f.x = 1)\n\ndoes not return the bitfields size, but the variable assigment which is a 12 bit representation of the 1 (in my case) and therefore sizeof() returns 2 byte (because of the 8bit aligment)\n",
"Look, I understand full well what I'm doing with the assignment trick.\nYou are telling me that the size of a bitfield type is rounded up to the cloest byte count, which is one option I listed in the initial question. But you didn't back it up with references.\nIn particular, I have tried various compilers which give me sizeof(int) instead of sizeof(char) EVEN if I apply this to a bitfield with only has a single bit.\nI wouldn't even mind if multiple compilers randomly get to choose their own interpretation of this construct. Certainly bitfield storage allocation is quite implementation-defined.\nHowever, I really do want to know whether the construct is GUARANTEED to work and yield SOME value.\n",
"CL, I've seen your citations before, and agree they're totally relevant, but even after having read them I wasn't sure whether the code is defined.\n\n6.3.1.1.2 says regarding integer promotions:\n\nYes, but integer promotion rules only apply if a promotion is in fact carried out. I do not think that my example requires a promotion. Likewise if you do\nchar ch;\nsizeof ch;\n\n... then ch also isn't promoted.\nI think we are dealing directly with the bitfield type here.\nI've also seen gcc output 1 while many other compilers (and even other gcc versions) don't. This doesn't convince me that the code is illegal because the size could just as well be implementation-defined enough to make the result inconsistent across multiple compilers.\nHowever, I'm confused as to whether the code may be undefined because nothing in the standard seems to state how the sizeof bitfield case is handled.\n"
] |
[
4,
3,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0000081294_c.txt
|
Q:
How do I set up a custom build step in Visual Studio 6?
Unfortunately it looks like for various reasons I'm going to have to use Visual Studio 6 instead of a newer version of VS.
It's been a long time since I've used it. I'm looking through its menus and don't see any obvious way to set up any custom build steps (pre-build, post-build, pre-link... anything would help actually).
Can anyone give me instructions on how to set up steps like this?
A:
Open your project, then open the Project Settings screen (Project → Settings or ALT-F7). Alternatively, right click on a file in the FileView and select Settings.
From the Project Settings screen, go to the General tab and check "Always use custom build step". This means that the file you just chose will be an input file for a custom build step. From the "Custom Build" tab you can then give the commands to run and specify what files will be generated.
For pre-link, post-build and such, select an executable (or library) from the Project Settings screen. Then use the little arrow button to scroll to the rightmost tabs. From there you'll find the Pre-link and Post-build steps.
It's quite simple, really, I'm sure this is enough to get you started.
|
How do I set up a custom build step in Visual Studio 6?
|
Unfortunately it looks like for various reasons I'm going to have to use Visual Studio 6 instead of a newer version of VS.
It's been a long time since I've used it. I'm looking through its menus and don't see any obvious way to set up any custom build steps (pre-build, post-build, pre-link... anything would help actually).
Can anyone give me instructions on how to set up steps like this?
|
[
"Open your project, then open the Project Settings screen (Project → Settings or ALT-F7). Alternatively, right click on a file in the FileView and select Settings.\nFrom the Project Settings screen, go to the General tab and check \"Always use custom build step\". This means that the file you just chose will be an input file for a custom build step. From the \"Custom Build\" tab you can then give the commands to run and specify what files will be generated. \nFor pre-link, post-build and such, select an executable (or library) from the Project Settings screen. Then use the little arrow button to scroll to the rightmost tabs. From there you'll find the Pre-link and Post-build steps.\nIt's quite simple, really, I'm sure this is enough to get you started.\n"
] |
[
5
] |
[] |
[] |
[
"visual_c++_6"
] |
stackoverflow_0000084232_visual_c++_6.txt
|
Q:
Fade splash screen in and out
In a C# Windows Forms application, I have a splash screen with some multi-threaded processes happening in the background. What I would like to do is when I display the splash screen initially, I would like to have it appear to "fade in". And then, once all the processes finish, I would like it to appear as though the splash screen is "fading out". I'm using C# and .NET 2.0. Thanks.
A:
You could use a timer to modify the Form.Opacity level.
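For instance, a minimal fade-in sketch along those lines, assuming a System.Windows.Forms.Timer named fadeInTimer is started when the splash form is shown with Opacity initially 0.0:
private void fadeInTimer_Tick(object sender, EventArgs e)
{
    this.Opacity += 0.05;      // step the opacity up each tick
    if (this.Opacity >= 1.0)
    {
        fadeInTimer.Stop();    // fully visible; stop fading in
    }
}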
A:
When using the Opacity property, you have to remember that it's of type double, where 1.0 is complete opacity and 0.0 is complete transparency.
private void fadeTimer_Tick(object sender, EventArgs e)
{
this.Opacity -= 0.01;
if (this.Opacity <= 0)
{
this.Close();
}
}
A:
You can use the Opacity property for the form to alter the fade (between 0.0 and 1.0).
A:
while (this.Opacity > 0)
{
    this.Opacity -= 0.05;
    Thread.Sleep(50); // This controls the fade speed and lets the form redraw
}
|
Fade splash screen in and out
|
In a C# windows forms application. I have a splash screen with some multi-threaded processes happening in the background. What I would like to do is when I display the splash screen initially, I would like to have it appear to "fade in". And then, once all the processes finish, I would like it to appear as though the splash screen is "fading out". I'm using C# and .NET 2.0. Thanks.
|
[
"You could use a timer to modify the Form.Opacity level.\n",
"When using Opacity property have to remember that its of type double, where 1.0 is complete opacity, and 0.0 is completely transparency.\n private void fadeTimer_Tick(object sender, EventArgs e)\n {\n this.Opacity -= 0.01;\n\n if (this.Opacity <= 0)\n {\n this.Close();\n } \n }\n\n",
"You can use the Opacity property for the form to alter the fade (between 0.0 and 1.0).\n",
"While(this.Opacity !=0)\n{\n this.Opacity -= 0.05;\n Thread.Sleep(50);//This is for the speed of the opacity... and will let the form redraw\n}\n\n"
] |
[
9,
6,
3,
3
] |
[] |
[] |
[
".net_2.0",
"c#",
"splash_screen",
"winforms"
] |
stackoverflow_0000084211_.net_2.0_c#_splash_screen_winforms.txt
|
Q:
NHibernate, Sum Query
If I have a simple named query defined that performs a count function on one column:
<query name="Activity.GetAllMiles">
<![CDATA[
select sum(Distance) from Activity
]]>
</query>
How do I get the result of a sum, or of any query that doesn't return one of the mapped entities, with NHibernate using either IQuery or ICriteria?
Here is my attempt (I'm unable to test it right now); would this work?
public decimal Find(String namedQuery)
{
using (ISession session = NHibernateHelper.OpenSession())
{
IQuery query = session.GetNamedQuery(namedQuery);
return query.UniqueResult<decimal>();
}
}
A:
As an indirect answer to your question, here is how I do it without a named query.
var session = GetSession();
var criteria = session.CreateCriteria(typeof(Order))
.Add(Restrictions.Eq("Product", product))
.SetProjection(Projections.CountDistinct("Price"));
return (int) criteria.UniqueResult();
A:
Sorry! I actually wanted a sum, not a count, which explains a lot. I've edited the post accordingly.
This works fine:
var criteria = session.CreateCriteria(typeof(Activity))
.SetProjection(Projections.Sum("Distance"));
return (double)criteria.UniqueResult();
The named query approach still dies, "Errors in named queries: {Activity.GetAllMiles}":
using (ISession session = NHibernateHelper.OpenSession())
{
IQuery query = session.GetNamedQuery("Activity.GetAllMiles");
return query.UniqueResult<double>();
}
A:
I think in your original example, you just need to call query.UniqueResult(); the count will return an integer.
|
NHibernate, Sum Query
|
If i have a simple named query defined, the preforms a count function, on one column:
<query name="Activity.GetAllMiles">
<![CDATA[
select sum(Distance) from Activity
]]>
</query>
How do I get the result of a sum or any query that dont return of one the mapped entities, with NHibernate using Either IQuery or ICriteria?
Here is my attempt (im unable to test it right now), would this work?
public decimal Find(String namedQuery)
{
using (ISession session = NHibernateHelper.OpenSession())
{
IQuery query = session.GetNamedQuery(namedQuery);
return query.UniqueResult<decimal>();
}
}
|
[
"As an indirect answer to your question, here is how I do it without a named query.\nvar session = GetSession();\nvar criteria = session.CreateCriteria(typeof(Order))\n .Add(Restrictions.Eq(\"Product\", product))\n .SetProjection(Projections.CountDistinct(\"Price\"));\nreturn (int) criteria.UniqueResult();\n\n",
"Sorry! I actually wanted a sum, not a count, which explains alot. Iv edited the post accordingly\nThis works fine:\nvar criteria = session.CreateCriteria(typeof(Activity))\n .SetProjection(Projections.Sum(\"Distance\"));\n return (double)criteria.UniqueResult();\n\nThe named query approach still dies, \"Errors in named queries: {Activity.GetAllMiles}\":\n using (ISession session = NHibernateHelper.OpenSession())\n {\n IQuery query = session.GetNamedQuery(\"Activity.GetAllMiles\");\n\n\n return query.UniqueResult<double>();\n }\n\n",
"I think in your original example, you just need to to query.UniqueResult(); the count will return an integer.\n"
] |
[
4,
2,
0
] |
[] |
[] |
[
"hql",
"nhibernate",
"querying"
] |
stackoverflow_0000065060_hql_nhibernate_querying.txt
|
Q:
How can I access App Engine through a Corporate proxy?
I have a corporate proxy that supports https but not HTTP CONNECT (even after authentication). It just gives 403 Forbidden in response to anything but HTTP or HTTPS URLs. It uses HTTP authentication, not NTLM. It is well documented that urllib2 does not work with https through a proxy. App Engine tries to connect to an https URL using urllib2 to update the app.
On *nix, urllib2 expects proxies to be set using env variables.
export http_proxy="http://mycorporateproxy:8080"
export https_proxy="https://mycorporateproxy:8080"
This is cited as a workaround: http://code.activestate.com/recipes/456195/. Also see http://code.google.com/p/googleappengine/issues/detail?id=126.
None of these fixes have worked for me. They seem to rely on the proxy server supporting HTTP CONNECT. Does anyone have any other workarounds? I'm sure I am not the only one behind a restrictive corporate proxy.
A:
Do you mean it uses HTTP basic-auth before allowing proxying, and does it then allow 'CONNECT'?
If so, you should be able to tunnel over it using http-tunnel or proxytunnel.
|
How can I access App Engine through a Corporate proxy?
|
I have corporate proxy that supports https but not HTTP CONNECT (even after authentication). It just gives 403 Forbidden in response anything but HTTP or HTTPS URLS. It uses HTTP authenication, not NTLM. It is well documented the urllib2 does not work with https thru a proxy. App Engine trys to connect to a https URL using urllib2 to update the app.
On *nix, urllib2 expects proxies to set using env variables.
export http_proxy="http://mycorporateproxy:8080"
export https_proxy="https://mycorporateproxy:8080"
This is sited as a work around: http://code.activestate.com/recipes/456195/. Also see http://code.google.com/p/googleappengine/issues/detail?id=126.
None of these fixes have worked for me. They seem to rely on the proxy server supporting HTTP CONNECT. Does anyone have any other work arounds? I sure I am not the only
one behind a restrictive corporate proxy.
|
[
"Do you mean it uses http basic-auth before allowing proxying, and does it then allow 'connect'.\nThen you should be able to tunnel over it using http-tunnel or proxytunnel\n"
] |
[
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000064362_google_app_engine_python.txt
|
Q:
Cross-Database information_schema Joins in SQL Server
I am attempting to provide a general solution for the migration of data from one schema version to another. A problem arises when the column data type from the source schema does not match that of the destination. I would like to create a query that will perform a preliminary compare on the columns data types to return which columns need to be fixed before migration is possible.
My current approach is to return the table and column names from information_schema.columns where DATA_TYPE's between catalogs do not match. However, querying information_schema directly will only return results from the catalog of the connection.
Has anyone written a query like this?
A:
I do this by querying the system tables directly. Look into the syscolumns and sysobjects tables. You can also join across linked servers.
select t1.name as tname,c1.name as cname
from adventureworks.dbo.syscolumns c1
join adventureworks.dbo.sysobjects t1 on c1.id = t1.id
where t1.type = 'U'
order by t1.name,c1.colorder
A:
I have always been in the fortunate position to have Red Gate Schema Compare, which I think would do what you ask. Cheap at twice the price!
|
Cross-Database information_schema Joins in SQL Server
|
I am attempting to provide a general solution for the migration of data from one schema version to another. A problem arises when the column data type from the source schema does not match that of the destination. I would like to create a query that will perform a preliminary compare on the columns data types to return which columns need to be fixed before migration is possible.
My current approach is to return the table and column names from information_schema.columns where DATA_TYPE's between catalogs do not match. However, querying information_schema directly will only return results from the catalog of the connection.
Has anyone written a query like this?
|
[
"I do this by querying the system tables directly. Look into the syscolumns and sysobjects tables. You can also join across linked servers too\nselect t1.name as tname,c1.name as cname\nfrom adventureworks.dbo.syscolumns c1\njoin adventureworks.dbo.sysobjects t1 on c1.id = t1.id \nwhere t1.type = 'U' \norder by t1.name,c1.colorder\n\n",
"I have always been in the fortunate position to have Red Gate Schema compare which i think would do what you ask. Cheap at twice the price!\n"
] |
[
3,
2
] |
[] |
[] |
[
"data_migration",
"sql",
"sql_server"
] |
stackoverflow_0000084165_data_migration_sql_sql_server.txt
|
Q:
Create an index on a MySQL column based on the length of its contents?
How do I create an index on a column in MySQL v5.0 (MyISAM engine) based upon the length of its value? It's a TEXT data type of up to 7000 characters; do I have to add another column with the length of the first column?
A:
Yes, as MySQL doesn't support function-based indexes (like ADD INDEX myIndex(LENGTH(text))), you'll need a new int column and define a trigger to auto-update it after inserts and updates.
A:
Sounds like a good approach to me (sorry don't know mysql, but in oracle you could set a trigger so that when your main column is updated the "length" column gets automatically updated)
|
Create an index on a MySQL column based on the length its contents?
|
How do I create an index on a column in MySQL v 5.0 (myisam db engine) based upon the length of its value its a TEXT data type up to 7000 characters, do I have to add another column with the length of the first column?
|
[
"Yes, as MySQL doesn't support function-based indexes (like ADD INDEX myIndex(LENGTH(text))), you'll need a new int column and define a trigger to auto-update it after inserts and updates.\n",
"Sounds like a good approach to me (sorry don't know mysql, but in oracle you could set a trigger so that when your main column is updated the \"length\" column gets automatically updated)\n"
] |
[
2,
0
] |
[] |
[] |
[
"indexing",
"myisam",
"mysql"
] |
stackoverflow_0000084346_indexing_myisam_mysql.txt
|
Q:
Why is this regular expression faster?
I'm writing a Telnet client of sorts in C# and part of what I have to parse are ANSI/VT100 escape sequences, specifically, just those used for colour and formatting (detailed here).
One method I have finds all the codes and removes them, so I can render the text without any formatting if needed:
public static string StripStringFormating(string formattedString)
{
if (rTest.IsMatch(formattedString))
return rTest.Replace(formattedString, string.Empty);
else
return formattedString;
}
I'm new to regular expressions, and it was suggested that I use this:
static Regex rText = new Regex(@"\e\[[\d;]+m", RegexOptions.Compiled);
However, this failed if the escape code was incomplete due to an error on the server. So then this was suggested, but my friend warned it might be slower (this one also matches another condition (z) that I might come across later):
static Regex rTest =
new Regex(@"(\e(\[([\d;]*[mz]?))?)?", RegexOptions.Compiled);
This not only worked, but was in fact faster too, and reduced the impact on my text rendering. Can someone explain to a regexp newbie why? :)
A:
Do you really want to do run the regexp twice? Without having checked (bad me) I would have thought that this would work well:
public static string StripStringFormating(string formattedString)
{
return rTest.Replace(formattedString, string.Empty);
}
If it does, you should see it run ~twice as fast...
A:
The reason why #1 is slower is that [\d;]+ is a greedy quantifier. Using +? or *? is going to do lazy quantifying. See MSDN - Quantifiers for more info.
You may want to try:
"(\e\[(\d{1,2};)*?[mz]?)?"
That may be faster for you.
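If you want to try that in the same compiled form as the declarations above (the field name here is just illustrative):
// Same pattern as suggested above, wrapped the way the original rText/rTest fields are declared.
static Regex rLazy = new Regex(@"(\e\[(\d{1,2};)*?[mz]?)?", RegexOptions.Compiled);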
A:
Without doing detailed analysis, I'd guess that it's faster because of the question marks. These allow the regular expression to be "lazy," and stop as soon as they have enough to match, rather than checking if the rest of the input matches.
I'm not entirely happy with this answer though, because this mostly applies to question marks after * or +. If I were more familiar with the input, it might make more sense to me.
(Also, for the code formatting, you can select all of your code and press Ctrl+K to have it add the four spaces required.)
A:
I'm not sure if this will help with what you are working on, but long ago I wrote a regular expression to parse ANSI graphic files.
(?s)(?:\e\[(?:(\d+);?)*([A-Za-z])(.*?))(?=\e\[|\z)
It will return each code and the text associated with it.
Input string:
<ESC>[1;32mThis is bright green.<ESC>[0m This is the default color.
Results:
[ [1, 32], m, This is bright green.]
[0, m, This is the default color.]
|
Why is this regular expression faster?
|
I'm writing a Telnet client of sorts in C# and part of what I have to parse are ANSI/VT100 escape sequences, specifically, just those used for colour and formatting (detailed here).
One method I have is one to find all the codes and remove them, so I can render the text without any formatting if needed:
public static string StripStringFormating(string formattedString)
{
if (rTest.IsMatch(formattedString))
return rTest.Replace(formattedString, string.Empty);
else
return formattedString;
}
I'm new to regular expressions and I was suggested to use this:
static Regex rText = new Regex(@"\e\[[\d;]+m", RegexOptions.Compiled);
However, this failed if the escape code was incomplete due to an error on the server. So then this was suggested, but my friend warned it might be slower (this one also matches another condition (z) that I might come across later):
static Regex rTest =
new Regex(@"(\e(\[([\d;]*[mz]?))?)?", RegexOptions.Compiled);
This not only worked, but was in fact faster to and reduced the impact on my text rendering. Can someone explain to a regexp newbie, why? :)
|
[
"Do you really want to do run the regexp twice? Without having checked (bad me) I would have thought that this would work well:\npublic static string StripStringFormating(string formattedString)\n{ \n return rTest.Replace(formattedString, string.Empty);\n}\n\nIf it does, you should see it run ~twice as fast...\n",
"The reason why #1 is slower is that [\\d;]+ is a greedy quantifier. Using +? or *? is going to do lazy quantifing. See MSDN - Quantifiers for more info.\nYou may want to try:\n\"(\\e\\[(\\d{1,2};)*?[mz]?)?\"\n\nThat may be faster for you.\n",
"Without doing detailed analysis, I'd guess that it's faster because of the question marks. These allow the regular expression to be \"lazy,\" and stop as soon as they have enough to match, rather than checking if the rest of the input matches.\nI'm not entirely happy with this answer though, because this mostly applies to question marks after * or +. If I were more familiar with the input, it might make more sense to me.\n(Also, for the code formatting, you can select all of your code and press Ctrl+K to have it add the four spaces required.)\n",
"I'm not sure if this will help with what you are working on, but long ago I wrote a regular expression to parse ANSI graphic files.\n(?s)(?:\\e\\[(?:(\\d+);?)*([A-Za-z])(.*?))(?=\\e\\[|\\z)\n\nIt will return each code and the text associated with it.\nInput string:\n<ESC>[1;32mThis is bright green.<ESC>[0m This is the default color.\n\nResults:\n[ [1, 32], m, This is bright green.]\n[0, m, This is the default color.]\n\n"
] |
[
4,
3,
1,
1
] |
[] |
[] |
[
"ansi",
"regex"
] |
stackoverflow_0000004870_ansi_regex.txt
|
Q:
Is there any tool which can generate a report for a valid C program
Is there any tool that can parse a valid C program and generate a report which contains a list of functions, global variables, #define constants, local variables in each function, etc.?
A:
Doxygen does all of the above.
A:
Try exuberant-ctags with the -x option and tell it to generate all of its kinds.
Exuberant CTAGS is the default ctags on many linux distros.
You might try: exuberant-ctags -x --c-kinds=cdefglmnpstuvx --language-force=c filename
will even work if filename doesn't have .c extension.
You can use exuberant-ctags --list-kinds=c to see the possible tags.
Under windows, the cygwin environment supports ctags. I'm not sure if there's a windows build that doesn't need cygwin.
A:
There are a few tools, depending on what you want to do. I'm not sure what you mean by "report"; things like lxr will produce HTML with cross-referenced links. But for a person to use to help understand some code, try ncc or cscope (the latter of which is in most Linux distributions); some of the IDEs (like Eclipse) also have some of these features.
Older alternatives to cscope are ctags and etags.
|
Is there any tool which can generate a report for a valid C program
|
Is there any tool that can parse a valid C program and generate a report which contains list of functions, global variables, #define constants, local variables in each function, etc.
|
[
"Doxygen does all of the above.\n",
"Try exuberant-ctags with the -x option and tell it to generate all of its kinds.\nExuberant CTAGS is the default ctags on many linux distros.\nYou might try: exuberant-ctags -x --c-kinds=cdefglmnpstuvx --language-force=c filename\nwill even work if filename doesn't have .c extension.\nYou can use exuberant-ctags --list-kinds=c to see the possible tags.\nUnder windows, the cygwin environment supports ctags. I'm not sure if there's a windows build that doesn't need cygwin.\n",
"There are a few tools, depending on what you want to do. I'm not sure what you mean by \"report\", things like lxr will do html etc. cross referenced links. But for a person to use to help understand some code, then ncc or cscope (the later of which is in most Linux distributions) also some of the IDEs have some of these features (like eclipse).\n Older alternatives to cscope are ctags and etags.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"c",
"parsing",
"report"
] |
stackoverflow_0000084286_c_parsing_report.txt
|
Q:
NullReferenceException on instantiated object?
This is a segment of code from an app I've inherited; a user got a Yellow Screen of Death:
Object reference not set to an instance of an object
on the line:
bool l_Success ...
Now I'm 95% sure the faulty argument is ref l_Monitor which is very weird considering the object is instantiated a few lines before. Anyone have a clue why it would happen? Note that I have seen the same issue pop up in other places in the code.
IDMS.Monitor l_Monitor = new IDMS.Monitor();
l_Monitor.LogFile.Product_ID = "SE_WEB_APP";
if (m_PermType_RadioButtonList.SelectedIndex == -1) {
l_Monitor.LogFile.Log(
Nortel.IS.IDMS.LogFile.MessageTypes.ERROR,
"No permission type selected"
);
return;
}
bool l_Success = SE.UI.Utilities.GetPermissionList(
ref l_Monitor,
ref m_CPermissions_ListBox,
(int)this.ViewState["m_Account_Share_ID"],
(m_PermFolders_DropDownList.Enabled)
? m_PermFolders_DropDownList.SelectedItem.Value
: "-1",
(SE.Types.PermissionType)m_PermType_RadioButtonList.SelectedIndex,
(SE.Types.PermissionResource)m_PermResource_RadioButtonList.SelectedIndex);
A:
Are you sure that one of the properties being accessed on the l_Monitor instance isn't null?
A:
Sprinkle in a few variables for all the property queries on that (loooooongg) line temporarily. Run the debugger, check values, and corner the little bug.
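As a rough sketch of that suggestion, using the identifiers from the posted snippet (purely a debugging aid, not a fix):
// Pull each sub-expression into its own local so the debugger (or the exception's
// line number) pinpoints exactly which piece is null.
int l_AccountShareId = (int)this.ViewState["m_Account_Share_ID"];
string l_Folder = m_PermFolders_DropDownList.Enabled
    ? m_PermFolders_DropDownList.SelectedItem.Value
    : "-1";
SE.Types.PermissionType l_Type =
    (SE.Types.PermissionType)m_PermType_RadioButtonList.SelectedIndex;
SE.Types.PermissionResource l_Resource =
    (SE.Types.PermissionResource)m_PermResource_RadioButtonList.SelectedIndex;

bool l_Success = SE.UI.Utilities.GetPermissionList(
    ref l_Monitor, ref m_CPermissions_ListBox,
    l_AccountShareId, l_Folder, l_Type, l_Resource);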
A:
I'm inclined to agree with the others; it sounds like one of the parameters you are passing to SE.UI.Utilities.GetPermissionList is null, which is causing the exception. Your best bet is to fire up the debugger and check what the variables are before that code is called.
A:
The NullReferenceException was actually thrown within a catch block, so the stack trace couldn't display that line of code and instead stopped at the caller.
It was indeed one of the properties of the l_Monitor instance.
|
NullReferenceException on instantiated object?
|
This is a segment of code from an app I've inherited, a user got a Yellow screen of death:
Object reference not set to an instance of an object
on the line:
bool l_Success ...
Now I'm 95% sure the faulty argument is ref l_Monitor which is very weird considering the object is instantiated a few lines before. Anyone have a clue why it would happen? Note that I have seen the same issue pop up in other places in the code.
IDMS.Monitor l_Monitor = new IDMS.Monitor();
l_Monitor.LogFile.Product_ID = "SE_WEB_APP";
if (m_PermType_RadioButtonList.SelectedIndex == -1) {
l_Monitor.LogFile.Log(
Nortel.IS.IDMS.LogFile.MessageTypes.ERROR,
"No permission type selected"
);
return;
}
bool l_Success = SE.UI.Utilities.GetPermissionList(
ref l_Monitor,
ref m_CPermissions_ListBox,
(int)this.ViewState["m_Account_Share_ID"],
(m_PermFolders_DropDownList.Enabled)
? m_PermFolders_DropDownList.SelectedItem.Value
: "-1",
(SE.Types.PermissionType)m_PermType_RadioButtonList.SelectedIndex,
(SE.Types.PermissionResource)m_PermResource_RadioButtonList.SelectedIndex);
|
[
"You sure that one of the properties trying to be accessed on the l_Monitor instance isn't null?\n",
"Sprinkle in a few variables for all the property-queries on that (loooooongg) line temporarily. Run the debugger, Check values and Corner the little bug.\n",
"I'm inclined to agree with the others; it sounds like one of the parameters you are passing SE.UI.Utilities.GetPermissionList is null which is causing the exception. Your best bet is to fire up the debugger and check was the variables are before that code is called.\n",
"The NullReferenceException was actually thrown within a catch block so the stack trace couldn't display that line of code so instead it stopped at the caller. \nIt was indeed one of the properties of the l_Monitor instance.\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"c#",
"yellow_screen_of_death"
] |
stackoverflow_0000055130_asp.net_c#_yellow_screen_of_death.txt
|
Q:
PHP - RSS builder
I have an old website that generates its own RSS every time a new post is created. Everything worked when I was on a server with PHP 4, but now that the host has changed to PHP 5, I always get "badly formed XML". I was using xml_parser_create(), xml_parse(...) and fwrite(..) to save everything.
Here is the example when saving (I read the file before saving to append the old RSS lines, of course).
function SaveXml()
{
if (!is_file($this->getFileName()))
{
        //Create the file
$file_handler = fopen($this->getFileName(),"w");
fwrite($file_handler,"");
fclose($file_handler);
    }//End of if
    //Header xml version="1.0" encoding="iso-8859-1"
$strFileData = '<?xml version="1.0" encoding="iso-8859-1" ?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>'.$this->getProjectName().'</title><link>http://www.mywebsite.com</link><description>My description</description><lastBuildDate>' . date("r"). '</lastBuildDate>';
//Data
reset($this->arrData);
foreach($this->arrData as $i => $value)
{
$strFileData .= '<item>';
$strFileData .= '<title>'. $this->GetNews($i,0) . '</title>';
$strFileData .= '<pubDate>'. $this->GetNews($i,1) . '</pubDate>';
$strFileData .= '<dc:creator>'. $this->GetNews($i,2) . '</dc:creator>';
$strFileData .= '<description><![CDATA['. $this->GetNews($i,3) . ']]> </description>';
$strFileData .= '<link><![CDATA['. $this->GetNews($i,4) . ']]></link>';
$strFileData .= '<guid>'. $this->GetNews($i,4) . '</guid>';
//$strFileData .= '<category>'. $this->GetNews($i,5) . '</category>';
$strFileData .= '<category>Mycategory</category>';
$strFileData .= '</item>';
    }//End of for i
$strFileData .= '</channel></rss>';
    if (file_exists($this->getFileName()))//Delete the file
unlink($this->getFileName());
$file_handler = fopen($this->getFileName(),"w");
fwrite($file_handler,$strFileData);
fclose($file_handler);
}//End of SaveXml
My question is: how do you create and fill your RSS in PHP?
A:
I would use simpleXML to create the required structure and export the XML. Then I'd cache it to disk with file_put_contents().
A:
At swcombine.com we use Feedcreator. Use that one and your problem will be gone. :)
Here is the PHP code to use it once installed:
function feed_simnews() {
$objRSS = new UniversalFeedCreator();
$objRSS->title = 'My News';
$objRSS->link = 'http://link.to/news.php';
$objRSS->description = 'daily news from me';
$objRSS->xsl = 'http://link.to/feeds/feedxsl.xsl';
$objRSS->language = 'en';
$objRSS->copyright = 'Copyright: Mine!';
$objRSS->webmaster = '[email protected]';
$objRSS->syndicationURL = 'http://link.to/news/simnews.php';
$objRSS->ttl = 180;
$objImage = new FeedImage();
$objImage->title = 'my logo';
$objImage->url = 'http://link.to/feeds/logo.jpg';
$objImage->link = 'http://link.to';
$objImage->description = 'Feed provided by link.to. Click to visit.';
$objImage->width = 120;
$objImage->height = 60;
$objRSS->image = $objImage;
//Function retrieving an array of your news from date start to last week
$colNews = getYourNews(array('start_date' => 'Last week'));
foreach($colNews as $p) {
$objItem = new FeedItem();
$objItem->title = $p->title;
$objItem->description = $p->body;
$objItem->link = $p->link;
$objItem->date = $p->date;
$objItem->author = $p->author;
$objItem->guid = $p->guid;
$objRSS->addItem($objItem);
}
$objRSS->saveFeed('RSS2.0', 'http://link.to/feeds/news.xml', false);
};
Quite KISS. :)
A:
I've used this LGPL-licensed feedcreator class in the past and it worked quite well for the very simple use I had for it.
A:
PHP5 now comes with the SimpleXML extension; it's a pretty quick way to build valid XML if your needs aren't complicated.
However, the problem you're describing doesn't seem to be an issue of implementation but more a problem of syntax. Perhaps you could update your question with a code example, or a copy of the XML that is produced.
A:
Not a full answer, but you don't have to parse your own XML. It will hurt performance and reliability.
But definitely make sure it is well-formed. It shouldn't be very hard if you generate it by hand or using general-purpose tools. Or maybe your included HTML ruins it?
A:
There are lots of things that can make XML malformed. It might be a problem with character entities (a '<', '>', or '&' in the data between the XML tags). Try running anything output from a database through htmlentities() when you concatenate the string. Do you have an example of the generated XML for us to look at so we can see where the problem is?
|
PHP - RSS builder
|
I have a old website that generate its own RSS everytime a new post is created. Everything worked when I was on a server with PHP 4 but now that the host change to PHP 5, I always have a "bad formed XML". I was using xml_parser_create() and xml_parse(...) and fwrite(..) to save everything.
Here is the example when saving (I read before save to append old RSS line of course).
function SaveXml()
{
    if (!is_file($this->getFileName()))
    {
        // Create the file
        $file_handler = fopen($this->getFileName(), "w");
        fwrite($file_handler, "");
        fclose($file_handler);
    } // End if

    // Header: xml version="1.0" encoding="iso-8859-1"
    $strFileData = '<?xml version="1.0" encoding="iso-8859-1" ?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>'.$this->getProjectName().'</title><link>http://www.mywebsite.com</link><description>My description</description><lastBuildDate>' . date("r"). '</lastBuildDate>';

    // Data
    reset($this->arrData);
    foreach($this->arrData as $i => $value)
    {
        $strFileData .= '<item>';
        $strFileData .= '<title>'. $this->GetNews($i,0) . '</title>';
        $strFileData .= '<pubDate>'. $this->GetNews($i,1) . '</pubDate>';
        $strFileData .= '<dc:creator>'. $this->GetNews($i,2) . '</dc:creator>';
        $strFileData .= '<description><![CDATA['. $this->GetNews($i,3) . ']]> </description>';
        $strFileData .= '<link><![CDATA['. $this->GetNews($i,4) . ']]></link>';
        $strFileData .= '<guid>'. $this->GetNews($i,4) . '</guid>';
        //$strFileData .= '<category>'. $this->GetNews($i,5) . '</category>';
        $strFileData .= '<category>Mycategory</category>';
        $strFileData .= '</item>';
    } // End foreach

    $strFileData .= '</channel></rss>';

    if (file_exists($this->getFileName())) // Delete the existing file
        unlink($this->getFileName());

    $file_handler = fopen($this->getFileName(), "w");
    fwrite($file_handler, $strFileData);
    fclose($file_handler);
} // End of SaveXml
My question is: how do you create and fill up your RSS in PHP?
|
[
"I would use simpleXML to create the required structure and export the XML. Then I'd cache it to disk with file_put_contents().\n",
"At swcombine.com we use Feedcreator. Use that one and your problem will be gone. :)\nHere is the PHP code to use it once installed:\nfunction feed_simnews() {\n $objRSS = new UniversalFeedCreator();\n $objRSS->title = 'My News';\n $objRSS->link = 'http://link.to/news.php';\n $objRSS->description = 'daily news from me';\n $objRSS->xsl = 'http://link.to/feeds/feedxsl.xsl';\n $objRSS->language = 'en';\n $objRSS->copyright = 'Copyright: Mine!';\n $objRSS->webmaster = '[email protected]';\n $objRSS->syndicationURL = 'http://link.to/news/simnews.php';\n $objRSS->ttl = 180;\n\n $objImage = new FeedImage();\n $objImage->title = 'my logo';\n $objImage->url = 'http://link.to/feeds/logo.jpg';\n $objImage->link = 'http://link.to';\n $objImage->description = 'Feed provided by link.to. Click to visit.';\n $objImage->width = 120;\n $objImage->height = 60;\n $objRSS->image = $objImage;\n\n //Function retrieving an array of your news from date start to last week\n $colNews = getYourNews(array('start_date' => 'Last week'));\n\n foreach($colNews as $p) {\n $objItem = new FeedItem();\n $objItem->title = $p->title;\n $objItem->description = $p->body;\n $objItem->link = $p->link;\n $objItem->date = $p->date;\n $objItem->author = $p->author;\n $objItem->guid = $p->guid;\n\n $objRSS->addItem($objItem);\n }\n\n $objRSS->saveFeed('RSS2.0', 'http://link.to/feeds/news.xml', false);\n};\n\nQuite KISS. :)\n",
"I've used this LGPL-licensed feedcreator class in the past and it worked quite well for the very simple use I had for it.\n",
"PHP5 now comes with the SimpleXML extension, it's a pretty quick way to build valid XML if your needs aren't complicated.\nHowever, the problem you're suggesting doesn't seem to an issue of implementation more a problem of syntax. Perhaps you could update your question with a code example, or, a copy of the XML that is produced.\n",
"Not a full answer, but you don't have to parse your own XML. It will hurt performance and reliability.\nBut definitely make sure it is well-formed. It shouldn't be very hard if you generate it by hand or using general-purpose tools. Or maybe your included HTML ruins it?\n",
"There are lots of things that can make XML malformed. It might be a problem with character entities (a '<', '>', or '&' in the data between the XML tags). Try running anything output from a database through htmlentities() when you concatenate the string. Do you have an example of the generated XML for us to look at so we can see where the problem is?\n"
] |
[
2,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"php",
"rss"
] |
stackoverflow_0000082872_php_rss.txt
|
Q:
asp.net Convert CSV string to string[]
Is there an easy way to convert a string from csv format into a string[] or list?
I can guarantee that there are no commas in the data.
A:
String.Split is just not going to cut it, but a Regex.Split may - Try this one:
using System.Text.RegularExpressions;
string[] line;
line = Regex.Split( input, ",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))");
Where 'input' is the csv line. This will handle quoted delimiters, and should give you back an array of strings representing each field in the line.
A:
If you want robust CSV handling, check out FileHelpers
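A rough sketch of what that looks like; the record class and attribute names are assumptions based on the FileHelpers documentation, so check them against the version you install:
using FileHelpers;

[DelimitedRecord(",")]
public class CsvRow
{
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string Name;
    public int Value;
}

// Parse a CSV string in memory
FileHelperEngine engine = new FileHelperEngine(typeof(CsvRow));
CsvRow[] rows = (CsvRow[])engine.ReadString("\"Smith, John\",42");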
A:
string[] splitString = origString.Split(',');
(Following comment not added by original answerer)
Please keep in mind that this answer addresses the SPECIFIC case where there are guaranteed to be NO commas in the data.
A:
Try:
Regex rex = new Regex(",(?=([^\"]*\"[^\"]*\")*(?![^\"]*\"))");
string[] values = rex.Split( csvLine );
Source: http://weblogs.asp.net/prieck/archive/2004/01/16/59457.aspx
A:
You can take a look at using the Microsoft.VisualBasic assembly with the
Microsoft.VisualBasic.FileIO.TextFieldParser
It handles CSV (or any delimiter) with quotes. I've found it quite handy recently.
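For reference, a small sketch of using it on a single line held in a string (the sample input is made up):
using System.IO;
using Microsoft.VisualBasic.FileIO;   // add a reference to Microsoft.VisualBasic

string line = "1,\"Smith, John\",42";
using (TextFieldParser parser = new TextFieldParser(new StringReader(line)))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;
    string[] fields = parser.ReadFields();   // { "1", "Smith, John", "42" }
}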
A:
There isn't a simple way to do this well, if you want to account for quoted elements with embedded commas, especially if they are mixed with non-quoted fields.
You will also probably want to convert the lines to a dictionary, keyed by the column name.
My code to do this is several hundred lines long.
I think there are some examples on the web, open source projects, etc.
A:
Try this:
static IEnumerable<string> CsvParse(string input)
{
    // null strings return a one-element enumeration containing null.
    if (input == null)
    {
        yield return null;
        yield break;
    }

    // we will 'eat' bits of the string until it's gone.
    String remaining = input;
    while (remaining.Length > 0)
    {
        if (remaining.StartsWith("\"")) // deal with quotes
        {
            remaining = remaining.Substring(1); // pass over the initial quote.

            // find the end quote.
            int endQuotePosition = remaining.IndexOf("\"");
            switch (endQuotePosition)
            {
                case -1:
                    // unclosed quote.
                    throw new ArgumentOutOfRangeException("Unclosed quote");
                case 0:
                    // the empty quote
                    yield return "";
                    remaining = remaining.Substring(2);
                    break;
                default:
                    string quote = remaining.Substring(0, endQuotePosition).Trim();
                    remaining = remaining.Substring(endQuotePosition + 1);
                    // skip the delimiter that follows the closing quote, if any,
                    // so a quoted field does not produce a spurious empty cell.
                    if (remaining.StartsWith(","))
                        remaining = remaining.Substring(1);
                    yield return quote;
                    break;
            }
        }
        else // deal with commas
        {
            int nextComma = remaining.IndexOf(",");
            switch (nextComma)
            {
                case -1:
                    // no more commas -- read to end
                    yield return remaining.Trim();
                    yield break;
                case 0:
                    // the empty cell
                    yield return "";
                    remaining = remaining.Substring(1);
                    break;
                default:
                    // get everything until next comma
                    string cell = remaining.Substring(0, nextComma).Trim();
                    remaining = remaining.Substring(nextComma + 1);
                    yield return cell;
                    break;
            }
        }
    }
}
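For reference, a quick usage sketch (ToArray() needs a using System.Linq directive):
// Quoted fields keep their embedded commas
string[] fields = CsvParse("1,42,\"Smith, John\"").ToArray();
// fields => { "1", "42", "Smith, John" }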
A:
CsvString.Split(',');
A:
Get a string[] of all the lines:
string[] lines = System.IO.File.ReadAllLines("yourfile.csv");
Then loop through and split those lines (this is error-prone because it doesn't check for commas in quote-delimited fields):
foreach (string line in lines)
{
    string[] items = line.Split(',');
}
A:
string test = "one,two,three";
string[] okNow = test.Split(',');
A:
string s = "1,2,3,4,5";
string[] myStrings = s.Split(new char[] { ',' });
Note that Split() takes an array of characters to split on.
A:
char[] separationChar = { ';' }; // or '\t', ',' etc.
var strArray = strCSV.Split(separationChar);
A:
string[] splitStrings = myCsv.Split(",".ToCharArray());
A:
Some CSV files have double quotes around the values along with a comma. Therefore sometimes you can split on this string literal: ","
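A sketch of that approach; it only holds up when every field is quoted:
string line = "\"one\",\"two, three\",\"four\"";
string[] fields = line.Trim('"').Split(new[] { "\",\"" }, StringSplitOptions.None);
// fields => { "one", "two, three", "four" }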
A:
A Csv file with quoted fields is not a Csv file. Far more things (Excel) output without quotes rather than with quotes when you select "Csv" in a save as.
If you want one you can use, free, or commit to, here's mine that also does IDataReader/Record. It also uses DataTable to define/convert/enforce columns and DbNull.
http://github.com/claco/csvdatareader/
It doesn't do quotes.. yet. I just tossed it together a few days ago to scratch an itch.
Forgotten Semicolon: Nice link. Thanks.
cfeduke: Thanks for the tip to Microsoft.VisualBasic.FileIO.TextFieldParser. Going into CsvDataReader tonight.
A:
http://github.com/claco/csvdatareader/ updated using TextFieldParser suggested by cfeduke.
Just a few props away from exposing separators/trimspaces/type if you just need code to steal.
A:
I was already splitting on tabs so this did the trick for me:
public static string CsvToTabDelimited(string line) {
    var ret = new StringBuilder(line.Length);
    bool inQuotes = false;
    for (int idx = 0; idx < line.Length; idx++) {
        if (line[idx] == '"') {
            inQuotes = !inQuotes;
        } else {
            if (line[idx] == ',') {
                ret.Append(inQuotes ? ',' : '\t');
            } else {
                ret.Append(line[idx]);
            }
        }
    }
    return ret.ToString();
}
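A quick usage sketch, assuming the data itself never contains tab characters:
string[] fields = CsvToTabDelimited("1,\"Smith, John\",42").Split('\t');
// fields => { "1", "Smith, John", "42" }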
|
asp.net Convert CSV string to string[]
|
Is there an easy way to convert a string from csv format into a string[] or list?
I can guarantee that there are no commas in the data.
|
[
"String.Split is just not going to cut it, but a Regex.Split may - Try this one:\nusing System.Text.RegularExpressions;\n\nstring[] line;\nline = Regex.Split( input, \",(?=(?:[^\\\"]*\\\"[^\\\"]*\\\")*(?![^\\\"]*\\\"))\");\n\nWhere 'input' is the csv line. This will handle quoted delimiters, and should give you back an array of strings representing each field in the line.\n",
"If you want robust CSV handling, check out FileHelpers\n",
"string[] splitString = origString.Split(',');\n(Following comment not added by original answerer)\nPlease keep in mind that this answer addresses the SPECIFIC case where there are guaranteed to be NO commas in the data.\n",
"Try:\nRegex rex = new Regex(\",(?=([^\\\"]*\\\"[^\\\"]*\\\")*(?![^\\\"]*\\\"))\");\nstring[] values = rex.Split( csvLine );\n\nSource: http://weblogs.asp.net/prieck/archive/2004/01/16/59457.aspx\n",
"You can take a look at using the Microsoft.VisualBasic assembly with the\nMicrosoft.VisualBasic.FileIO.TextFieldParser\n\nIt handles CSV (or any delimiter) with quotes. I've found it quite handy recently.\n",
"There isn't a simple way to do this well, if you want to account for quoted elements with embedded commas, especially if they are mixed with non-quoted fields.\nYou will also probably want to convert the lines to a dictionary, keyed by the column name.\nMy code to do this is several hundred lines long.\nI think there are some examples on the web, open source projects, etc.\n",
"Try this; \nstatic IEnumerable<string> CsvParse(string input)\n{\n // null strings return a one-element enumeration containing null.\n if (input == null)\n {\n yield return null;\n yield break;\n }\n\n // we will 'eat' bits of the string until it's gone.\n String remaining = input;\n while (remaining.Length > 0)\n {\n\n if (remaining.StartsWith(\"\\\"\")) // deal with quotes\n {\n remaining = remaining.Substring(1); // pass over the initial quote.\n\n // find the end quote.\n int endQuotePosition = remaining.IndexOf(\"\\\"\");\n switch (endQuotePosition)\n {\n case -1:\n // unclosed quote.\n throw new ArgumentOutOfRangeException(\"Unclosed quote\");\n case 0:\n // the empty quote\n yield return \"\";\n remaining = remaining.Substring(2);\n break;\n default:\n string quote = remaining.Substring(0, endQuotePosition).Trim();\n remaining = remaining.Substring(endQuotePosition + 1);\n yield return quote;\n break;\n }\n }\n else // deal with commas\n {\n int nextComma = remaining.IndexOf(\",\");\n switch (nextComma)\n {\n case -1:\n // no more commas -- read to end\n yield return remaining.Trim();\n yield break;\n\n case 0:\n // the empty cell\n yield return \"\";\n remaining = remaining.Substring(1);\n break;\n\n default:\n // get everything until next comma\n string cell = remaining.Substring(0, nextComma).Trim();\n remaining = remaining.Substring(nextComma + 1);\n yield return cell;\n break;\n }\n }\n }\n\n}\n\n",
"CsvString.split(',');\n\n",
"Get a string[] of all the lines: \nstring[] lines = System.IO.File.ReadAllLines(\"yourfile.csv\");\n\nThen loop through and split those lines (this error prone because it doesn't check for commas in quote-delimited fields):\nforeach (string line in lines)\n{\n string[] items = line.Split({','}};\n}\n\n",
"string test = \"one,two,three\";\nstring[] okNow = test.Split(',');\n\n",
"string s = \"1,2,3,4,5\";\n\nstring myStrings[] = s.Split({','}};\n\nNote that Split() takes an array of characters to split on.\n",
"separationChar[] = {';'}; // or '\\t' ',' etc.\nvar strArray = strCSV.Split(separationChar);\n\n",
"string[] splitStrings = myCsv.Split(\",\".ToCharArray());\n\n",
"Some CSV files have double quotes around the values along with a comma. Therefore sometimes you can split on this string literal: \",\" \n",
"A Csv file with Quoted fields, is not a Csv file. Far more things (Excel) output without quotes rather than with quotes when you select \"Csv\" in a save as.\nIf you want one you can use, free, or commit to, here's mine that also does IDataReader/Record. It also uses DataTable to define/convert/enforce columns and DbNull.\nhttp://github.com/claco/csvdatareader/\nIt doesn't do quotes.. yet. I just tossed it together a few days ago to scratch an itch.\nForgotten Semicolon: Nice link. Thanks.\ncfeduke: Thanks for the tip to Microsoft.VisualBasic.FileIO.TextFieldParser. Going into CsvDataReader tonight.\n",
"http://github.com/claco/csvdatareader/ updated using TextFieldParser suggested by cfeduke.\nJust a few props away from exposing separators/trimspaces/type ig you just need code to steal.\n",
"I was already splitting on tabs so this did the trick for me:\npublic static string CsvToTabDelimited(string line) {\n var ret = new StringBuilder(line.Length);\n bool inQuotes = false;\n for (int idx = 0; idx < line.Length; idx++) {\n if (line[idx] == '\"') {\n inQuotes = !inQuotes;\n } else {\n if (line[idx] == ',') {\n ret.Append(inQuotes ? ',' : '\\t');\n } else {\n ret.Append(line[idx]);\n }\n }\n }\n return ret.ToString();\n}\n\n"
] |
[
17,
8,
2,
2,
2,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c#",
"csv",
"string"
] |
stackoverflow_0000073385_c#_csv_string.txt
|
Q:
How do you override the string representation the HTML helper methods use for a model’s properties?
The html helper methods check the ViewDataDictionary for a value. The value can either be in the dictionary or in the Model, as a property. To extract the value, an internal sealed class named the ViewDataEvaluator uses PropertyDescriptor to get the value. Then, Convert.ToString() is called to convert the object returned to a string.
Desired code in Controller action
The controller action should only populate the Model, not format it (formatting the model is global).
Desired code in View
The view can render an HTML textbox and extract the string representation of the property with these lines of code:
<%=Html.TextBox(“Date”) %>
<%=Html.TextBox(“Time”) %>
<%=Html.TextBox(“UnitPrice”) %>
Binding Model's Property to HtmlHelper.TextBox()
For the textbox’s value, the UnitPrice property’s value from the model instance is converted to a string. I need to override this behavior with my own conversion to a string, which is per property – not per type. For example, I need a different string representation of a decimal for UnitPrice and another string representation of a decimal for UnitQuantity.
For example, I need to format the UnitPrice's decimal precision based on the market.
string decimalPlaces = ViewData.Model.Precision.ToString ();
<%=Html.TextBox(“UnitPrice”, ViewData.Model.TypeName.UnitPrice.ToString("N" + decimalPlaces)) %>
2-way databinding please
Just like the IModelBinder is the Parse for each property of the model, I need a Format for each property, kinda like Windows Forms binding, but based on the model instead of the control. This would enable the model to round-trip and have proper formatting. I would prefer a design where I could override the default formatting. In addition, my model is in a separate assembly, so attributed properties specifying a formatter are not an option.
Please note I need property specific formatting for a model, not type specific formatting.
A:
There's no way to specify a format with the helpers themselves. The approach you've taken will work. Another approach is to add the value pre-formatted into the ModelState.
EDIT: Are you sure you even want to format a text input with the currency? For example, what you would see in the input is:
<input type="text" name="UnitPrice" value="$1.23" />
When you post that back to the server, we won't understand it. Instead, I'd put the currency symbol outside of the text input. For example:
$<%= Html.TextBox("UnitPrice") %>
I'm sure there's an easy method to render "$" without hard-coding it so it's localizable, but I don't know what it is offhand.
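For what it's worth, the current culture's currency symbol is available from NumberFormatInfo, so one way to avoid hard-coding it in the view (a sketch, not necessarily the helper the answer had in mind):
<%= System.Globalization.CultureInfo.CurrentCulture.NumberFormat.CurrencySymbol %><%= Html.TextBox("UnitPrice") %>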
EDIT AGAIN
A comment from a developer on my team:
Well, to be fair, this isn’t that bad.
Often when you format a number or a
date it’s still understandable coming
back in. For example, padding a
number (like a ZIP code) to 5 digits,
padding a decimal to the hundredths,
formatting a date to be yyyy-mm-dd,
etc. will come in just fine. Adding
extra characters like currency symbols
will break, but normally input fields
don’t take or display currency symbols
anyway – it’s implied.
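One way to package the per-property formatting the question asks for is a small helper extension that takes an explicit format string. This is only a sketch: the helper name is made up, and it assumes the string-returning Html.TextBox(name, value) overload from the MVC previews of the time.
using System.Globalization;
using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class FormattedTextBoxExtensions
{
    // Hypothetical helper: format the value explicitly, then defer to the built-in TextBox helper
    public static string FormattedTextBox(this HtmlHelper html, string name, IFormattable value, string format)
    {
        string text = (value == null) ? string.Empty : value.ToString(format, CultureInfo.CurrentCulture);
        return html.TextBox(name, text);
    }
}
The view could then ask for market-specific precision with something like <%= Html.FormattedTextBox("UnitPrice", ViewData.Model.UnitPrice, "N" + ViewData.Model.Precision) %>.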
|
How do you override the string representation the HTML helper methods use for a model’s properties?
|
The html helper methods check the ViewDataDictionary for a value. The value can either be in the dictionary or in the Model, as a property. To extract the value, an internal sealed class named the ViewDataEvaluator uses PropertyDescriptor to get the value. Then, Convert.ToString() is called to convert the object returned to a string.
Desired code in Controller action
The controller action should only populate the Model, not format it (formatting the model is global).
Desired code in View
The view can render an HTML textbox and extract the string representation of the property with these lines of code:
<%=Html.TextBox(“Date”) %>
<%=Html.TextBox(“Time”) %>
<%=Html.TextBox(“UnitPrice”) %>
Binding Model's Property to HtmlHelper.TextBox()
For the textbox’s value, the UnitPrice property’s value from the model instance is converted to a string. I need to override this behavior with my own conversion to a string, which is per property – not per type. For example, I need a different string representation of a decimal for UnitPrice and another string representation of a decimal for UnitQuantity.
For example, I need to format the UnitPrice's decimal precision based on the market.
string decimalPlaces = ViewData.Model.Precision.ToString ();
<%=Html.TextBox(“UnitPrice”, ViewData.Model.TypeName.UnitPrice.ToString("N" + decimalPlaces)) %>
2-way databinding please
Just like the IModelBinder is the Parse for each property of the model, I need a Format for each property, kinda like Windows Forms binding, but based on the model instead of the control. This would enable the model to round-trip and have proper formatting. I would prefer a design where I could override the default formatting. In addition, my model is in a separate assembly, so attributed properties specifying a formatter are not an option.
Please note I need property specific formatting for a model, not type specific formatting.
|
[
"There's no way to specify a format with the helpers themselves. The approach you've taken will work. Another approach is to add the value pre-formatted into the ModelState.\nEDIT: Are you sure you even want to format a text input with the currency? For example, what you would see in the input is:\n<input type=\"text\" name=\"UnitPrice\" value=\"$1.23\" />\n\nWhen you post that back to the server, we won't understand it. Instead, I'd put the currency symbol outside of the text input. For example:\n$<%= Html.TextBox(\"UnitPrice\") %>\n\nI'm sure there's an easy method to render \"$\" without hard-coding it so it's localizable, but I don't know what it is offhand.\nEDIT AGAIN\nA comment from a developer on my team: \n\nWell, to be fair, this isn’t that bad.\n Often when you format a number or a\n date it’s still understandable coming\n back in. For example, padding a\n number (like a ZIP code) to 5 digits,\n padding a decimal to the hundredths,\n formatting a date to be yyyy-mm-dd,\n etc. will come in just fine. Adding\n extra characters like currency symbols\n will break, but normally input fields\n don’t take or display currency symbols\n anyway – it’s implied.\n\n"
] |
[
3
] |
[] |
[] |
[
"asp.net_mvc"
] |
stackoverflow_0000084418_asp.net_mvc.txt
|
Q:
Error handling reporting methods with ASP.NET 2.0 / C#
Does anyone know of an open source module or a good method for handling application errors and e-mailing them to an admin and/or saving to a database?
A:
ELMAH is a great drop-in tool for this. DLL in the bin directory, and some markup to add to the web.config and you're done.
A:
log4net can save errors to a database or send emails. We use this at my job (despite the fact it caused Jeff Atwood much stress in the SO beta). Catch the errors in the global.asax page in the Application Error method.
A:
You should look into the application_error method in the global.asax page to catch the errors, and into either log4net and the enterprise library to log those errors in whatever form you choose to any provider you choose -- such as database or e-mail.
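A minimal sketch of that wiring with log4net (the logger name is a placeholder; the e-mail or database appenders are configured in web.config):
// Global.asax.cs
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    log4net.LogManager.GetLogger("AppLogger").Error("Unhandled exception", ex);
}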
A:
log4net is open source, fairly easy to learn, and can be configured to log to multiple sources (file, email, db, eventlog) very easily.
A:
CALM does this and more. It's not free or open source, but it is inexpensive and comprehensive.
caveat: I am the author of CALM
A:
MS Enterprise Library definitely offers this functionality. It has a little bit of a learning curve, but once it's set up, it is pretty easy to maintain. It also offers several different logging sources, but is not open source.
A:
I have a free provider as well, available from my website, that is a drop-in solution for error handling.
|
Error handling reporting methods with ASP.NET 2.0 / C#
|
Does anyone know of an open source module or a good method for handling application errors and e-mailing them to an admin and/or saving to a database?
|
[
"ELMAH is a great drop-in tool for this. DLL in the bin directory, and some markup to add to the web.config and you're done.\n",
"log4net can save errors to a database or send emails. We use this at my job (despite the fact it caused Jeff Atwood much stress in the SO beta). Catch the errors in the global.asax page in the Application Error method.\n",
"You should look into the application_error method in the global.asax page to catch the errors, and into either log4net and the enterprise library to log those errors in whatever form you choose to any provider you choose -- such as database or e-mail.\n",
"log4net is open source, fairly easy to learn, and can be configured to log to multiple sources (file, email, db, eventlog) very easily.\n",
"CALM does this and more. It's not free or open source, but it is inexpensive and comprehensive.\ncaveat: I am the author of CALM\n",
"MS Enterprise Library definitely offers this functionality. It has a little bit of a learning curve, but once it's set up, it is pretty easy to maintain. It also offers several different logging sources, but is not open source.\n",
"I have a free provider as well that is available from my website that is a drop in solution for error handling.\n"
] |
[
3,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"asp.net",
"c#"
] |
stackoverflow_0000084587_asp.net_c#.txt
|
Q:
Which JavaScript library is recommended for neat UI effects?
I need a JavaScript library that supports Ajax as well as help me in making simple and neat animation effects in a website I am working on.
Which library do you recommend?
A:
I would definitely recommend JQuery as the easiest to use and the one which requires you to write the least code. http://jquery.com/
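For a sense of how little code that involves, a small sketch (the element IDs and URL are made up):
// Fade in a panel and load content via Ajax when a link is clicked
$("#newsLink").click(function () {
    $("#newsPanel").fadeIn("slow");
    $.get("/news/latest.html", function (data) {
        $("#newsPanel").html(data);
    });
    return false;
});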
A:
http://script.aculo.us/
I think it fits your 'neat animation effects' requirement.
A:
That's a pretty broad question; some of the top open source stacks are:
- YUI (Yahoo)
 - Prototype with Scriptaculous
- ExtJs
- Dojo
It's a pretty personal choice based on code style, look and feel, and which one you prefer.
A:
Take a look at Dojo/Dijit/Dojox (http://dojotoolkit.org). They have a lot of cool special effects, and a lot more that will come in handy to anyone working with Javascript.
They also keep docs and related articles at http://dojocampus.org/
A:
I like ExtJS a lot. It's a great library for developing complex interfaces with javascript.
A:
I've been playing with Scriptaculous and jQuery. Both are good although I'm leaning more toward jQuery.
A:
I am a fan of YUI. It supports Animation and Ajax.
In addition, there is just a plethora of controls: menus, movable windows, tree controls, sliders, tabview, the list goes on and on. I have used their code and I've had a good cross-browser experience with it. Doesn't surprise me. They do extensive testing on the toolkit.
A:
Stack Overflow uses jQuery if that matters. Scriptaculous tries pretty hard to do everything that you can do in Flash. Dojo has an SVG abstraction that lets you do things that are not directly supported in JavaScript.
A:
Personally, I'm a fan of MooTools' animation classes (Fx.Tween, Fx.Morph, Fx.Transitions). Very straightforward and easy to use. For more advanced animation Fx.Slide, Fx.Scroll and Fx.Elements are also available...
It also has a neat Ajax class (Request) that will take care of all your ajax needs.
Obviously though this is my personal opinion... Any of the big ones (Yahoo UI, jQuery, MooTools, Prototype etc...) will all be able to do both Ajax and Animation, so I'd suggest looking at sample code from all those libraries and choosing the one you like the most!
A:
Spry has a lot of effects that seem to be relatively easy to use.
The downside (upside?) with Spry is its packaging. It's split into many separate pieces and parts.
So if you want to use a lot of Spry, you'll either be making several calls to external javascript files, or you'll be gluing them together on your own. Spry won't do it for you neatly (like YUI does).
However if you want to just use a single component or effect, Spry is very lightweight!
A:
If you want to implement some basic animation, jQuery is OK.
Also, personally I like prototype.js.
For more difficult things we use some features of the Microsoft AJAX client library.
|
Which JavaScript library is recommended for neat UI effects?
|
I need a JavaScript library that supports Ajax as well as help me in making simple and neat animation effects in a website I am working on.
Which library do you recommend?
|
[
"I would definitely recommend JQuery as the easiest to use and the one which requires you to write the least code. http://jquery.com/\n",
"http://script.aculo.us/\nI think it fits your 'neat animation effects' requirement.\n",
"That's a pretty broad question, some of the top open source stacks are\n - YUI (Yahoo)\n - Prototype with Scriptaculuous\n - ExtJs\n - Dojo\nIt's a pretty personal choice based on code style, look and feel, and which one you prefer.\n",
"Take a look at Dojo/Dijit/Dojox (http://dojotoolkit.org). They have a lot of cool special effects, and a lot more that will come in handy to anyone working with Javascript.\nThey also keep docs and related articles at http://dojocampus.org/\n",
"I like ExtJS a lot. It's a great library for developing complex interfaces with javascript.\n",
"I've been playing with Scriptaculous and jQuery. Both are good although I'm leaning more toward jQuery.\n",
"I am a fan of YUI. It supports Animation and Ajax.\nIn addition, there is just a plethora of controls: menus, movable windows, tree controls, sliders, tabview, the list goes on and on. I have used their code and I've had a good cross-browser experience with it. Doesn't surprise me. They do extensive testing on the toolkit.\n",
"Stack Overflow uses jQuery if that matters. Scriptaculous tries pretty hard to do everything that you can do in Flash. Dojo has an SVG abstraction that lets you do things that are not directly supported in JavaScript.\n",
"Personally, I'm a fan of MooTools' animation classes (Fx.Tween, Fx.Morph, Fx.Transitions). Very straight-forward and easy to use. For more advance animation Fx.Slide, Fx.Scroll and Fx.Elements are also available...\nIt also has a neat Ajax class (Request) that will take care of all your ajax needs.\nObviously though this is my personal opinion... Any of the big ones (Yahoo UI, jQuery, MooTools, Prototype etc...) will all be able to do both Ajax and Animation so I'd suggest looking at sample code from all those libraries and chose the one you like the most!\n",
"Spry has a lot of effects that seem to be relatively easy to use. \nThe downside (upside?) with Spry is its packaging. It's split into many separate pieces and parts. \nSo if you want to use a lot of Spry, you'll either be making several calls to external javascript files, or you'll be gluing them together on your own. Spry won't do it for you neatly (like YUI does). \nHowever if you want to just use a single component or effect, Spry is very lightweight!\n",
"\nIf you want to implement some basic animation jQuery is ok.\nAlso personally I like the prototype.js\nFor more difficult thing we using some features of Microsoft AJAX client library\n\n"
] |
[
9,
3,
3,
2,
2,
1,
1,
1,
1,
1,
0
] |
[] |
[] |
[
"html",
"javascript"
] |
stackoverflow_0000078570_html_javascript.txt
|
Q:
Deleting Rows from a SQL Table marked for Replication
I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients but the DTS operation bypasses the replication triggers so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought "no worries" I will just delete the rows again and then add them correctly via an insert statement and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
A:
Have you tried truncating the table?
A:
You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.
A:
You also could look into temporarily dropping the unique index and adding it back when you're done.
A:
Look into sp_mergedummyupdate
A:
Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys...and it should all consist of SQL statements that are allowed to trickle down the replication. It just isn't probably the best on performance...and definitely would impose some risk.
I haven't tried this first hand in a replicated environment...but it may be at least worth trying out.
A:
Thanks for the tips...I eventually found a solution:
I deleted the merge delete trigger from the table
Deleted the DTSed rows
Recreated the merge delete trigger
Added my rows correctly using an insert statement.
I was a little worried about fiddling with the merge triggers, but everything appears to be working correctly.
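In T-SQL terms the fix looks roughly like this; the object names are placeholders, and disabling the trigger stands in for the drop/recreate described above:
-- Run on the publisher
ALTER TABLE dbo.MyPublishedTable DISABLE TRIGGER del_MyMergeDeleteTrigger

DELETE FROM dbo.MyPublishedTable   -- remove the DTSed rows without tombstoning them

ALTER TABLE dbo.MyPublishedTable ENABLE TRIGGER del_MyMergeDeleteTrigger

-- Re-insert through the normal path so the merge insert trigger fires
INSERT INTO dbo.MyPublishedTable (Col1, Col2)
SELECT Col1, Col2 FROM BackupDb.dbo.MyPublishedTable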
|
Deleting Rows from a SQL Table marked for Replication
|
I erroneously deleted all the rows from an MS SQL 2000 table that is used in merge replication (the table is on the publisher). I then compounded the issue by using a DTS operation to retrieve the rows from a backup database and repopulate the table.
This has created the following issue:
The delete operation marked the rows for deletion on the clients but the DTS operation bypasses the replication triggers so the imported rows are not marked for insertion on the subscribers. In effect the subscribers lose the data although it is on the publisher.
So I thought "no worries" I will just delete the rows again and then add them correctly via an insert statement and they will then be marked for insertion on the subscribers.
This is my problem:
I cannot delete the DTSed rows because I get a "Cannot insert duplicate key row in object 'MSmerge_tombstone' with unique index 'uc1MSmerge_tombstone'." error. What I would like to do is somehow delete the rows from the table bypassing the merge replication trigger. Is this possible? I don't want to remove and redo the replication because the subscribers are 50+ windows mobile devices.
Edit: I have tried the Truncate Table command. This gives the following error "Cannot truncate table xxxx because it is published for replication"
|
[
"Have you tried truncating the table?\n",
"You may have to truncate the table and reset the ID field back to 0 if you need the inserted rows to have the same ID. If not, just truncate and it should be fine.\n",
"You also could look into temporarily dropping the unique index and adding it back when you're done. \n",
"Look into sp_mergedummyupdate\n",
"Would creating a second table be an option? You could create a second table, populate it with the needed data, add the constraints/indexes, then drop the first table and rename your second table. This should give you the data with the right keys...and it should all consist of SQL statements that are allowed to trickle down the replication. It just isn't probably the best on performance...and definitely would impose some risk.\nI haven't tried this first hand in a replicated environment...but it may be at least worth trying out.\n",
"Thanks for the tips...I eventually found a solution:\nI deleted the merge delete trigger from the table\nDeleted the DTSed rows\nRecreated the merge delete trigger\nAdded my rows correctly using an insert statement.\nI was a little worried bout fiddling with the merge triggers but every thing appears to be working correctly.\n"
] |
[
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"replication",
"sql_server",
"sql_server_2000"
] |
stackoverflow_0000083475_replication_sql_server_sql_server_2000.txt
|
Q:
Stored procedures or OR mappers?
Which is better? Or should you use an OR mapper with SP's? If you have a system with SP's already, is an OR mapper worth it?
A:
I like ORM's because you don't have to reinvent the wheel. That being said, it completely depends on your application needs, development style and that of the team.
This question has already been covered Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?
A:
There is nothing good to be said about stored procedures. They were a necessity 10 years ago, but every single benefit of using sprocs is no longer valid. The two most common arguments are regarding security and performance. The "sending stuff over the wire" crap doesn't hold either; I can certainly create a query dynamically to do everything on the server too. One thing the sproc proponents won't tell you is that it makes updates impossible if you are using column conflict resolution on a merge publication. Only DBAs who think they are the database overlord insist on sprocs because it makes their job look more impressive than it really is.
A:
This has been discussed at length on previous questions.
What are the pros and cons to keeping SQL in Stored Procs versus Code
A:
At my work, we mostly do line of business apps - contract work.
For this type of business, I'm a huge fan of ORM. About four years ago (when the ORM tools were less mature) we studied up on CSLA and rolled our own simplified ORM tool that we use in most of our applications, including some enterprise-class systems that have 100+ tables.
We estimate that this approach (which of course includes a lot of code generation) creates a time savings of up to 30% in our projects. Seriously, it's ridiculous.
There is a small performance trade-off, but it's insubstantial as long as you have a decent understanding of software development. There are always exceptions that require flexibility.
For instance, extremely data-intensive batch operations should still be handled in specialized sprocs if possible. You probably don't want to send 100,000 huge records over the wire if you could do it in a sproc right on the database.
This is the type of problem that newbie devs run into whether they're using ORM or not. They just have to see the results and if they're competent, they will get it.
What we've seen in our web apps is that usually the most difficult to solve performance bottlenecks are no longer database-related even with ORM. Rather, they're on the front-end (browser) due to bandwidth, AJAX overhead, etc. Even mid-range database servers are incredibly powerful these days.
Of course, other shops who work on much larger high-demand systems may have different experiences there. :)
A:
Stored procedures hands down. OR Mappers are language specific, and often add graphic slowdowns.
Stored procedures means you're not limited by the language interface, and you can merely tack on new interfaces to the database in forwards compatible ways.
My personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases. Database developers should realize the tasks people are trying to achieve with complicated OR-Mappers and create server-side utilities that assist in performing this task.
OR Mappers also are epic targets of the "leaky abstraction" syndrome ( Joel On Software: Leaky Abstractions )
Where it's quite easy to find things it just can't handle because of the abstraction layer not being psychic.
A:
Stored procedures are better, in my view, because they can have an independent security configuration from the underlying tables.
This means you can allow specific operations without allowing writes/reads to specific tables. It also limits the damage that people can do if they discover a SQL injection exploit.
A:
Definitely ORMs. More flexible, more portable (generally they tend to have portability built in). In case of slowness you may want to use caching or hand-tuned SQL in hot spots.
Generally stored procedures have several problems with maintainability.
separate from application (so many changes have now to be made in two places)
generally harder to change
harder to put under version control
harder to make sure they're updated (deployment issues)
portability (already mentioned)
A:
I personally have found that SP's tend to be faster performance wise, at least for the large data items that I execute on a regular basis. But I know many people that swear by OR tools and wouldn't do ANYTHING else.
A:
I would argue that using an OR mapper will increase the readability and maintainability of your application's source code, while using SPs will increase the performance of the application.
A:
They are not actually mutually exclusive, though to your point they usually are so.
The advantage of using Object Relational mapping is that you can swap out data sources. Not only database structure, but you could use any data source. With the advent of web services / service-oriented architecture / ESBs, in a larger corporation, it would be wise to consider having a higher-level separation of concerns than what you could get in stored procedures. However, in smaller companies and in applications that will never use a different data source, SP's can fit the bill fine. And one last point, it is not necessary to use an OR mapper to get the abstraction. My former team had great success by simply using an adapter model using Spring.NET to plug in the data source.
A:
@ Kent Fredrick
My personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases"
I think you're talking about the difference between the relational model and the object-oriented model. This is actually why we need ORMs, but the implementations of these models were done on purpose - it is not a design flaw - it is just how things turned out to be historically.
A:
Use stored procedures where you have identified a performance bottleneck. if you haven't identified a bottleneck, what are you doing with premature optimisation?
Use stored procedures where you are concerned about security access to a particular table.
Use stored procs when you have a SQL wizard who is prepared to sit and write complex queries that join together loads of tables in a legacy database- to do the things that are hard in an OR mapper.
Use the OR mapper for the other (at least) 80% of your database: where the selects and updates are so routine as to make access through stored procedures alone a pointless exercise in manual coding, and where updates are so infrequent that there is no performance cost. Use an OR mapper to automate the easy stuff.
Most OR mappers can talk to stored procs for the rest.
You should not use stored procs assuming that they're faster than a sql statement in a string, this is not necessarily the case in the last few versions of MS SQL server.
You do not need to use stored procs to thwart SQL injection attacks; there are other ways to make sure that your query parameters are strongly typed and not just string-concatenated.
You don't need to use an OR mapper to get a POCO domain model, but it does help.
A:
If you already have a data API that's exposed as sprocs, you'd need to justify a major architectural overhaul to go to ORM.
For a green-fields build, I'd evaluate several things:
If there's a dedicated DBA on the team, I'd lean to sprocs
If there's more than one application touching the same DB I'd lean to sprocs
If there's no possibility of database migration ever, I'd lean to sprocs
If I'm trying to implement MVCC in the DB, I'd lean to sprocs
If I'm deploying this as a product with potentially multiple backend dbs (MySql, MSSql, Oracle), I'd lean to ORM
If I'm on a tight deadline, I'd lean to ORM, since it's a faster way to create my domain model and keep it in sync with the data model (with appropriate tooling).
If I'm exposing the same domain model in multiple ways (web app, web service, RIA client), I'll lean to ORM as then data model is then hidden behind my ORM facade, making a robust domain model is more valuable to me.
I think performance is a bit of a red herring; hibernate seems to perform nearly as well or better than hand-coded SQL (due to its caching tiers), and it's easy to write a bad query in your sproc either way.
The most important criteria are probably the team's skillset and long-term database portability needs.
A:
Well, the SP's are already there. It doesn't make sense to scrap them, really. I guess the question is: does it make sense to use a mapper with SP's?
A:
"I'm trying to drive in a nail. Should I use the heel of my shoe or a glass bottle?"
Both Stored Procedures and ORMs are difficult and annoying to use for a developer (though not necessarily for a DBA or architect, respectively), because they incur a start-up cost and higher maintenance cost that doesn't guarantee a pay-off.
Both will pay off well if the requirements aren't expected to change much over the lifespan of the system, but they will get in your way if you're building the system to discover the requirements in the first place.
Straight-coded SQL or quasi-ORM like LINQ and ActiveRecord is better for build-to-discover projects (which happen in the enterprise a lot more than the PR wants you to think).
Stored Procedures are better in a language-agnostic environment, or where fine-grained control over permissions is required. They're also better if your DBA has a better grasp of the requirements than your programmers.
Full-blown ORMs are better if you do Big Design Up Front, use lots of UML, want to abstract the database back-end, and your architect has a better grasp of the requirements than either your DBA or programmers.
And then there's option #4: Use all of them. A whole system is not usually just one program, and while many programs may talk to the same database, they could each use whatever method is appropriate both for the program's specific task, and for its level of maturity. That is: you start with straight-coded SQL or LINQ, then mature the program by refactoring in ORM and Stored Procedures where you see they make sense.
|
Stored procedures or OR mappers?
|
Which is better? Or should you use an OR mapper with SP's? If you have a system with SP's already, is an OR mapper worth it?
|
[
"I like ORM's because you don't have to reinvent the wheel. That being said, it completely depends on your application needs, development style and that of the team.\nThis question has already been covered Why is parameterized SQL generated by NHibernate just as fast as a stored procedure?\n",
"There is nothing good to be said about stored procedures. There were a necessity 10 years ago but every single benefit of using sprocs is no longer valid. The two most common arguments are regarding security and performance. The \"sending stuff over the wire\" crap doesn't hold either, I can certainly create a query dynamically to do everything on the server too. One thing the sproc proponents won't tell you is that it makes updates impossible if you are using column conflict resolution on a merge publication. Only DBAs who think they are the database overlord insist on sprocs because it makes their job look more impressive than it really is.\n",
"This has been discussed at length on previous questions.\nWhat are the pros and cons to keeping SQL in Stored Procs versus Code\n",
"At my work, we mostly do line of business apps - contract work. \nFor this type of business, I'm a huge fan of ORM. About four years ago (when the ORM tools were less mature) we studied up on CSLA and rolled our own simplified ORM tool that we use in most of our applications,including some enterprise-class systems that have 100+ tables.\nWe estimate that this approach (which of course includes a lot of code generation) creates a time savings of up to 30% in our projects. Seriously, it's rediculous. \nThere is a small performance trade-off, but it's insubstantial as long as you have a decent understanding of software development. There are always exceptions that require flexibility.\nFor instance, extremely data-intensive batch operations should still be handled in specialized sprocs if possible. You probably don't want to send 100,000 huge records over the wire if you could do it in a sproc right on the database. \nThis is the type of problem that newbie devs run into whether they're using ORM or not. They just have to see the results and if they're competent, they will get it.\nWhat we've seen in our web apps is that usually the most difficult to solve performance bottlenecks are no longer database-related even with ORM. Rather, tey're on the front-end (browser) due to bandwidth, AJAX overhead, etc. Even mid-range database servers are incredibly powerful these days. \nOf course, other shops who work on much larger high-demand systems may have different experiences there. :) \n",
"Stored procedures hands down. OR Mappers are language specific, and often add graphic slowdowns. \nStored procedures means you're not limited by the language interface, and you can merely tack on new interfaces to the database in forwards compatible ways. \nMy personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases. Database developers should realize the tasks people are trying to achieve with complicated OR-Mappers and create server-side utilities that assist in performing this task. \nOR Mappers also are epic targets of the \"leaky abstraction\" syndrome ( Joel On Software: Leaky Abstractions )\nWhere its quite easy to find things it just cant handle because of the abstraction layer not being psychic. \n",
"Stored procedures are better, in my view, because they can have an independent security configuration from the underlying tables.\nThis means you can allow specific operations without out allowing writes/reads to specific tables. It also limits the damage that people can do if they discover a SQL injection exploit.\n",
"Definitely ORMs. More flexible, more portable (generally they tend to have portability built in). In case of slowness you may want to use caching or hand-tuned SQL in hot spots.\nGenerally stored procedures have several problems with maintainability.\n\nseparate from application (so many changes have now to be made in two places)\ngenerally harder to change\nharder to put under version control\nharder to make sure they're updated (deployment issues)\nportability (already mentioned)\n\n",
"I personally have found that SP's tend to be faster performance wise, at least for the large data items that I execute on a regular basis. But I know many people that swear by OR tools and wouldn't do ANYTHING else.\n",
"I would argue that using an OR mapper will increase readability and maintainability of your applications source code, while using SP will increase the performance of the application.\n",
"They are not actually mutually exclusive, though to your point they usually are so.\nThe advantage of using Object Relational mapping is that you can swap out data sources. Not only database structure, but you could use any data source. With advent web services / Service-oriented architecture / ESB's, in a larger corporation, it would be wise to consider having a higher level separation of concerns than what you could get in stored procedures. However, in smaller companies and in application that will never use a different data source, then SP's can fit the bill fine. And one last point, it is not necessary to use an OR mapper to get the abstraction. My former team had great success by simply using an adapter model using Spring.NET to plug-in the data source.\n",
"@ Kent Fredrick\n\nMy personal opinion of OR Mappers is their existence highlights a design flaw in the popular structure of databases\"\n\nI think you're talking about the difference between the relational model and object-oriented model. This is actually why we need ORMs, but the implementations of these models were done on purpose - it is not a design flow - it is just how things turned out to be historically.\n",
"Use stored procedures where you have identified a performance bottleneck. if you haven't identified a bottleneck, what are you doing with premature optimisation?\nUse stored procedures where you are concerned about security access to a particular table.\nUse stored procs when you have a SQL wizard who is prepared to sit and write complex queries that join together loads of tables in a legacy database- to do the things that are hard in an OR mapper.\nUse the OR mapper for the other (at least) 80% of your database: where the selects and updates are so routine as to make access through stored procedures alone a pointless exercise in manual coding, and where updates are so infrequent that there is no performance cost. Use an OR mapper to automate the easy stuff.\nMost OR mappers can talk to stored procs for the rest. \nYou should not use stored procs assuming that they're faster than a sql statement in a string, this is not necessarily the case in the last few versions of MS SQL server.\nYou do not need to use stored procs to thwart SQL injection attacks, there are other ways to do make sure that your query parameters are strongly typed and not just string-concatenated.\nYou don't need to use an OR mapper to get a POCO domain model, but it does help.\n",
"If you already have a data API that's exposed as sprocs, you'd need to justify a major architectural overhaul to go to ORM.\nFor a green-fields build, I'd evaluate several things:\n\nIf there's a dedicated DBA on the team, I'd lean to sprocs\nIf there's more than one application touching the same DB I'd lean to sprocs\nIf there's no possibility of database migration ever, I'd lean to sprocs\nIf I'm trying to implement MVCC in the DB, I'd lean to sprocs\nIf I'm deploying this as a product with potentially multiple backend dbs (MySql, MSSql, Oracle), I'd lean to ORM\nIf I'm on a tight deadline, I'd lean to ORM, since it's a faster way to create my domain model and keep it in sync with the data model (with appropriate tooling).\nIf I'm exposing the same domain model in multiple ways (web app, web service, RIA client), I'll lean to ORM as then data model is then hidden behind my ORM facade, making a robust domain model is more valuable to me.\n\nI think performance is a bit of a red herring; hibernate seems to perform nearly as well or better than hand-coded SQL (due to it's caching tiers), and it's easy to write a bad query in your sproc either way.\nThe most important criteria are probably the team's skillset and long-term database portability needs.\n",
"Well the SP's are already there. It doesn't make sense to can them really. I guess does it make sense to use a mapper with SP's? \n",
"\"I'm trying to drive in a nail. Should I use the heel of my shoe or a glass bottle?\"\nBoth Stored Procedures and ORMs are difficult and annoying to use for a developer (though not necessarily for a DBA or architect, respectively), because they incur a start-up cost and higher maintenance cost that doesn't guarantee a pay-off. \nBoth will pay off well if the requirements aren't expected to change much over the lifespan of the system, but they will get in your way if you're building the system to discover the requirements in the first place.\nStraight-coded SQL or quasi-ORM like LINQ and ActiveRecord is better for build-to-discover projects (which happen in the enterprise a lot more than the PR wants you to think).\nStored Procedures are better in a language-agnostic environment, or where fine-grained control over permissions is required. They're also better if your DBA has a better grasp of the requirements than your programmers.\nFull-blown ORMs are better if you do Big Design Up Front, use lots of UML, want to abstract the database back-end, and your architect has a better grasp of the requirements than either your DBA or programmers.\nAnd then there's option #4: Use all of them. A whole system is not usually just one program, and while many programs may talk to the same database, they could each use whatever method is appropriate both for the program's specific task, and for its level of maturity. That is: you start with straight-coded SQL or LINQ, then mature the program by refactoring in ORM and Stored Procedures where you see they make sense.\n"
] |
[
4,
4,
3,
3,
2,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"sql",
"stored_procedures"
] |
stackoverflow_0000082882_sql_stored_procedures.txt
|