content (string, 86 to 88.9k chars) | title (string, 0 to 150 chars) | question (string, 1 to 35.8k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30 to 130 chars)
---|---|---|---|---|---|---|---|---
Q:
Page transition effects in Safari?
How can I add page transition effects like IE's to web pages in Safari?
A:
You could check out this example: http://sachiniscool.blogspot.com/2006/01/implementing-page-transitions-in.html. It describes how to emulate page transitions in Firefox using AJAX and CSS. The same method works for Safari as well. The code below is taken from that page and slightly formatted:
var xmlhttp;
var timerId = 0;
var op = 1;

function getPageFx() {
    url = "/transpage2.html";
    if (window.XMLHttpRequest) {
        xmlhttp = new XMLHttpRequest()
        xmlhttp.onreadystatechange=xmlhttpChange
        xmlhttp.open("GET",url,true)
        xmlhttp.send(null)
    } else getPageIE();
}

function xmlhttpChange() {
    // if xmlhttp shows "loaded"
    if (xmlhttp.readyState == 4) {
        // if "OK"
        if (xmlhttp.status == 200) {
            if (timerId != 0)
                window.clearTimeout(timerId);
            timerId = window.setTimeout("trans();",100);
        } else {
            alert(xmlhttp.status)
        }
    }
}

function trans() {
    op -= .1;
    document.body.style.opacity = op;
    if(op < .4) {
        window.clearTimeout(timerId);
        timerId = 0; document.body.style.opacity = 1;
        document.open();
        document.write(xmlhttp.responseText);
        document.close();
        return;
    }
    timerId = window.setTimeout("trans();",100);
}

function getPageIE() {
    window.location.href = "transpage2.html";
}
A:
Check out Scriptaculous. Avoid IE-Only JS if that's what you are referring to (no idea what kind of effect you mean).
|
Page transition effects in Safari?
|
How can I add page transition effects like IE's to web pages in Safari?
|
[
"You could check out this example: http://sachiniscool.blogspot.com/2006/01/implementing-page-transitions-in.html. It describes how to emulate page transitions in Firefox using AJAX and CSS. The same method works for Safari as well. The code below is taken from that page and slightly formatted:\nvar xmlhttp;\nvar timerId = 0;\nvar op = 1;\n\nfunction getPageFx() {\n url = \"/transpage2.html\";\n if (window.XMLHttpRequest) {\n xmlhttp = new XMLHttpRequest()\n xmlhttp.onreadystatechange=xmlhttpChange\n xmlhttp.open(\"GET\",url,true)\n xmlhttp.send(null)\n } else getPageIE();\n}\n\nfunction xmlhttpChange() {\n// if xmlhttp shows \"loaded\"\n if (xmlhttp.readyState == 4) {\n // if \"OK\"\n if (xmlhttp.status == 200) {\n if (timerId != 0)\n window.clearTimeout(timerId);\n timerId = window.setTimeout(\"trans();\",100);\n } else {\n alert(xmlhttp.status)\n }\n }\n}\n\nfunction trans() {\n op -= .1;\n document.body.style.opacity = op;\n if(op < .4) {\n window.clearTimeout(timerId);\n timerId = 0; document.body.style.opacity = 1;\n document.open();\n document.write(xmlhttp.responseText);\n document.close();\n return;\n }\n timerId = window.setTimeout(\"trans();\",100);\n}\n\nfunction getPageIE() {\n window.location.href = \"transpage2.html\";\n}\n\n",
"Check out Scriptaculous. Avoid IE-Only JS if that's what you are referring to (no idea what kind of effect you mean).\n"
] |
[
2,
1
] |
[] |
[] |
[
"safari"
] |
stackoverflow_0000101877_safari.txt
|
Q:
Resolving incompatibilities between the Spring.NET and NHibernate assemblies
I am trying to develop a .NET Web Project using NHibernate and Spring.NET, but I'm stuck. Spring.NET seems to depend on different versions of the NHibernate assemblies (maybe it needs 1.2.1.4000 and my NHibernate version is 1.2.0.4000).
I had originally solved similar problems using the "bindingRedirect" tag, but now even that stopped working.
Is there any simple solution to resolve these inter-library relations?
A:
I ran into this too; frustrated, I just grabbed the Spring source and compiled it against the latest NHibernate to make it go away forever. Not sure if that's an option for you, but the 10 minutes that took seems to have saved me a lot of time overall.
Here's the SourceForge link for the Spring Source for all versions: Spring Source
A:
Spring.Net is open source isn't it? Why don't you just download the source, update the reference to the same version of NHibernate you are using and recompile?
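For reference, a correctly formed binding redirect usually resolves exactly this kind of version mismatch without recompiling. Below is a sketch of the web.config entry for the versions mentioned in the question; the publicKeyToken value is a placeholder and must be replaced with the token of the NHibernate assembly actually deployed (it can be read with sn -T NHibernate.dll).
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Replace the token below with the real one from your NHibernate.dll -->
        <assemblyIdentity name="NHibernate" publicKeyToken="PUT_REAL_TOKEN_HERE" culture="neutral" />
        <!-- Redirect the version Spring.NET was built against to the version you deploy -->
        <bindingRedirect oldVersion="1.2.1.4000" newVersion="1.2.0.4000" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>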
|
Resolving incompatibilities between the Spring.NET and NHibernate assemblies
|
I am trying to develop a .NET Web Project using NHibernate and Spring.NET, but I'm stuck. Spring.NET seems to depend on different versions of the NHibernate assemblies (maybe it needs 1.2.1.4000 and my NHibernate version is 1.2.0.4000).
I had originally solved similar problems using the "bindingRedirect" tag, but now even that stopped working.
Is there any simple solution to resolve these inter-library relations?
|
[
"I too ran into this, frustrated I just grabbed the Spring source and compiled it against the latest NHibernate to make it go away forever. Not sure if that's an option for you but the 10 minutes that took seems to have saved me a lot of time overall.\nHere's the SourceForge link for the Spring Source for all versions: Spring Source\n",
"Spring.Net is open source isn't it? Why don't you just download the source, update the reference to the same version of NHibernate you are using and recompile?\n"
] |
[
10,
2
] |
[] |
[] |
[
".net",
"assemblies",
"nhibernate",
"spring.net"
] |
stackoverflow_0000101974_.net_assemblies_nhibernate_spring.net.txt
|
Q:
Best Way To Format An HTML Email?
I am implementing a comment control that allows a person to select comments and have them sent to specified departments. The email needs to be formatted in a specific way, and I was wondering what the best way to do this would be.
Should I just hard code all of the style information into one massive method, or should I try and create a separate file and read it in, and then replace certain tags with the relevant information?
A:
Find and use some kind of template library, if possible. This will make each email a template which will then be much easier to maintain than the hardcoded form.
A:
Campaign monitor has some great, well-tested free templates:
http://www.campaignmonitor.com/templates/
Make sure whatever you use will display well in all clients.
A great guide:
http://www.campaignmonitor.com/blog/archives/2008/05/2008_email_design_guidelines.html
A:
In addition to using some sort of template, as tedious as it is, inline styles are the most cross-client compatible way of styling HTML emails. Not every email client will fetch an external stylesheet and many don't do so well with an embedded style section.
That being the case, I would choose a fairly simple set of style rules for the email in order to ensure that it looks the same in different email clients, and try not to rely too heavily on images as many clients will require that extra click to show content.
A:
I would use a template approach. It wouldn't be hard to create a simple regex template system, replacing something like #somevar# with the value for 'somevar'. You could also use a premade template system, like Smarty for PHP. I think that would be the cleanest approach.
Alex
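As an illustration of the token-replacement idea described above, here is a minimal C# sketch; the template file name and token names are made up for the example:
using System;
using System.Collections.Generic;
using System.IO;

class EmailTemplateDemo
{
    // Replaces tokens of the form #NAME# with the supplied values.
    static string Render(string template, IDictionary<string, string> values)
    {
        foreach (var pair in values)
            template = template.Replace("#" + pair.Key + "#", pair.Value);
        return template;
    }

    static void Main()
    {
        // Hypothetical template file containing markup such as <p>Dear #FIRST_NAME#,</p>
        string template = File.ReadAllText("CommentEmailTemplate.html");
        string body = Render(template, new Dictionary<string, string>
        {
            { "FIRST_NAME", "Alice" },
            { "COMMENT_TEXT", "Please review the selected comments." }
        });
        Console.WriteLine(body);
    }
}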
A:
I've used XSLT templates in the past to format emails. Generally, emails are best constructed using tables and inline CSS. Note that Outlook 2007 does not support background images :(
A:
Definitely use templates. I have done it with text templates using custom tags like so:
<p>Dear |FIRST_NAME|,
But I really cannot recommend this; it is a world of pain. The second time I did it (an HTML email appender for log4net) I used an XSLT to transform the object (in this case a log4net message) into an HTML email. Much neater.
Note that certain clients (e.g. Lotus Notes) do not support XHTML, so use plain old HTML 1.0, with no CSS, and you should be OK.
|
Best Way To Format An HTML Email?
|
I am implementing a comment control that allows a person to select comments and have them sent to specified departments. The email needs to be formatted in a specific way, and I was wondering what the best way to do this would be.
Should I just hard code all of the style information into one massive method, or should I try and create a separate file and read it in, and then replace certain tags with the relevant information?
|
[
"Find and use some kind of template library, if possible. This will make each email a template which will then be much easier to maintain than the hardcoded form.\n",
"Campaign monitor has some great, well-tested free templates:\nhttp://www.campaignmonitor.com/templates/\nMake sure whatever you use will display well in all clients. \nA great guide:\nhttp://www.campaignmonitor.com/blog/archives/2008/05/2008_email_design_guidelines.html\n",
"In addition to using some sort of template, as tedious as it is, inline styles are the most cross-client compatible way of styling HTML emails. Not every email client will fetch an external stylesheet and many don't do so well with an embedded style section. \nThat being the case, I would choose a fairly simple set of style rules for the email in order to ensure that it looks the same in different email clients and try not to rely too heavily on images as many client will require that extra click to show content.\n",
"I would use a template approach. It wouldn't he hard to create a simple regex template system, replacing something like #somevar# with the value for 'somevar'. You could also use a premade template system, like Smarty for PHP. I think that would be the cleanest approach.\nAlex\n",
"I've used XLST templates in the past to format emails. Generally emails are best constructed using tables and inline CSS. Note that outlook 2007 does not support background images :(\n",
"Definitely use templates. I have done it with text templates using custom tags like so:\n<p>Dear |FIRST_NAME|,\n\nBut I really cannot recommend this; it is a world of pain. The second time I did it (an html email appender for log4net) I used an xslt to transform the object (in this case a log4net message) into an html email. Much neater.\nNote that certain clients (e.g. Lotus Notes) do not support XHTML, so use plain old HTML 1.0, with no css, and you should be ok.\n"
] |
[
6,
5,
3,
0,
0,
0
] |
[] |
[] |
[
"email",
"format",
"html"
] |
stackoverflow_0000101709_email_format_html.txt
|
Q:
Has anyone got NVelocity working with ASP.NET MVC Preview 5?
I'm guessing I need to implement an NVelocityViewEngine and NVelocityView - but before I do I wanted to check to see if anyone has already done this.
I can't see anything in the trunk for MVCContrib.
I've already seen the post below - I'm looking specifically for something which works with Preview 5:
Testing ScottGu: Alternate View Engines with ASP.NET MVC (NVelocity)
Otherwise I'll start writing one :)
A:
Personally I don't have a clue about NVelocity, but here is a link that might help you.
A:
There's an NVelocity implementation in MvcContrib. You need to reference the MvcContrib.Castle dll.
|
Has anyone got NVelocity working with ASP.NET MVC Preview 5?
|
I'm guessing I need to implement an NVelocityViewEngine and NVelocityView - but before I do I wanted to check to see if anyone has already done this.
I can't see anything in the trunk for MVCContrib.
I've already seen the post below - I'm looking specifically for something which works with Preview 5:
Testing ScottGu: Alternate View Engines with ASP.NET MVC (NVelocity)
Otherwise I'll start writing one :)
|
[
"Personally I don't have a clue about NVelocity, but here is a link that might help you.\n",
"There's an NVelocity implementation in MvcContrib. You need to reference the MvcContrib.Castle dll.\n"
] |
[
0,
0
] |
[] |
[] |
[
"asp.net_mvc",
"nvelocity"
] |
stackoverflow_0000101449_asp.net_mvc_nvelocity.txt
|
Q:
Missing label on Drupal 5 CCK single on/off checkbox
I'm creating a form using the Content Construction Kit (CCK) in Drupal5.
I've added several single on/off checkboxes but their associated labels are not being displayed. Help text is displayed underneath the checkboxes but this is not the desired behavior. To me the expected behavior is that the label would appear beside the checkboxes.
Any thoughts?
A:
Found the answer:
It turns out to be functionality provided by the CCK, but it's counterintuitive.
For single on/off checkboxes, Drupal will use the "on" label specified in the allowed values field:
0
1|This is my label
|
Missing label on Drupal 5 CCK single on/off checkbox
|
I'm creating a form using the Content Construction Kit (CCK) in Drupal5.
I've added several single on/off checkboxes but their associated labels are not being displayed. Help text is displayed underneath the checkboxes but this is not the desired behavior. To me the expected behavior is that the label would appear beside the checkboxes.
Any thoughts?
|
[
"Found the answer:\nIt turns out to be functionality provided by the CCK, but it's counterintuitive.\nFor single on/off checkboxes, drupal will use the on label specified in the allowed values field:\n0\n1|This is my label\n\n"
] |
[
5
] |
[] |
[] |
[
"cck",
"drupal"
] |
stackoverflow_0000102005_cck_drupal.txt
|
Q:
What's the Developer Express equivalent of System.Windows.Forms.LinkButton?
I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox.
Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1.
A:
The control is called the HyperLinkEdit. You have to adjust the properties to get it to behave like the System.Windows.Forms control like so:
control.BorderStyle = BorderStyles.NoBorder;
control.Properties.Appearance.BackColor = Color.Transparent;
control.Properties.AppearanceFocused.BackColor = Color.Transparent;
control.Properties.ReadOnly = true;
A:
You should probably just use the standard ASP.Net LinkButton, unless it's really missing something you need.
|
What's the Developer Express equivalent of System.Windows.Forms.LinkButton?
|
I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox.
Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1.
|
[
"The control is called the HyperLinkEdit. You have to adjust the properties to get it to behave like the System.Windows.Forms control like so:\n control.BorderStyle = BorderStyles.NoBorder;\n control.Properties.Appearance.BackColor = Color.Transparent;\n control.Properties.AppearanceFocused.BackColor = Color.Transparent;\n control.Properties.ReadOnly = true;\n\n",
"You should probably just use the standard ASP.Net LinkButton, unless it's really missing something you need.\n"
] |
[
3,
0
] |
[] |
[] |
[
"devexpress"
] |
stackoverflow_0000003625_devexpress.txt
|
Q:
Windows CD Burning API
We need to programmatically burn files to CD in a C/C++ Windows XP/Vista application we are developing using Borland's Turbo C++.
What is the simplest and best way to do this? We would prefer a native Windows API (that doesn't rely on MFC) so as not to rely on any third-party software/drivers, if one is available.
A:
We used the following:
Store files in the directory returned by GetBurnPath, then write using Burn. GetCDRecordableInfo is used to check when the CD is ready.
#include <stdio.h>
#include <imapi.h>
#include <windows.h>

struct MEDIAINFO {
    BYTE nSessions;
    BYTE nLastTrack;
    ULONG nStartAddress;
    ULONG nNextWritable;
    ULONG nFreeBlocks;
};
//==============================================================================
// Description: CD burning on Windows XP
//==============================================================================
#define CSIDL_CDBURN_AREA 0x003b
SHSTDAPI_(BOOL) SHGetSpecialFolderPathA(HWND hwnd, LPSTR pszPath, int csidl, BOOL fCreate);
SHSTDAPI_(BOOL) SHGetSpecialFolderPathW(HWND hwnd, LPWSTR pszPath, int csidl, BOOL fCreate);
#ifdef UNICODE
#define SHGetSpecialFolderPath SHGetSpecialFolderPathW
#else
#define SHGetSpecialFolderPath SHGetSpecialFolderPathA
#endif
//==============================================================================
// Interface IDiscMaster
const IID IID_IDiscMaster = {0x520CCA62,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};
const CLSID CLSID_MSDiscMasterObj = {0x520CCA63,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};

typedef interface ICDBurn ICDBurn;
// Interface ICDBurn
const IID IID_ICDBurn = {0x3d73a659,0xe5d0,0x4d42,{0xaf,0xc0,0x51,0x21,0xba,0x42,0x5c,0x8d}};
const CLSID CLSID_CDBurn = {0xfbeb8a05,0xbeee,0x4442,{0x80,0x4e,0x40,0x9d,0x6c,0x45,0x15,0xe9}};

MIDL_INTERFACE("3d73a659-e5d0-4d42-afc0-5121ba425c8d")
ICDBurn : public IUnknown
{
public:
    virtual HRESULT STDMETHODCALLTYPE GetRecorderDriveLetter(
        /* [size_is][out] */ LPWSTR pszDrive,
        /* [in] */ UINT cch) = 0;

    virtual HRESULT STDMETHODCALLTYPE Burn(
        /* [in] */ HWND hwnd) = 0;

    virtual HRESULT STDMETHODCALLTYPE HasRecordableDrive(
        /* [out] */ BOOL *pfHasRecorder) = 0;
};
//==============================================================================
// Description: Get burn pathname
// Parameters: pathname - must be at least MAX_PATH in size
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int GetBurnPath(char *path)
{
    ICDBurn* pICDBurn;
    int ret = 0;

    if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
        BOOL flag;
        if (pICDBurn->HasRecordableDrive(&flag) == S_OK) {
            if (SHGetSpecialFolderPath(0, path, CSIDL_CDBURN_AREA, 0)) {
                strcat(path, "\\");
            }
            else {
                ret = 1;
            }
        }
        else {
            ret = 2;
        }
        pICDBurn->Release();
    }
    else {
        ret = 3;
    }
    return ret;
}
//==============================================================================
// Description: Get CD pathname
// Parameters: pathname - must be at least 5 bytes in size
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int GetCDPath(char *path)
{
    ICDBurn* pICDBurn;
    int ret = 0;

    if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
        BOOL flag;
        WCHAR drive[5];
        if (pICDBurn->GetRecorderDriveLetter(drive, 4) == S_OK) {
            sprintf(path, "%S", drive);
        }
        else {
            ret = 1;
        }
        pICDBurn->Release();
    }
    else {
        ret = 3;
    }
    return ret;
}
//==============================================================================
// Description: Burn CD
// Parameters: None
// Returns: Non-zero for an error
// Notes: CoInitialize(0) must be called once in application
//==============================================================================
int Burn(void)
{
    ICDBurn* pICDBurn;
    int ret = 0;

    if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {
        if (pICDBurn->Burn(NULL) != S_OK) {
            ret = 1;
        }
        pICDBurn->Release();
    }
    else {
        ret = 2;
    }
    return ret;
}
//==============================================================================
bool GetCDRecordableInfo(long *FreeSpaceSize)
{
    bool Result = false;
    IDiscMaster *idm = NULL;
    IDiscRecorder *idr = NULL;
    IEnumDiscRecorders *pEnumDiscRecorders = NULL;
    ULONG cnt;
    long type;
    long mtype;
    long mflags;
    MEDIAINFO mi;

    try {
        CoCreateInstance(CLSID_MSDiscMasterObj, 0, CLSCTX_ALL, IID_IDiscMaster, (void**)&idm);
        idm->Open();
        idm->EnumDiscRecorders(&pEnumDiscRecorders);
        pEnumDiscRecorders->Next(1, &idr, &cnt);
        pEnumDiscRecorders->Release();

        idr->OpenExclusive();
        idr->GetRecorderType(&type);
        idr->QueryMediaType(&mtype, &mflags);
        idr->QueryMediaInfo(&mi.nSessions, &mi.nLastTrack, &mi.nStartAddress, &mi.nNextWritable, &mi.nFreeBlocks);
        idr->Release();

        idm->Close();
        idm->Release();
        Result = true;
    }
    catch (...) {
        Result = false;
    }

    if (Result == true) {
        Result = false;
        if (mtype == 0) {
            // No Media inserted
            Result = false;
        }
        else {
            if ((mflags & 0x04) == 0x04) {
                // Writable Media
                Result = true;
            }
            else {
                Result = false;
            }

            if (Result == true) {
                *FreeSpaceSize = (mi.nFreeBlocks * 2048);
            }
            else {
                *FreeSpaceSize = 0;
            }
        }
    }

    return Result;
}
}
A:
To complement the accepted answer, we added this helper function to programmatically change the burn directory on the fly, as this was a requirement of ours.
typedef HMODULE (WINAPI * SHSETFOLDERPATHA)( int , HANDLE , DWORD , LPCTSTR );

int SetBurnPath( char * cpPath )
{
    SHSETFOLDERPATHA pSHSetFolderPath;
    HANDLE hShell = LoadLibraryA( "shell32.dll" );
    if( hShell == NULL )
        return -2;

    DWORD dwOrdinal = 0x00000000 + 231;

    pSHSetFolderPath = (SHSETFOLDERPATHA)GetProcAddress( hShell, (LPCSTR)dwOrdinal );
    if( pSHSetFolderPath == NULL )
        return -3;

    if( pSHSetFolderPath( CSIDL_CDBURN_AREA, NULL, 0, cpPath ) == S_OK )
        return 0;

    return -1;
}
A:
Here is the information for IMAPI on the MSDN site: http://msdn.microsoft.com/en-us/library/aa939967.aspx
|
Windows CD Burning API
|
We need to programmatically burn files to CD in a C/C++ Windows XP/Vista application we are developing using Borland's Turbo C++.
What is the simplest and best way to do this? We would prefer a native Windows API (that doesn't rely on MFC) so as not to rely on any third-party software/drivers, if one is available.
|
[
"We used the following: \nStore files in the directory returned by GetBurnPath, then write using Burn. GetCDRecordableInfo is used to check when the CD is ready.\n#include <stdio.h>\n#include <imapi.h>\n#include <windows.h>\n\nstruct MEDIAINFO {\n BYTE nSessions;\n BYTE nLastTrack;\n ULONG nStartAddress;\n ULONG nNextWritable;\n ULONG nFreeBlocks;\n};\n//==============================================================================\n// Description: CD burning on Windows XP\n//==============================================================================\n#define CSIDL_CDBURN_AREA 0x003b\nSHSTDAPI_(BOOL) SHGetSpecialFolderPathA(HWND hwnd, LPSTR pszPath, int csidl, BOOL fCreate);\nSHSTDAPI_(BOOL) SHGetSpecialFolderPathW(HWND hwnd, LPWSTR pszPath, int csidl, BOOL fCreate);\n#ifdef UNICODE\n#define SHGetSpecialFolderPath SHGetSpecialFolderPathW\n#else\n#define SHGetSpecialFolderPath SHGetSpecialFolderPathA\n#endif\n//==============================================================================\n// Interface IDiscMaster\nconst IID IID_IDiscMaster = {0x520CCA62,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};\nconst CLSID CLSID_MSDiscMasterObj = {0x520CCA63,0x51A5,0x11D3,{0x91,0x44,0x00,0x10,0x4B,0xA1,0x1C,0x5E}};\n\ntypedef interface ICDBurn ICDBurn;\n// Interface ICDBurn\nconst IID IID_ICDBurn = {0x3d73a659,0xe5d0,0x4d42,{0xaf,0xc0,0x51,0x21,0xba,0x42,0x5c,0x8d}};\nconst CLSID CLSID_CDBurn = {0xfbeb8a05,0xbeee,0x4442,{0x80,0x4e,0x40,0x9d,0x6c,0x45,0x15,0xe9}};\n\nMIDL_INTERFACE(\"3d73a659-e5d0-4d42-afc0-5121ba425c8d\")\nICDBurn : public IUnknown\n{\npublic:\n virtual HRESULT STDMETHODCALLTYPE GetRecorderDriveLetter(\n /* [size_is][out] */ LPWSTR pszDrive,\n /* [in] */ UINT cch) = 0;\n\n virtual HRESULT STDMETHODCALLTYPE Burn(\n /* [in] */ HWND hwnd) = 0;\n\n virtual HRESULT STDMETHODCALLTYPE HasRecordableDrive(\n /* [out] */ BOOL *pfHasRecorder) = 0;\n};\n//==============================================================================\n// Description: Get burn pathname\n// Parameters: pathname - must be at least MAX_PATH in size\n// Returns: Non-zero for an error\n// Notes: CoInitialize(0) must be called once in application\n//==============================================================================\nint GetBurnPath(char *path)\n{\n ICDBurn* pICDBurn;\n int ret = 0;\n\n if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {\n BOOL flag;\n if (pICDBurn->HasRecordableDrive(&flag) == S_OK) {\n if (SHGetSpecialFolderPath(0, path, CSIDL_CDBURN_AREA, 0)) {\n strcat(path, \"\\\\\");\n }\n else {\n ret = 1;\n }\n }\n else {\n ret = 2;\n }\n pICDBurn->Release();\n }\n else {\n ret = 3;\n }\n return ret;\n}\n//==============================================================================\n// Description: Get CD pathname\n// Parameters: pathname - must be at least 5 bytes in size\n// Returns: Non-zero for an error\n// Notes: CoInitialize(0) must be called once in application\n//==============================================================================\nint GetCDPath(char *path)\n{\n ICDBurn* pICDBurn;\n int ret = 0;\n\n if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {\n BOOL flag;\n WCHAR drive[5];\n if (pICDBurn->GetRecorderDriveLetter(drive, 4) == S_OK) {\n sprintf(path, \"%S\", drive);\n }\n else {\n ret = 1;\n }\n pICDBurn->Release();\n }\n else {\n ret = 3;\n }\n return ret;\n}\n//==============================================================================\n// 
Description: Burn CD\n// Parameters: None\n// Returns: Non-zero for an error\n// Notes: CoInitialize(0) must be called once in application\n//==============================================================================\nint Burn(void)\n{\n ICDBurn* pICDBurn;\n int ret = 0;\n\n if (SUCCEEDED(CoCreateInstance(CLSID_CDBurn, NULL,CLSCTX_INPROC_SERVER,IID_ICDBurn,(LPVOID*)&pICDBurn))) {\n if (pICDBurn->Burn(NULL) != S_OK) {\n ret = 1;\n }\n pICDBurn->Release();\n }\n else {\n ret = 2;\n }\n return ret;\n}\n//==============================================================================\nbool GetCDRecordableInfo(long *FreeSpaceSize)\n{\n bool Result = false;\n IDiscMaster *idm = NULL;\n IDiscRecorder *idr = NULL;\n IEnumDiscRecorders *pEnumDiscRecorders = NULL;\n ULONG cnt;\n long type;\n long mtype;\n long mflags;\n MEDIAINFO mi;\n\n try {\n CoCreateInstance(CLSID_MSDiscMasterObj, 0, CLSCTX_ALL, IID_IDiscMaster, (void**)&idm);\n idm->Open();\n idm->EnumDiscRecorders(&pEnumDiscRecorders);\n pEnumDiscRecorders->Next(1, &idr, &cnt);\n pEnumDiscRecorders->Release();\n\n idr->OpenExclusive();\n idr->GetRecorderType(&type);\n idr->QueryMediaType(&mtype, &mflags);\n idr->QueryMediaInfo(&mi.nSessions, &mi.nLastTrack, &mi.nStartAddress, &mi.nNextWritable, &mi.nFreeBlocks);\n idr->Release();\n\n idm->Close();\n idm->Release();\n Result = true;\n }\n catch (...) {\n Result = false;\n }\n\n if (Result == true) {\n Result = false;\n if (mtype == 0) {\n // No Media inserted\n Result = false;\n }\n else {\n if ((mflags & 0x04) == 0x04) {\n // Writable Media\n Result = true;\n }\n else {\n Result = false;\n }\n\n if (Result == true) {\n *FreeSpaceSize = (mi.nFreeBlocks * 2048);\n }\n else {\n *FreeSpaceSize = 0;\n }\n }\n }\n\n return Result;\n}\n\n",
"To complement the accepted answer, we added this helper function to programatically change the burn directory on the fly as this was a requirement of ours.\ntypedef HMODULE (WINAPI * SHSETFOLDERPATHA)( int , HANDLE , DWORD , LPCTSTR );\n\nint SetBurnPath( char * cpPath )\n{\n SHSETFOLDERPATHA pSHSetFolderPath;\n HANDLE hShell = LoadLibraryA( \"shell32.dll\" );\n if( hShell == NULL )\n return -2;\n\n DWORD dwOrdinal = 0x00000000 + 231;\n\n pSHSetFolderPath = (SHSETFOLDERPATHA)GetProcAddress( hShell, (LPCSTR)dwOrdinal );\n if( pSHSetFolderPath == NULL )\n return -3;\n\n if( pSHSetFolderPath( CSIDL_CDBURN_AREA, NULL, 0, cpPath ) == S_OK )\n return 0;\n\n return -1;\n}\n\n",
"This is the information for IMAPI in MSDN site http://msdn.microsoft.com/en-us/library/aa939967.aspx\n"
] |
[
15,
4,
0
] |
[
"You should be able to use the shell's ICDBurn interface. Back in the XP day MFC didn't even have any classes for cd burning. I'll see if I can find some examples for you, but it's been a while since I looked at this.\n"
] |
[
-1
] |
[
"c",
"c++",
"cd_burning",
"windows"
] |
stackoverflow_0000082993_c_c++_cd_burning_windows.txt
|
Q:
Any other tools/plugins like VisualAssist that will change my life (MSVS)?
I was introduced to VisualAssist a few years ago and for me there's no going back. Are there any other tools I'm missing out on?
A:
If you're a vim user, ViEmu is indispensable. It's a plugin available for Visual Studio (SQL Server and Office as well, although it's sold separately) that transforms the editor into Vim.
Another plugin by the same company is Codekana. In its current incarnation, it spruces up code structure considerably, and makes reading code much more pleasurable. Based on several chats with the author, he's planning on growing it into other areas as well.
A:
BeyondCompare : Life-changing folder & file diff with many installable extensions for additional file types. Don't know what I'd do without it.
A:
There's a few things that get installed on every computer I use for development:
ExamDiff is the best light-weight diff program I've found.
Tortoise SVN is the best version control client
Perforce is a way to make your life worse when your company inflicts it upon you.
A:
Just after installing VisualAssist I go after WinMerge, which also significantly simplified my life.
A:
I tried Resharper for a while. It was great but too expensive for my taste and I could not get my employer to purchase it when the trial expired. You might take a look.
A:
I own all of these tools and use them on a regular basis.
Resharper
CodeRush/Refactor Pro!
NDepend
Gallio/MbUnit
|
Any other tools/plugins like VisualAssist that will change my life (MSVS)?
|
I was introduced to VisualAssist a few years ago and for me there's no going back. Are there any other tools I'm missing out on?
|
[
"If you're a vim user, ViEmu is indispensable. It's a plugin available for Visual Studio (SQL Server and Office as well, although it's sold separately) that transforms the editor into Vim.\nAnother plugin by the same company is Codekana. In its current incarnation, it spruces up code structure considerably, and makes reading code much more pleasurable. Based on several chats with the author, he's planning on growing it into other areas as well.\n",
"BeyondCompare : Life-changing folder & file diff with many installable extensions for additional file types. Don't know what I'd do without it.\n",
"There's a few things that get installed on every computer I use for development:\n\nExamDiff is the best light-weight diff program I've found.\nTortoise SVN is the best version control client\nPerforce is a way to make your life worse when your company inflicts it upon you.\n\n",
"Just after installing VisualAssist I go after WinMerge, which also significantly simplified my life.\n",
"I tried Resharper for a while. It was great but too expensive for my taste and I could not get my employer to purchase it when the trial expired. You might take a look.\n",
"I own all of these tools and use them on a regualar basis. \n\nResharper\nCodeRush/Refactor Pro!\nNDepend\nGallio/MbUnit\n\n"
] |
[
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"visual_assist"
] |
stackoverflow_0000088190_visual_assist.txt
|
Q:
Understanding the behaviour of inject used with a lambda in Ruby
I often plug pre-configured lambdas into enumerable methods like 'map', 'select' etc.
but the behavior of 'inject' seems to be different.
e.g. with
mult4 = lambda {|item| item * 4 }
then
(5..10).map &mult4
gives me
[20, 24, 28, 32, 36, 40]
However, if I make a 2-parameter lambda for use with an inject like so,
multL = lambda {|product, n| product * n }
I want to be able to say
(5..10).inject(2) &multL
since 'inject' has an optional single parameter for the initial value,
but that gives me ...
irb(main):027:0> (5..10).inject(2) &multL
LocalJumpError: no block given
from (irb):27:in `inject'
from (irb):27
However, if I stuff the '&multL' into a second parameter to inject, then it works.
irb(main):028:0> (5..10).inject(2, &multL)
=> 302400
My question is "why does that work and not the previous attempt?"
A:
So the reason that
(5..10).map &mult4
works and
(5..10).inject(2) &multL
doesn't is that ruby parens are implicit in the first case, so it really means
(5..10).map(&mult4)
if you wanted, for the second case you could use
(5..10).inject 2, &multL
The outside the parens trick only works for passing blocks to a method, not lambda objects.
|
Understanding the behaviour of inject used with a lambda in Ruby
|
I often plug pre-configured lambdas into enumerable methods like 'map', 'select' etc.
but the behavior of 'inject' seems to be different.
e.g. with
mult4 = lambda {|item| item * 4 }
then
(5..10).map &mult4
gives me
[20, 24, 28, 32, 36, 40]
However, if I make a 2-parameter lambda for use with an inject like so,
multL = lambda {|product, n| product * n }
I want to be able to say
(5..10).inject(2) &multL
since 'inject' has an optional single parameter for the initial value,
but that gives me ...
irb(main):027:0> (5..10).inject(2) &multL
LocalJumpError: no block given
from (irb):27:in `inject'
from (irb):27
However, if I stuff the '&multL' into a second parameter to inject, then it works.
irb(main):028:0> (5..10).inject(2, &multL)
=> 302400
My question is "why does that work and not the previous attempt?"
|
[
"So the reason that\n(5..10).map &mult4\n\nworks and \n(5..10).inject(2) &multL\n\ndoesn't is that ruby parens are implicit in the first case, so it really means\n(5..10).map(&mult4)\n\nif you wanted, for the second case you could use\n(5..10).inject 2, &multL\n\nThe outside the parens trick only works for passing blocks to a method, not lambda objects.\n"
] |
[
11
] |
[] |
[] |
[
"inject",
"lambda",
"ruby"
] |
stackoverflow_0000102165_inject_lambda_ruby.txt
|
Q:
Letting several assemblies access the same text file
I've got many assemblies/projects in the same C#/.NET solution. A setting needs to be saved by people using the web application GUI, and then a console app and some test projects need to access the same file. Where should I put the file and how do I access it?
I've tried using "AppDomain.CurrentDomain.BaseDirectory" but that ends up being different for my assemblies. Also, "System.Reflection.Assembly.Get*Assembly.Location" fails to give me what I need.
Maybe this isn't something I should put in a file, but rather in the database? But it feels so complicated doing that for a few lines of configuration.
A:
Put the file in
Path.Combine(
Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
"[Company Name]\[Application Suite]");
A:
Personally, I would be leveraging the database because the alternative is either a configuration headache or is more trouble than it's worth.
You could configure each application to point to the same file, but that becomes problematic if you want to move the file. Or you could write a service to manage the file and expose that to clients, but at this point you may as well just use the DB.
A:
Thought about storing it in the registry or in Isolated Storage? Not sure if multiple applications can share Isolated Storage or not, though.
A:
projects can have build events -- why not add a post-build event to copy the file to all required locations?
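As an illustration of the CommonApplicationData approach suggested in the first answer, here is a minimal C# sketch that any of the assemblies (web app, console app, tests) could share; the company/product folder names and file name are placeholders. Note that the web application's process identity may need write permission to that folder.
using System;
using System.IO;

static class SharedSettings
{
    // Every assembly computes the same absolute path, independent of its own BaseDirectory.
    private static readonly string SettingsPath = Path.Combine(
        Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            @"MyCompany\MyProduct"),
        "settings.txt");

    public static void Save(string contents)
    {
        Directory.CreateDirectory(Path.GetDirectoryName(SettingsPath));
        File.WriteAllText(SettingsPath, contents);
    }

    public static string Load()
    {
        return File.Exists(SettingsPath) ? File.ReadAllText(SettingsPath) : string.Empty;
    }
}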
|
Letting several assemblies access the same text file
|
I've got many assemblies/projects in the same C#/.NET solution. A setting needs to be saved by people using the web application GUI, and then a console app and some test projects need to access the same file. Where should I put the file and how do I access it?
I've tried using "AppDomain.CurrentDomain.BaseDirectory" but that ends up being different for my assemblies. Also, "System.Reflection.Assembly.Get*Assembly.Location" fails to give me what I need.
Maybe this isn't something I should put in a file, but rather in the database? But it feels so complicated doing that for a few lines of configuration.
|
[
"Put the file in \n Path.Combine(\n Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),\n \"[Company Name]\\[Application Suite]\");\n\n",
"Personally, I would be leveraging the database because the alternative is either a configuration headache or is more trouble than it's worth.\nYou could configure each application to point to the same file, but that becomes problematic if you want to move the file. Or you could write a service to manage the file and expose that to clients, but at this point you may as well just use the DB.\n",
"Thought about storing it in the registry or in Isolated Storage? Not sure if multiple applications can share Isolated Storage or not, though.\n",
"projects can have build events -- why not add a post-build event to copy the file to all required locations?\n"
] |
[
3,
1,
0,
0
] |
[] |
[] |
[
".net",
"c#",
"file",
"file_io",
"filesystems"
] |
stackoverflow_0000101238_.net_c#_file_file_io_filesystems.txt
|
Q:
How to generate one texture from N textures?
Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
A:
Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.
A:
Generate texture-mapping coords for your geometry
Generate a big blank texture
For each pixel
Figure out the point on the geometry it maps to
Figure out the pixel in each image that projects onto this point
Colour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal is facing the corresponding camera and ignoring those images where there's another piece of geometry between the point and the camera
Apply your completed texture to the geometry
A:
I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.
A:
Have a look at cubemapping. It may be useful. You may want to project another convex shape to the cube and use the resulting texture as a conventional cubemap texture.
A:
Google up "shadow mapping", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.
|
How to generate one texture from N textures?
|
Let's say I have N pictures of an object, taken from N known positions. I also have the 3D geometry of the object, and I know all the characteristics of both the camera and the lens.
I want to generate a unique giant picture from the N pictures I have, so that it can be mapped/projected onto the object surface.
Does anybody know where to start? Articles, references, books?
|
[
"Not sure if it helps you directly, but these guys have some amazing demos of some related techniques: http://grail.cs.washington.edu/projects/videoenhancement/videoEnhancement.htm.\n",
"\nGenerate texture-mapping coords for your geometry\nGenerate a big blank texture\nFor each pixel\n\n\nFigure out the point on the geometry it maps to\nFigure out the pixel in each image that projects onto this point\nColour the pixel with a weighted blend of all these pixels, weighted by how much the surface normal is facing the corresponding camera and ignoring those images where there's another piece of geometry between the point and the camera\n\nApply your completed texture to the geometry\n\n",
"I'd suspect that this can be done using some variation of projection maps mixed with image reconstruction.\n",
"Have a look at cubemapping. It may be useful. You may want to project another convex shape to the cube and use the resulting texture as a conventional cubemap texture.\n",
"Google up \"shadow mapping\", as the same problem is solved during that process (images of the scene as seen from some known points are projected onto the 3D geometry in the scene). The problem is well-understood and there is plenty of code.\n"
] |
[
1,
1,
0,
0,
0
] |
[] |
[] |
[
"3d",
"algorithm",
"textures"
] |
stackoverflow_0000101735_3d_algorithm_textures.txt
|
Q:
SVN mark major version
Sorry, I'm new to SVN and I looked around a little for this. How do you mark a major version in SVN, kind of like setting up a restore point? Right now I have just set up my server and added all my files; I've been intermittently committing different changes. When I have something in a stable state, is there a way to mark it so I can easily revert back to it if necessary?
A:
Sounds like you're looking for tags.
Tags in the Subversion book
"A tag is just a “snapshot” of a project in time"
A:
The typical way is to create a 'tag' directory in the root of your repository and copy the entire trunk over to that directory. (Copying is cheap in Subversion because it's just adding references to specific revisions of existing files.)
So you might say:
svn cp http://svn.example.com/trunk/ http://svn.example.com/tags/major-revision-01/
See the Subversion book for more information, particularly the tags chapter.
A:
All we do is we create a branch. We have the standard root level directories: trunk, tags, releases, branches.
The main thing to remember is that all branching is simply like creating a copy, and all branches off of the trunk are just like creating a copy (except that it is a shallow copy, only copying the deltas).
For us, all development is done in the trunk. If someone is doing a major rework, they tend to put it in branches. Major releases are put into releases, and all other labels and items we want to tag are put in the tags folder.
For our releases, we have the following directory structure:
repository
+--trunk
+--releases
   +--v1.0
   +--v1.1
   +--v1.4
   +--v2.0
+--branches
+--tags
A:
Check tags
A:
If you are using the svn standard structure you should have a branches, tags, and trunk folder.
What you are looking to do is to make a copy of the current trunk to a folder in tags.
Example command line:
svn copy mysvnurl/myproject/trunk mysvnurl/myproject/tags/majorrelease_01
A:
Try reading this page on svn copy. Basically you just need to do an svn copy.
A:
In CVS, this was called a "tag". SVN doesn't use a separate mechanism for tags, it just creates a branch. So just create a new branch, and give it a descriptive name like "release-1.2".
Alternatively, the lazy way would be to write down the current repository revision number in a text file ;)
A:
Here's another helpful idea. Use CruiseControl (or CruiseControl.NET) to automatically label at a fixed interval (e.g. nightly, or every 15 minutes).
Get A Build Process Now!
|
SVN mark major version
|
Sorry, I'm new to SVN and I looked around a little for this. How do you mark a major version in SVN, kind of like setting up a restore point? Right now I have just set up my server and added all my files; I've been intermittently committing different changes. When I have something in a stable state, is there a way to mark it so I can easily revert back to it if necessary?
|
[
"Sounds like you're looking for tags.\nTags in the Subversion book\n\"A tag is just a “snapshot” of a project in time\"\n",
"The typical way is to create a 'tag' directory in the root of your repository and copy the entire trunk over to that directory. (Copying is cheap in Subversion because it's just adding references to specific revisions of existing files.) \nSo you might say:\nsvn cp http://svn.example.com/trunk/ http://svn.example.com/tags/major-revision-01/\n\nSee the Subversion book for more information, particularly the tags chapter.\n",
"All we do is we create a branch. We have the standard root level directories: trunk, tags, releases, branches. \nThe main thing to remember is that all branching is simply like creating a copy, and all branches off of the trunk are just like creating a copy (except that it is a shallow copy, only copying the deltas).\nFor us, all development is done in the trunk. If someone is doing a major rework then tend to put it in branches. Major releases are put into releases and all other labels and items we want to tag are put in the tags folder.\nFor our releases, we have the following directory structure:\nrepository\n+--trunk\n+--releases\n +--v1.0\n +--v1.1\n +--v1.4\n +--v2.0\n+--branches\n+--tags\n\n",
"Check tags\n",
"If you are using the svn standard structure you should have a branches, tags, and trunk folder. \nWhat you are looking to do is to make a copy of the current trunk to a folder in tags.\nExample command line:\nsvn copy mysvnurl/myproject/trunk mysvnurl/myproject/tags/majorrelease_01\n",
"try reading this page svn copy . Basically you just need to do a svn copy\n",
"In CVS, this was called a \"tag\". SVN doesn't use a separate mechanism for tags, it just creates a branch. So just create a new branch, and give it a descriptive name like \"release-1.2\".\nAlternatively, the lazy way would be to write down the current repository revision number in a text file ;)\n",
"Here's another helpful idea. Use CruiseControl (or CruiseControl.NET) to automatically label at a fixed interval (i.e. nightly, or every 15 minutes)\nGet A Build Process Now!\n"
] |
[
16,
11,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"svn",
"version_control"
] |
stackoverflow_0000102128_svn_version_control.txt
|
Q:
How to determine the amount of memory used by unmanaged code
I'm working against a large COM library (ArcObjects) and I'm trying to pinpoint a memory leak.
What is the most reliable way to determine the amount of memory used by unmanaged code/objects?
What performance counters can be used?
A:
Use UMDH to get a snapshot of your memory heap; run it twice, then use the tools to show all the allocations that occurred between the two snapshots. This is great in helping you track down which areas might be leaking.
This article explains it in simple terms.
I suggest you use a CComPtr<> to wrap your objects, not forgetting that you must release it before passing it into a function that returns a raw pointer reference (as the cast operator will be used to get the pointer that then gets overwritten)
A:
The 'Virtual Bytes' counter for a process represents the total amount of memory the process has reserved. If you have a memory leak then this will trend upwards.
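For reference, that counter can also be sampled from code so the trend can be logged while exercising the ArcObjects calls; a minimal C# sketch (the process instance name "MyApp" is a placeholder):
using System;
using System.Diagnostics;

class VirtualBytesSample
{
    static void Main()
    {
        // "Process" category, "Virtual Bytes" counter, instance name = process name without .exe
        using (PerformanceCounter counter =
            new PerformanceCounter("Process", "Virtual Bytes", "MyApp"))
        {
            float bytes = counter.NextValue();
            Console.WriteLine("Virtual Bytes: {0:N0}", bytes);
        }
    }
}
Sampling this periodically makes an unmanaged leak show up as a steadily climbing value.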
|
How to determine the amount of memory used by unmanaged code
|
I'm working against a large COM library (ArcObjects) and I'm trying to pinpoint a memory leak.
What is the most reliable way to determine the amount of memory used by unmanaged code/objects?
What performance counters can be used?
|
[
"Use UMDH to get snapshot of your memory heap, run it twice then use the tools to show all the allocations that occurred between the 2 snapshots. This is great in helping you track down which areas might be leaking.\nThis article explains in in simple terms.\nI suggest you use a CComPtr<> to wrap your objects, not forgetting that you must release it before passing it into a function that returns a raw pointer reference (as the cast operator will be used to get the pointer that then gets overwritten)\n",
"The 'Virtual Bytes' counter for a process represents the total amount of memory the process has reserved. If you have a memory leak then this will trend upwards.\n"
] |
[
2,
0
] |
[] |
[] |
[
"arcobjects",
"com",
"unmanaged"
] |
stackoverflow_0000102222_arcobjects_com_unmanaged.txt
|
Q:
How can I determine the transfer rate?
Can I determine from an ASP.NET application the transfer rate, i.e. how many KB per second are transferred?
A:
You can set some performance counters on ASP.NET.
See here for some examples.
Some specific ones that may help you figure out what you want are:
Request Bytes Out Total
The total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers.
Requests/Sec
The number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on).
Requests Total
The total number of requests since the service was started.
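As a sketch of how those counters could be read from code to get an approximate KB-per-second figure (counter category and instance names vary by machine and ASP.NET version, so verify them in perfmon first):
using System;
using System.Diagnostics;
using System.Threading;

class TransferRateSample
{
    static void Main()
    {
        // "__Total__" aggregates all ASP.NET applications on the machine.
        using (PerformanceCounter bytesOut =
            new PerformanceCounter("ASP.NET Applications", "Request Bytes Out Total", "__Total__"))
        {
            float first = bytesOut.NextValue();
            Thread.Sleep(5000);
            float second = bytesOut.NextValue();
            Console.WriteLine("~{0:N1} KB/s sent", (second - first) / 1024f / 5f);
        }
    }
}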
A:
There are a number of debugging tools you can use to check this at the browser. It will of course vary by page, cache settings, server load, network connection speed, etc.
Check out http://www.fiddlertool.com/fiddler/
Or if you are using Firefox, the FireBug add-in http://addons.mozilla.org/en-US/firefox/addon/1843
|
How can I determine the transfer rate?
|
Can I determine from an ASP.NET application the transfer rate, i.e. how many KB per second are transferred?
|
[
"You can set some performance counters on ASP.NET.\nSee here for some examples.\nSome specific ones that may help you figure out what you want are:\nRequest Bytes Out Total\nThe total size, in bytes, of responses sent to a client. This does not include standard HTTP response headers.\nRequests/Sec\nThe number of requests executed per second. This represents the current throughput of the application. Under constant load, this number should remain within a certain range, barring other server work (such as garbage collection, cache cleanup thread, external server tools, and so on).\nRequests Total\nThe total number of requests since the service was started.\n",
"There are a number of debugging tools you can use to check this at the browser. It will of course vary by page, cache settings, server load, network connection speed, etc.\nCheck out http://www.fiddlertool.com/fiddler/\nOr if you are using Firefox, the FireBug add-in http://addons.mozilla.org/en-US/firefox/addon/1843\n"
] |
[
3,
0
] |
[] |
[] |
[
"asp.net"
] |
stackoverflow_0000102206_asp.net.txt
|
Q:
Is conditional compilation a valid mock/stub strategy for unit testing?
In a recent question on stubbing, many answers suggested C# interfaces or delegates for implementing stubs, but one answer suggested using conditional compilation, retaining static binding in the production code. This answer was modded -2 at the time of reading, so at least 2 people really thought this was a wrong answer. Perhaps misuse of DEBUG was the reason, or perhaps use of fixed value instead of more extensive validation. But I can't help wondering:
Is the use of conditional compilation an inappropriate technique for implementing unit test stubs? Sometimes? Always?
Thanks.
Edit-add: I'd like to add an example as a thought experiment:
class Foo {
public Foo() { .. }
private DateTime Now {
get {
#if UNITTEST_Foo
return Stub_DateTime.Now;
#else
return DateTime.Now;
#endif
}
}
// .. rest of Foo members
}
comparing to
interface IDateTimeStrategy {
DateTime Now { get; }
}
class ProductionDateTimeStrategy : IDateTimeStrategy {
public DateTime Now { get { return DateTime.Now; } }
}
class Foo {
public Foo() : Foo(new ProductionDateTimeStrategy()) {}
public Foo(IDateTimeStrategy s) { datetimeStrategy = s; .. }
private IDateTime_Strategy datetimeStrategy;
private DateTime Now { get { return datetimeStrategy.Now; } }
}
Which allows the outgoing dependency on "DateTime.Now" to be stubbed through a C# interface. However, we've now added a dynamic dispatch call where static would suffice, the object is larger even in the production version, and we've added a new failure path for Foo's constructor (allocation can fail).
Am I worrying about nothing here? Thanks for the feedback so far!
A:
Try to keep production code separate from test code. Maintain different folder hierarchies.. different solutions/projects.
Unless.. you're in the world of legacy C++ Code. Here anything goes.. if conditional blocks help you get some of the code testable and you see a benefit.. By all means do it. But try to not let it get messier than the initial state. Clearly comment and demarcate conditional blocks. Proceed with caution. It is a valid technique for getting legacy code under a test harness.
A:
I think it lessens the clarity for people reviewing the code. You shouldn't have to remember that there's a conditional tag around specific code to understand the context.
A:
No, this is terrible. It leaks test code into your production code (even if it's conditioned off).
Bad bad.
A:
Test code should be obvious and not inter-mixed in the same blocks as the tested code.
This is pretty much the same reason you shouldn't write
if (globals.isTest)
A:
I thought of another reason this was terrible:
Many times you mock/stub something, you want its methods to return different results depending on what you're testing. This either precludes that or makes it awkward as all heck.
A:
It might be useful as a tool to lean on as you refactor to testability in a large code base. I can see how you might use such techniques to enable smaller changes and avoid a "big bang" refactoring. However I would worry about leaning too hard on such a technique and would try to ensure that such tricks didn't live too long in the code base otherwise you risk making the application code very complex and hard to follow.
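To make concrete why the interface-based version from the question is generally preferred over conditional compilation, here is a small self-contained sketch; the ExpiryChecker class and the NUnit test are illustrative and not taken from the question:
using System;
using NUnit.Framework;

public interface IDateTimeStrategy
{
    DateTime Now { get; }
}

// Test-only stub: no #if blocks needed in the class under test.
public class FixedDateTimeStrategy : IDateTimeStrategy
{
    private readonly DateTime fixedNow;
    public FixedDateTimeStrategy(DateTime fixedNow) { this.fixedNow = fixedNow; }
    public DateTime Now { get { return fixedNow; } }
}

// Hypothetical consumer, standing in for Foo (whose members are elided in the question).
public class ExpiryChecker
{
    private readonly IDateTimeStrategy clock;
    public ExpiryChecker(IDateTimeStrategy clock) { this.clock = clock; }
    public bool IsExpired(DateTime expiry) { return clock.Now > expiry; }
}

[TestFixture]
public class ExpiryCheckerTests
{
    [Test]
    public void IsExpired_ReturnsTrue_WhenNowIsPastExpiry()
    {
        var checker = new ExpiryChecker(new FixedDateTimeStrategy(new DateTime(2008, 9, 20)));
        Assert.IsTrue(checker.IsExpired(new DateTime(2008, 9, 19)));
    }
}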
|
Is conditional compilation a valid mock/stub strategy for unit testing?
|
In a recent question on stubbing, many answers suggested C# interfaces or delegates for implementing stubs, but one answer suggested using conditional compilation, retaining static binding in the production code. This answer was modded -2 at the time of reading, so at least 2 people really thought this was a wrong answer. Perhaps misuse of DEBUG was the reason, or perhaps use of fixed value instead of more extensive validation. But I can't help wondering:
Is the use of conditional compilation an inappropriate technique for implementing unit test stubs? Sometimes? Always?
Thanks.
Edit-add: I'd like to add an example as a thought experiment:
class Foo {
public Foo() { .. }
private DateTime Now {
get {
#if UNITTEST_Foo
return Stub_DateTime.Now;
#else
return DateTime.Now;
#endif
}
}
// .. rest of Foo members
}
comparing to
interface IDateTimeStrategy {
DateTime Now { get; }
}
class ProductionDateTimeStrategy : IDateTimeStrategy {
public DateTime Now { get { return DateTime.Now; } }
}
class Foo {
public Foo() : Foo(new ProductionDateTimeStrategy()) {}
public Foo(IDateTimeStrategy s) { datetimeStrategy = s; .. }
private IDateTime_Strategy datetimeStrategy;
private DateTime Now { get { return datetimeStrategy.Now; } }
}
Which allows the outgoing dependency on "DateTime.Now" to be stubbed through a C# interface. However, we've now added a dynamic dispatch call where static would suffice, the object is larger even in the production version, and we've added a new failure path for Foo's constructor (allocation can fail).
Am I worrying about nothing here? Thanks for the feedback so far!
|
[
"Try to keep production code separate from test code. Maintain different folder hierarchies.. different solutions/projects. \nUnless.. you're in the world of legacy C++ Code. Here anything goes.. if conditional blocks help you get some of the code testable and you see a benefit.. By all means do it. But try to not let it get messier than the initial state. Clearly comment and demarcate conditional blocks. Proceed with caution. It is a valid technique for getting legacy code under a test harness.\n",
"I think it lessens the clarity for people reviewing the code. You shouldn't have to remember that there's a conditional tag around specific code to understand the context.\n",
"No this is terrible. It leaks test into your production code (even if its conditioned off)\nBad bad.\n",
"Test code should be obvious and not inter-mixed in the same blocks as the tested code.\nThis is pretty much the same reason you shouldn't write\nif (globals.isTest)\n\n",
"I thought of another reason this was terrible:\nMany times you mock/stub something, you want its methods to return different results depending on what you're testing. This either precludes that or makes it awkward as all heck.\n",
"It might be useful as a tool to lean on as you refactor to testability in a large code base. I can see how you might use such techniques to enable smaller changes and avoid a \"big bang\" refactoring. However I would worry about leaning too hard on such a technique and would try to ensure that such tricks didn't live too long in the code base otherwise you risk making the application code very complex and hard to follow. \n"
] |
[
3,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"conditional_compilation",
"stub",
"unit_testing"
] |
stackoverflow_0000097114_conditional_compilation_stub_unit_testing.txt
|
Q:
Firefox, saved passwords, and the change password dialogue
If a user saves the password on the login form, FF3 puts the saved password into the change password dialogue on the profile page, even though it does not have the same input name as the login. How can I prevent this?
A:
Try using autocomplete="off" as an attribute of the text box. I've used it in the past to stop credit card details being stored by the browser, but I don't know if it works with passwords. e.g. print("<input type="text" name="cc" autocomplete="off" />");
A:
I think that FF autofills fields based on the "name" attribute of the field so that if the password box has the name="password" and the change password box has the same it will fill in the same password in both places.
Try changing the name attribute of one of the boxes.
A:
Some sites have 3 inputs for changing a password, one for re-entering the current password and two for entering the new password. If the re-entering input was first and got auto-filled, it wouldn't be a problem.
A:
Go in tools->page properties->security on the page you wish to modify.
|
Firefox, saved passwords, and the change password dialogue
|
If a user saves the password on the login form, ff3 is putting the saved password in the change password dialoge on the profile page, even though its not the same input name as the login. how can I prevent this?
|
[
"Try using autocomplete=\"off\" as an attribute of the text box. I've used it in the past to stop credit card details being stored by the browser but i dont know if it works with passwords. e.g. print(\"<input type=\"text\" name=\"cc\" autocomplete=\"off\" />\");\n",
"I think that FF autofills fields based on the \"name\" attribute of the field so that if the password box has the name=\"password\" and the change password box has the same it will fill in the same password in both places.\nTry changing the name attribute of one of the boxes.\n",
"Some sites have 3 inputs for changing a password, one for re-entering the current password and two for entering the new password. If the re-entering input was first and got auto-filled, it wouldn't be a problem. \n",
"Go in tools->page properties->security on the page you wish to modify.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"firefox",
"html"
] |
stackoverflow_0000073230_firefox_html.txt
|
Q:
Including eval / bind values in OnClientClick code
I have a need to open a popup detail window from a gridview (VS 2005 / 2008). What I am trying to do is in the markup for my TemplateColumn have an asp:Button control, sort of like this:
<asp:Button ID="btnShowDetails" runat="server" CausesValidation="false"
CommandName="Details" Text="Order Details"
onClientClick="window.open('PubsOrderDetails.aspx?OrderId=<%# Eval("order_id") %>',
'','scrollbars=yes,resizable=yes, width=350, height=550');"
Of course, what isn't working is the appending of the <%# Eval...%> section to set the query string variable.
Any suggestions? Or is there a far better way of achieving the same result?
A:
I believe the way to do it is
onClientClick=<%# string.Format("window.open('PubsOrderDetails.aspx?OrderId={0}', '', 'scrollbars=yes,resizable=yes, width=350, height=550');", Eval("order_id")) %>
A:
I like @AviewAnew's suggestion, though you can also just write that from the code-behind by wiring up and event to the grid views ItemDataBound event. You'd then use the FindControl method on the event args you get to grab a reference to your button, and set the onclick attribute to your window.open statement.
A:
Do this in the code-behind. Just use an event handler for gridview_RowDataBound. (My example uses a gridview with the id of "gvBoxes".
Private Sub gvBoxes_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBoxes.RowDataBound
Select Case e.Row.RowType
Case DataControlRowType.DataRow
Dim btn As Button = e.Row.FindControl("btnShowDetails")
btn.OnClientClick = "window.open('PubsOrderDetails.aspx?OrderId=" & DataBinder.Eval(e.Row.DataItem, "OrderId") & "','','scrollbars=yes,resizable=yes, width=350, height=550');"
End Select
End Sub
|
Including eval / bind values in OnClientClick code
|
I have a need to open a popup detail window from a gridview (VS 2005 / 2008). What I am trying to do is in the markup for my TemplateColumn have an asp:Button control, sort of like this:
<asp:Button ID="btnShowDetails" runat="server" CausesValidation="false"
CommandName="Details" Text="Order Details"
onClientClick="window.open('PubsOrderDetails.aspx?OrderId=<%# Eval("order_id") %>',
'','scrollbars=yes,resizable=yes, width=350, height=550');"
Of course, what isn't working is the appending of the <%# Eval...%> section to set the query string variable.
Any suggestions? Or is there a far better way of achieving the same result?
|
[
"I believe the way to do it is\n\nonClientClick=<%# string.Format(\"window.open('PubsOrderDetails.aspx?OrderId={0}',scrollbars=yes,resizable=yes, width=350, height=550);\", Eval(\"order_id\")) %>\n\n\n",
"I like @AviewAnew's suggestion, though you can also just write that from the code-behind by wiring up and event to the grid views ItemDataBound event. You'd then use the FindControl method on the event args you get to grab a reference to your button, and set the onclick attribute to your window.open statement.\n",
"Do this in the code-behind. Just use an event handler for gridview_RowDataBound. (My example uses a gridview with the id of \"gvBoxes\".\nPrivate Sub gvBoxes_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBoxes.RowDataBound\n Select Case e.Row.RowType\n Case DataControlRowType.DataRow\n Dim btn As Button = e.Row.FindControl(\"btnShowDetails\")\n btn.OnClientClick = \"window.open('PubsOrderDetails.aspx?OrderId=\" & DataItem.Eval(\"OrderId\") & \"','','scrollbars=yes,resizable=yes, width=350, height=550');\"\n End Select \nEnd Sub\n\n"
] |
[
13,
2,
2
] |
[] |
[] |
[
"asp.net",
"javascript",
"visual_studio"
] |
stackoverflow_0000102343_asp.net_javascript_visual_studio.txt
|
Q:
Executing different set of MSBuild tasks for each user?
In our development environment each developer has their own dev server. Often times they do not actually develop on that server but develop from their local machine, deploy to their dev server, and then attach with the remote debugger to do debugging.
My question is: how can I use MSBuild to execute a different set of tasks for each user?
I want to enable each user to define their own build process with MSBuild tasks, but I don't want that to necessarily affect the other developers. I also want a default set of tasks to execute if a given user hasn't explicitly defined their own process.
Example:
SomeProj.csproj
Default MS Build process is to copy to test server or staging server
Custom process for Steve is to copy to Steve's dev server
Custom process for Eric is to copy to Eric's dev server
A:
You could use the project user file (*.suo / *.user) to do some 'poor man's dependency injection'.
Looks like this guy did something similar
A:
Yeah, I've done this before. The trick is to key off $(USERNAME) in your MSBuild script. If you haven't tried editing MSBuild scripts before, you've got a lot of learning to do.
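As a rough sketch of that idea (the property, item, and server names here are invented), you can pick a deploy location based on $(USERNAME) and fall back to a default:
<PropertyGroup>
  <DeployServer>\\staging\site</DeployServer>
  <DeployServer Condition="'$(USERNAME)' == 'Steve'">\\steve-dev\site</DeployServer>
  <DeployServer Condition="'$(USERNAME)' == 'Eric'">\\eric-dev\site</DeployServer>
</PropertyGroup>
<Target Name="Deploy">
  <!-- @(DeployFiles) is a made-up item group holding the build output -->
  <Copy SourceFiles="@(DeployFiles)" DestinationFolder="$(DeployServer)" />
</Target>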
|
Executing different set of MSBuild tasks for each user?
|
In our development environment each developer has their own dev server. Often times they do not actually develop on that server but develop from their local machine, deploy to their dev server, and then attach with the remote debugger to do debugging.
My question is; how can I use MSBuild to execute a different set of tasks for each user?
I want to enable each user to define their own build process with MSBuild tasks but I don't want that to necessarily affect the other developers. I also want a default set of tasks to execute if a given user explicitly defined their own process.
Example:
SomeProj.csproj
Default MS Build process is to copy to test server or staging server
Custom process for Steve is to copy to Steve's dev server
Custom process for Eric is to copy to Eric's dev server
|
[
"You could use the project user file (*.suo / *.user) to do some 'poor mans dependency injection'.\nlooks like this guy did something similar\n",
"Yeah, I've done this before. Try trick is to key off $(USERNAME) in your msbuild script. If you haven't tried editing msbuild scripts before, you've got a lot of learning to do.\n"
] |
[
2,
0
] |
[] |
[] |
[
"build",
"deployment",
"msbuild",
"visual_studio"
] |
stackoverflow_0000078018_build_deployment_msbuild_visual_studio.txt
|
Q:
VS.Net 2005 required on Build box with .Net 2.0 C++ Projects?
We have a build box that uses CruiseControl.Net and has been building VB.Net and C# projects using msbuild. All I have installed on the box as far as .Net is concerned is .Net 2.0 SDK (I'm trying to keep the box as clean as possible). We are now trying to get a C++ app building on this box. The problem we are running into is that the header files (e.g. windows.h) are not installed with the SDK.
Do I have to install VS 2005 to get this to work?
Edit:
As a couple people have answered, I had actually downloaded the 3.5 Platform SDK, but the applications built on this box MUST run on boxes that do not have 3.5 installed. By installing the 3.5 SDK on my 2.0 build box, am I compromising my build box?
Edit:
I'm going to leave this as unanswered, but thought I would add that I went ahead and installed Visual Studio on the box and all is well. I hate having to do that, but didn't want to run the risk of having a 3.5 SDK on my 2.0 build box. I would still love to hear a better solution.
A:
Visual Studio is not needed, but for C++ you need the Platform SDK as well:
http://www.microsoft.com/downloads/details.aspx?familyid=484269E2-3B89-47E3-8EB7-1F2BE6D7123A&displaylang=en
Edit: There is also one for Windows 2008/Vista, not sure which is the correct one:
http://www.microsoft.com/downloads/details.aspx?familyid=E6E1C3DF-A74F-4207-8586-711EBE331CDC&displaylang=en
A:
No, you have to install the windows platform SDK.
You'll need to download this:
http://www.microsoft.com/downloads/details.aspx?FamilyId=E6E1C3DF-A74F-4207-8586-711EBE331CDC&displaylang=en
Edit: @Michael Stum
You need the Server 2008 / Vista / .NET 3.5 SDK version.
A:
Depending on what you are using in C++ (MFC, ATL, etc) you are probably going to have to install Visual Studio Professional (not express) as a lot of the libraries and headers are part of Visual Studio and not included in the SDK or Visual Studio Express (if you are doing managed C++ using .Net as the main framework then installing the SDK will be enough). We run our build boxes on VM's and so like to have as little installed as possible, so I spent a fair bit of time trying to get things working by installing as little as possible and for our C++ I ended up having to install Visual Studio.
A:
I don't see why having .NET 3.5 would compromise the build box - 2.0 and 3.5 co-exist without a problem. The only concern I could see would be a developer upgrading a solution to VS2008 without your "permission" and the build not failing...
A:
In general, you need some set of SDKs (Software Development Kits) to be able to build, and some set of redistributable packages to run.
In case it's not obvious, you should be testing your product on an otherwise clean machine before you ship, so you know you got the dependencies right.
|
VS.Net 2005 required on Build box with .Net 2.0 C++ Projects?
|
We have a build box that uses CruiseControl.Net and has been building VB.Net and C# projects using msbuild. All I have installed on the box as far as .Net is concerned is .Net 2.0 SDK (I'm trying to keep the box as clean as possible). We are now trying to get a C++ app building on this box. The problem we are running into is that the header files (e.g. windows.h) are not installed with the SDK.
Do I have to install VS 2005 to get this to work?
Edit:
As a couple people have answered, I had actually downloaded the 3.5 Platform SDK, but the applications built on this box MUST run on boxes that do not have 3.5 installed. By installing the 3.5 SDK on my 2.0 build box, am I compromising my build box?
Edit:
I'm going to leave this as unanswered, but thought I would add that I went ahead and installed Visual Studio on the box and all is well. I hate having to do that, but didn't want to run the risk of having a 3.5 SDK on my 2.0 build box. I would still love to hear a better solution.
|
[
"Visual Studio is not needed, but for C++ you need the Platform SDK as well:\nhttp://www.microsoft.com/downloads/details.aspx?familyid=484269E2-3B89-47E3-8EB7-1F2BE6D7123A&displaylang=en\nEdit: There is also one for Windows 2008/Vista, not sure which is the correct one:\nhttp://www.microsoft.com/downloads/details.aspx?familyid=E6E1C3DF-A74F-4207-8586-711EBE331CDC&displaylang=en\n",
"No, you have to install the windows platform SDK.\nYou'll need to download this:\nhttp://www.microsoft.com/downloads/details.aspx?FamilyId=E6E1C3DF-A74F-4207-8586-711EBE331CDC&displaylang=en\nEdit: @Michael Stum\nYou need the Server 2008 / Vista / .NET 3.5 SDK version.\n",
"Depending on what you are using in C++ (MFC, ATL, etc) you are probably going to have to install Visual Studio Professional (not express) as a lot of the libraries and headers are part of Visual Studio and not included in the SDK or Visual Studio Express (if you are doing managed C++ using .Net as the main framework then installing the SDK will be enough). We run our build boxes on VM's and so like to have as little installed as possible, so I spent a fair bit of time trying to get things working by installing as little as possible and for our C++ I ended up having to install Visual Studio.\n",
"I don't see why having .NET 3.5 would comprimise the build box - 2.0 and 3.5 co-exist without a problem. The only concern I could see would be a developer upgrading a solution to VS2008 without your \"permission\" and the build not failing...\n",
"In general, you need some set of SDKs (Software Development Kits) to be able to build, and some set of redistributable packages to run. \nIn case it's not obvious, you should be testing your product on an otherwise clean machine before you ship, so you know you got the dependencies right.\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"build_automation",
"c++",
"cruisecontrol.net",
"msbuild"
] |
stackoverflow_0000043766_build_automation_c++_cruisecontrol.net_msbuild.txt
|
Q:
Ubiquity Hack
What's the most useful hack you've discovered for Mozilla's new Ubiquity tool?
A:
I wrote this a few days ago: http://www.appidx.com/ubiq/stackoverflow.html
The execute portion refuses to run with POST data. The code is the right code, and I've tried with the native code of the function with the XUL component javascript and it likewise refuses to run. Any help would be appreciated. The preview on the other hand works fine.
CmdUtils.CreateCommand({
name: "stackoverflow",
author: {name: "Aryeh Goldsmith"},
homepage: "http://www.appidx.com/ubiq/",
icon: "http://stackoverflow.com/favicon.ico",
takes: {search: noun_arb_text},
license: "MPL",
description: "Searches the highlighted text on stackoverflow.",
_version: "52",
preview: function ( pblock, inputObject) {
var query = inputObject.text;
pblock.innerHTML = "Search stackoverflow.com for " + query + "<br/>";
var url = "http://stackoverflow.com/search";
params = {"search-text": query, "hiddenstuff": ''};
jQuery.post( url, params, function( html ) {
var $ = jQuery;
pblock.innerHTML += "<div style='display:none;'>" + html + "</div>";
var ques = $(pblock).find('.summary h3');
var details = $(pblock).find('.summary .excerpt');
var out = "<div style='margin-bottom: 6px;'><b>Previewing the first 5 results:</b></div>";
for (var j = 0; j< ques.size() && j < 5; j++) {
out += "<div style='padding: 5px;'><b>" + ques[j].innerHTML + "</b><br />";
out += details[j].innerHTML + "</div>";
}
pblock.innerHTML = out;
});
},
execute: function( inputObject ) {
var query = inputObject.text;
var url = "http://stackoverflow.com/search";
var params = {
"search-text": query,
hiddenstuff: ""
};
// The following refuses to work... why? I just don't know! AFAIK it's correct.
openUrl(url, params);
},
})
A:
That it can close Firefox faster than I can with the mouse and that little [x] thing in the corner... :-P
A:
"translate this" and "edit-page". I think I'd find the Google Apps features useful if they supported hosted domains.
A:
I just wrote this:
makeSearchCommand({
name: "stackoverflow-tagsearch",
author: { name: "Jörg W Mittag", email: "[email protected]"},
license: "MIT X11",
url: "http://Beta.StackOverflow.Com/questions/tagged/{QUERY}",
icon: "http://StackOverflow.Com/favicon.ico",
description: "Searches <a href=\"http://StackOverflow.Com\">StackOverflow.Com</a> for the given tag(s).",
help: "Searches <a href=\"http://StackOverflow.Com\">StackOverflow.Com</a> for the given tag(s).",
preview: function(pBlock, directObj) {
if (directObj.text)
pBlock.innerHTML = "Searches <a href=\"http://StackOverflow.Com\">StackOverflow.Com</a> for " + directObj.text;
else
pBlock.innerHTML = "Searches <a href=\"http://StackOverflow.Com\">StackOverflow.Com</a> for the given tag(s).";
}
});
Nice toy!
Now I need to figure out how to HTTP POST to http://Beta.StackOverflow.Com/search with JQuery and Ubiquity ... If only there was a site where I could ask that question!
A:
I use a lot the "email it" and the "twitter" commands
A:
My co-worker has had 3 blue-screens on his machine since installing it. Not totally convinced this is what did it, but it's the only thing he's changed today. I'm uninstalling it for now (and so is he).
|
Ubiquity Hack
|
What's the most useful hack you've discovered for Mozilla's new Ubiquity tool?
|
[
"I wrote this a few days ago: http://www.appidx.com/ubiq/stackoverflow.html\nThe execute portion refuses to run with POST data. The code is the right code, and I've tried with the native code of the function with the XUL component javascript and it likewise refuses to run. Any help would be appreciated. The preview on the other hand works fine.\nCmdUtils.CreateCommand({\n name: \"stackoverflow\",\n author: {name: \"Aryeh Goldsmith\"},\n homepage: \"http://www.appidx.com/ubiq/\",\n icon: \"http://stackoverflow.com/favicon.ico\",\n takes: {search: noun_arb_text},\n license: \"MPL\",\n description: \"Searches the highlighted text on stackoverflow.\",\n _version: \"52\",\n\n preview: function ( pblock, inputObject) {\n var query = inputObject.text;\n pblock.innerHTML = \"Search stackoverflow.com for \" + query + \"<br/>\";\n\n var url = \"http://stackoverflow.com/search\";\n params = {\"search-text\": query, \"hiddenstuff\": ''};\n\n jQuery.post( url, params, function( html ) {\n var $ = jQuery;\n pblock.innerHTML += \"<div style='display:none;'>\" + html + \"</div>\";\n var ques = $(pblock).find('.summary h3');\n var details = $(pblock).find('.summary .excerpt');\n var out = \"<div style='margin-bottom: 6px;'><b>Previewing the first 5 results:</b></div>\";\n for (var j = 0; j< ques.size() && j < 5; j++) {\n out += \"<div style='padding: 5px;'><b>\" + ques[j].innerHTML + \"</b><br />\";\n out += details[j].innerHTML + \"</div>\";\n }\n pblock.innerHTML = out;\n });\n },\n\n execute: function( inputObject ) {\n var query = inputObject.text;\n var url = \"http://stackoverflow.com/search\";\n var params = {\n \"search-text\": query,\n hiddenstuff: \"\"\n };\n\n// The following refuses to work... why? I just don't know! AFAIK it's correct.\n openUrl(url, params);\n },\n})\n\n",
"That it can close Firefox faster then I can with the mouse and that little [x] thing in the corner... :-P\n",
"\"translate this\" and \"edit-page\". I think I'd find the Google Apps features useful if they supported hosted domains.\n",
"I just wrote this:\nmakeSearchCommand({\n name: \"stackoverflow-tagsearch\",\n author: { name: \"Jörg W Mittag\", email: \"[email protected]\"},\n license: \"MIT X11\",\n url: \"http://Beta.StackOverflow.Com/questions/tagged/{QUERY}\",\n icon: \"http://StackOverflow.Com/favicon.ico\",\n description: \"Searches <a href=\\\"http://StackOverflow.Com\\\">StackOverflow.Com</a> for the given tag(s).\",\n help: \"Searches <a href=\\\"http://StackOverflow.Com\\\">StackOverflow.Com</a> for the given tag(s).\",\n preview: function(pBlock, directObj) {\n if (directObj.text)\n pBlock.innerHtml = \"Searches <a href=\\\"http://StackOverflow.Com\\\">StackOverflow.Com</a> for \" + directObj.text;\n else\n pBlock.innerHTML = \"Searches <a href=\\\"http://StackOverflow.Com\\\">StackOverflow.Com</a> for the given tag(s).\";\n }\n});\n\nNice toy!\nNow I need to figure out how to HTTP POST to http://Beta.StackOverflow.Com/search with JQuery and Ubiquity ... If only there was a site where I could ask that question!\n",
"I use a lot the \"email it\" and the \"twitter\" commands\n",
"My co-worker has had 3 blue-screens on his machine since installing it. Not totally convinced this is what did it, but it's the only thing he's changed today. I'm uninstalling it for now (and so is he).\n"
] |
[
3,
2,
2,
2,
1,
0
] |
[] |
[] |
[
"ubiquity"
] |
stackoverflow_0000031173_ubiquity.txt
|
Q:
How you setup a greenfield project
I'm setting up a Greenfield (yeeea!) web application just now and was wondering how other people first set up their project with regards to automated/CI builds?
I generally follow this:
Create SVN Repository with basic layout (trunk, branches, lib, etc.)
Create basic solution structure (core, ui, tests)
Create a basic test that fails
Copy NAnt scripts, update and tweak, make sure the failing test breaks the build locally
Commit
Setup default debug build on CI server (TeamCity) making sure the build fails
Fix test
Commit
Make sure build passes on CI
Done....
A:
A repost from the question text:
Create SVN Repository with basic layout (trunk, branches, lib, etc.)
Create basic solution structure (core, ui, tests)
Create a basic test that fails
Copy NAnt scripts, update and tweak, make sure the failing test breaks the build locally
Commit
Setup default debug build on CI server (TeamCity) making sure the build fails
Fix test
Commit
Make sure build passes on CI
Done....
|
How you setup a greenfield project
|
I'm setting yup a Greenfield (yeeea!) web application just now was wondering how other people first setup their project with regards to automated/CI build?
I generally follow this:
Create SVN Repository with basic layout (trunk, braches, lib, etc.)
Create basic solution structure (core, ui, tests)
Create a basic test that fails
Copy NAnt scripts, update and tweak, make sure the failing test breaks the build locally
Commit
Setup default debug build on CI server (TeamCity) making sure the build fails
Fix text
Commit
9 Make sure build passes on CI
Done....
|
[
"A repost from the question text:\n\nCreate SVN Repository with basic\nlayout (trunk, braches, lib, etc.)\nCreate basic solution structure\n(core, ui, tests)\nCreate a basic\ntest that fails\nCopy NAnt scripts,\nupdate and tweak, make sure the\nfailing test breaks the build\nlocally\nCommit\nSetup default debug\nbuild on CI server (TeamCity) making\nsure the build fails\nFix test\nCommit\nMake sure build passes on CI\nDone....\n\n"
] |
[
1
] |
[] |
[] |
[
"build",
"build_automation",
"build_process",
"continuous_integration"
] |
stackoverflow_0000101099_build_build_automation_build_process_continuous_integration.txt
|
Q:
Multiple local connections in flash - what's the better architecture?
I'm using localConnection in AS3 to allow several flash applications to interact with a central application. (Some are AS2, some AS3).
The central application must use a separate localConnection variable for each receiving connection (otherwise the second app that tries to connect will be rejected).
But what about sending messages back?
Is it better to have the main application use a single localConnection to send messages to all the other applications, or should I assign a LC variable per target? (Since I specify the target anyway in the .send command)
1 Door for all of the messages to exit or 1 door per message target? Which is better and why?
A:
It would seem more organised to communicate with each Flash application separately, although I think it will work either way.
|
Multiple local connections in flash - what's the better architecture?
|
I'm using localConnection in AS3 to allow several flash applications to interact with a central application. (Some are AS2, some AS3).
The central application must use a seperate localConnection variable for each receiving connection (otherwise the second app that tries to connect will be rejected).
But what about sending messages back?
Is it better to have the main application use a single localConnection to send messages to all the other applications, or should I assign a LC variable per target? (Since I specify the target anyway in the .send command)
1 Door for all of the messages to exit or 1 door per message target? Which is better and why?
|
[
"It would seem more organised to communicate with each Flash application separately, although I think it will work either way.\n"
] |
[
1
] |
[] |
[] |
[
"actionscript_2",
"actionscript_3",
"flash",
"localconnection"
] |
stackoverflow_0000101416_actionscript_2_actionscript_3_flash_localconnection.txt
|
Q:
What's a good way to find relative paths in Google App Engine?
So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:
siteroot/
models/
controllers/
controller1/
controller2/
...
templates/
template1/
template2/
...
..etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:
path = os.path.join(os.path.dirname(__file__), 'myPage.html')
...the __file__ property resolves to the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then the 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.
The solution I've hacked together feels... hacky:
base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.
I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?
A:
You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.
The correct solution is to either use os.path.split, as you are, or to use something like:
path = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html')
My usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a "getTemplatePath" method that takes the provided filename and joins it with the basename.
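A rough sketch of that approach (the class name and directory layout here are just illustrative):
import os

class BaseController(object):
    def __init__(self):
        # one '..' hop assumes this module sits one directory below siteroot
        self.template_dir = os.path.join(
            os.path.dirname(__file__), '..', 'templates')

    def getTemplatePath(self, filename):
        return os.path.join(self.template_dir, filename)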
A:
The dirname function returns an absolute path, use relative paths. See what is the current directory when your controllers are executed with os.path.abspath(os.path.curdir) and build a path to the templates relative to that location (without the os.path.abspath part of course).
This will only work if the current directory is somewhere inside siteroot, else you could do something like this:
template_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, "templates")
|
What's a good way to find relative paths in Google App Engine?
|
So I've done the trivial "warmup" apps with GAE. Now I'd like to build something with a more complex directory structure. Something along the lines of:
siteroot/
models/
controllers/
controller1/
controller2/
...
templates/
template1/
template2/
...
..etc. The controllers will be Python modules handling requests. They would then need to locate (Django-style) templates in associated folders. Most of the demo apps I've seen resolve template paths like this:
path = os.path.join(os.path.dirname(__file__), 'myPage.html')
...the __ file __ property resolves to the currently executing script. So, in my above example, if a Python script were running in controllers/controller1/, then the 'myPage.html' would resolve to that same directory -- controllers/controller1/myPage.html -- and I would rather cleanly separate my Python code and templates.
The solution I've hacked together feels... hacky:
base_paths = os.path.split(os.path.dirname(__file__))
template_dir = os.path.join(base_paths[0], "templates")
So, I'm just snipping off the last element of the path for the currently running script and appending the template directory to the new path. The other (non-GAE specific) solutions I've seen for resolving Python paths seem pretty heavyweight (such as splitting paths into lists and manipulating accordingly). Django seems to have an answer for this, but I'd rather stick to the GAE API, vs. creating a full Django app and modifying it for GAE.
I'm assuming anything hard-coded would be non-starter, since the apps live on Google's infinite server farm. So what's a better way?
|
[
"You can't use relative paths, as Toni suggests, because you have no guarantee that the path from your working directory to your app's directory will remain the same.\nThe correct solution is to either use os.path.split, as you are, or to use something like:\npath = os.path.join(os.path.dirname(__file__), '..', 'templates', 'myPage.html')\n\nMy usual approach is to generate a path to the template directory using the above method, and store it as a member of my controller object, and provide a \"getTemplatePath\" method that takes the provided filename and joins it with the basename.\n",
"The dirname function returns an absolute path, use relative paths. See what is the current directory when your controllers are executed with os.path.abspath(os.path.curdir) and build a path to the templates relative to that location (without the os.path.abspath part of course).\nThis will only work if the current directory is somewhere inside siteroot, else you could do something like this:\ntemplate_dir = os.path.join(os.path.dirname(__file__), os.path.pardir, \"templates\")\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"google_app_engine",
"python"
] |
stackoverflow_0000061894_google_app_engine_python.txt
|
Q:
Forced Alpha-Numeric User IDs
I am a programmer at a financial institute. I have recently been told to enforce that all new user IDs have at least one alpha and one numeric character. I immediately thought that this was a horrible idea and I would rather not implement it, as I believe this is an anti-feature and of poor user experience. The problem is that I don't have a good case for not implementing this requirement.
Do you think this is a good requirement?
Do you have any good reasons not to do it?
Do you know of any research that I could reference?
Edit: This is not in regards to the password. We already have similar requirements for that, which I am not opposed to.
A:
One argument against this is that many usernames / ids in other areas do not require numeric components. It's more likely that users will be better able to remember user ids that they have used elsewhere -- and that is more likely if they do not need to include numerics.
Furthermore, depending on the system, the user ids may work well as defaults when connecting to external systems (ssh behaves this way under unix-like systems). In this case, it is clearly beneficial to have one ID that is shared between systems.
Using the same ID in multiple places improves consistency, which is a well-known aspect of good software interfaces. It's not too difficult to show that the way people interact with a system is a user-interface, and should adhere to (at least some) of the well-known interface guidelines. (Obviously ideas like keyboard shortcuts are meaningless if you're considering the interactions between multiple, possibly unknown, systems, but aspects such as consistency do apply.)
Edit: I'm assuming that this discussion is about usernames or publicly visible IDs, NOT something that pertains directly to security, such as passwords.
A:
I would begin by asking them for their specific reasons behind this. Once you have a list of bullet points and the reasons why, it's easier to refute or provide alternatives.
As for general ideas:
This is opinion, but adding a numeral to a username won't necessarily increase security. People write down usernames on post it notes, most users will just add a '1' to the beginning or end of their username, making it easy to guess.
From a usability standpoint, this is bad as it breaks the norm. Forcing them to add a numeral to their username will just lead to the above point. They will simply add a '1' to the end or beginning of their username.
Remember, the more complex an authentication system is, the more likely a general user is to find ways to circumvent it and make their link in the chain weak.
A:
UserIDs? Requiring passwords to be alphanumeric is generally a good idea, since it makes them more resistant to a dictionary attack. It doesn't really make any sense for usernames. The whole point of having a name/password combo is that the name part doesn't have to be kept secret.
A:
If you're working at a financial institution, there are probably regulations about this sort of thing, so it's most likely out of your hands. But one thing you can do is make it clear to the user when he has entered an invalid ID. And don't wait until he clicks submit; show some kind of message right next to the field, and update it as he types.
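For instance, a small (purely illustrative) client-side check could look like this, with real validation still done on the server:
var idField = document.getElementById('userId');
var hint = document.getElementById('userIdHint');
idField.onkeyup = function () {
    var ok = /[A-Za-z]/.test(idField.value) && /[0-9]/.test(idField.value);
    hint.innerHTML = ok ? '' : 'User ID must contain at least one letter and one number.';
};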
A:
A few of the answers above have a counter-argument: If the users pick the same username they use on the other sites, then they are also likely to pick the same or similar passwords for the financial site, lowering security.
A reason not to do it: If you impose more restrictions than they are used to on the users, they will start writing down the login information, and that's an obvious loss of security.
Both of the bank accounts I have require an alphanumeric username and two passwords for the online login. One of them also has a image I have to remember. The two passwords have to change once a month or so. Therefore, I have all the login information right here on a text file. (Even looking at it doesn't make any sense; I'll have to go down to the bank and reset my passwords again. That's a grand total of 7 password resets for 6 logins. Talk about security, not even I can access my account.)
A:
it's good if it's in their password (as alas, financial companies like to deny you this security right [i'm talking to you american express]).
username, i say no, unless they want to.
A:
A username will (presumably) need to be quoted on the phone when calling for support so it will be publicised unlike a password. Also, the username field won't be masked out in browsers like password fields are, so it will have much more exposure and get cached/logged in various places, so the 'benefit' of the added security will be undone in no time.
And the more difficult you make things, the more likely a user is to write it down somewhere which again undermines security (same applies for password policies actually, but that's another story!)
A:
I also work at a financial institution and our usernames (both real people and production IDs) are all lowercase, alphabetical, up to 8 characters and I've never considered it a problem... avoids the confusion of 0 vs O, 1 vs I, and 8 vs B - unless you work for the same company as me and are about to implement a new policy...
A:
Adding any feature adds costs. It will take time now to build and test it, and in the future to support it. No feature should be built without a really good reason.
This feature is pointless. Usernames are not supposed to be kept secret, so having strong usernames has no advantage. It is probably worth spending time making passwords (or other authentication factors) strong, but users should be able to communicate their username to other users without that being a security risk.
If your application imposes extra constraints on the choice of user ID then some of your users will have a different user ID for your application than for the other applications in your environment. Note: I'm assuming that this is an internal application (for use by employees) rather than in Internet-facing application.
Having inconsistent usernames adds a number of specific risks:
It will make the audit trail harder to follow (a serious security risk).
It may add cost if you later start using single sign on.
It will cause a bad user experience as users have to remember that this application uses a weird username.
|
Forced Alpha-Numeric User IDs
|
I am a programmer at a financial institute. I have recently been told to enforce that all new user id's to have at least one alpha and one numeric. I immediately thought that this was a horrible idea and I would rather not implement it, as I believe this is an anti-feature and of poor user experience. The problem is that I don't have a good case for not implementing this requirement.
Do you think this is a good requirement?
Do you have any good reasons not to do it?
Do you know of any research that I could reference.
Edit: This is not in regards to the password. We already have similar requirements for that, which I am not opposed to.
|
[
"One argument against this is that many usernames / ids in other areas do not require numeric components. It's more likely that users will be better able to remember user ids that they have used elsewhere -- and that is more likely if they do not need to include numerics. \nFurthermore, depending on the system, the user ids may work well as defaults when connecting to external systems (ssh behaves this way under unix-like systems). In this case, it is clearly beneficial to have one ID that is shared between systems.\nUsing the same ID in multiple places improves consistency, which is a well-known aspect of good software interfaces. It's not too difficult to show that the way people interact with a system is a user-interface, and should adhere to (at least some) of the well-known interface guidelines. (Obviously ideas like keyboard shortcuts are meaningless if you're considering the interactions between multiple, possibly unknown, systems, but aspects such as consistency do apply.)\nEdit: I'm assuming that this discussion is about usernames or publicly visible IDs, NOT something that pertains directly to security, such as passwords.\n",
"I would begin by asking them for their specific reasons behind this. Once you have a list of bullet points and the reasons why, it's easier to refute or provide alternatives.\nAs for general ideas:\n\nThis is opinion, but adding a numeral to a username won't necessarily increase security. People write down usernames on post it notes, most users will just add a '1' to the beginning or end of their username, making it easy to guess.\nFrom a usability standpoint, this is bad as it breaks the norm. Forcing them to add a numeral to their username will just lead to the above point. They will simply add a '1' to the end or beginning of their username.\n\nRemember, the more complex an authentication system is, the more likely a general user is to find ways to circumvent it and make their link in the chain weak.\n",
"UserIDs? Requiring passwords to be alphanumeric is generally a good idea, since it makes them more resistant to a dictionary attack. It doesn't really make any sense for usernames. The whole point of having a name/password combo is that the name part doesn't have to be kept secret.\n",
"If you're working at a financial institution, there are probably regulations about this sort of thing, so it's most likely out of your hands. But one thing you can do is make it clear to the user when he has entered an invalid ID. And don't wait until he clicks submit; show some kind of message right next to the field, and update it as he types.\n",
"A few of the answers above have a counter-argument: If the users pick the same username they use on the other sites, then they are also likely to pick the same or similar passwords for the financial site, lowering security.\nA reason not to do it: If you impose more restrictions than they are used to on the users, they will start writing down the login information, and that's an obvious loss of security.\nBoth of the bank accounts I have require an alphanumeric username and two passwords for the online login. One of them also has a image I have to remember. The two passwords have to change once a month or so. Therefore, I have all the login information right here on a text file. (Even looking at it doesn't make any sense; I'll have to go down to the bank and reset my passwords again. That's a grand total of 7 password resets for 6 logins. Talk about security, not even I can access my account.)\n",
"it's good if it's in their password (as alas, financial companies like to deny you this security right [i'm talking to you american express]).\nusername, i say no, unless they want to.\n",
"A username will (presumably) need to be quoted on the phone when calling for support so it will be publicised unlike a password. Also, the username field won't be masked out in browsers like password fields are, so it will have much more exposure and get cached/logged in various places, so the 'benefit' of the added security will be undone in no time.\nAnd the more difficult you make things, the more likely a user is to write it down somewhere which again undermines security (same applies for password policies actually, but that's another story!)\n",
"I also work at a financial institution and our usernames (both real people and production IDs) are all lowercase, alphabetical, up to 8 characters and I've never considered it a problem... avoids the confusion of 0 vs O, 1 vs I, and 8 vs B - unless you work for the same company as me and are about to implement a new policy...\n",
"Adding any feature adds costs. It will take time now to build and test it, and in the future to support it. No feature should be built without a really good reason.\nThis feature is pointless. Usernames are not supposed to be kept secret, so having strong usernames has no advantage. It is probably worth spending time making passwords (or other authentication factors) strong, but users should be able to communicate their username to other users without that being a security risk.\nIf your application imposes extra constraints on the choice of user ID then some of your users will have a different user ID for your application than for the other applications in your environment. Note: I'm assuming that this is an internal application (for use by employees) rather than in Internet-facing application.\nHaving inconsistent usernames adds a number of specific risks:\n\nIt will make the audit trail harder to follow (a serious security risk).\nIt may add cost if you later start using single sign on.\nIt will cause a bad user experience as users have to remember that this application uses a weird username.\n\n"
] |
[
4,
4,
1,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"finance",
"security",
"user_experience",
"web_applications"
] |
stackoverflow_0000097765_finance_security_user_experience_web_applications.txt
|
Q:
Problem consuming ActiveMQ messages from Flex client
I am unable to consume messages sent via ActiveMQ from my Flex client. Sending messages via the Producer seems to work, I can also see that the Flex client is connected and subscribed via the properties on the Consumer object, however the "message" event on the Consumer is never fired so it seems like the messages are not received.
When I look in the ActiveMQ console, I can see the number of subscribers, the number of messages sent and the number of messages received. The strange thing is that the received messages counter seems to increment and that I can also trace the log statements in the Tomcat console, but again no messages are received in the Flex client.
Any ideas?
A:
After rebuilding my app from scratch with a fresh install of Tomcat, everything seems to work. Maybe this was caused by the fact that I was using the BlazeDS Turnkey version that contains a preconfigured instance of Tomcat.
BTW: This is a great tutorial: http://mmartinsoftware.blogspot.com/2008/05/simplified-blazeds-and-jms.html
|
Problem consuming ActiveMQ messages from Flex client
|
I am unable to consume messages sent via ActiveMQ from my Flex client. Sending messages via the Producer seems to work, I can also see that the Flex client is connected and subscribed via the properties on the Consumer object, however the "message" event on the Consumer is never fired so it seems like the messages are not received.
When I look in the ActiveMQ console, I can see the number of subscribers, the number of messages sent and the number of messages received. The strange thing is that the received messages counter seems to increment and that I can also trace the log statements in the Tomcat console, but again no messages are received in the Flex client.
Any ideas?
|
[
"After rebuilding my app from scratch with a fresh install of Tomcat, everything seems to work. Maybe this was caused by the fact that I was using the BlazeDS Turnkey version that contains a preconfigured instance of Tomcat.\nBTW: This is a great tutorial: http://mmartinsoftware.blogspot.com/2008/05/simplified-blazeds-and-jms.html\n"
] |
[
1
] |
[] |
[] |
[
"activemq",
"apache_flex"
] |
stackoverflow_0000101689_activemq_apache_flex.txt
|
Q:
How do I select all elements of class "sortasc" within a table with a specific id?
Let's say I have the following HTML:
<table id="foo">
<th class="sortasc">Header</th>
</table>
<table id="bar">
<th class="sortasc">Header</th>
</table>
I know that I can do the following to get all of the th elements that have class="sortasc"
$$('th.sortasc').each()
However that gives me the th elements from both table foo and table bar.
How can I tell it to give me just the th elements from table foo?
A:
table#foo th.sortasc
A:
This is how you'd do it with straight-up JS:
// wrapped in a function so the final return is valid
function getSortAscHeaders(tableId) {
    var table = document.getElementById(tableId);
    var headers = table.getElementsByTagName('th');
    var headersIWant = [];
    for (var i = 0; i < headers.length; i++) {
        if ((' ' + headers[i].className + ' ').indexOf(' sortasc ') >= 0) {
            headersIWant.push(headers[i]);
        }
    }
    return headersIWant;
}
A:
The CSS selector would be something like '#foo th.sortasc'. In jQuery that would be $('#foo th.sortasc').
A:
With a nested table, like:
<table id="foo">
<th class="sortasc">Header</th>
<tr><td>
<table id="nestedFoo">
<th class="sortasc">Nested Header</th>
</table>
</td></tr>
</table>
$('table#foo th.sortasc') will give you all the th's because you're using a descendant selector. If you only want foo's th's, then you should use the child selector - $('table#foo > th.sortasc').
Note that the child selector is not supported in CSS for IE6, though JQuery will still correctly do it from JavaScript.
|
How do I select all elements of class "sortasc" within a table with a specific id?
|
Let's say I have the following HTML:
<table id="foo">
<th class="sortasc">Header</th>
</table>
<table id="bar">
<th class="sortasc">Header</th>
</table>
I know that I can do the following to get all of the th elements that have class="sortasc"
$$('th.sortasc').each()
However that gives me the th elements from both table foo and table bar.
How can I tell it to give me just the th elements from table foo?
|
[
"table#foo th.sortasc\n",
"This is how you'd do it with straight-up JS:\nvar table = document.getElementById('tableId');\nvar headers = table.getElementsByTagName('th');\nvar headersIWant = [];\nfor (var i = 0; i < headers.length; i++) {\n if ((' ' + headers[i].className + ' ').indexOf(' sortasc ') >= 0) {\n headersIWant.push(headers[i]);\n }\n}\nreturn headersIWant;\n\n",
"The CSS selector would be something like '#foo th.sortasc'. In jQuery that would be $('#foo th.sortasc').\n",
"With a nested table, like:\n<table id=\"foo\">\n <th class=\"sortasc\">Header</th>\n <tr><td>\n <table id=\"nestedFoo\">\n <th class=\"sortasc\">Nested Header</th>\n </table>\n </td></tr>\n</table>\n\n$('table#foo th.sortasc') will give you all the th's because you're using a descendant selector. If you only want foo's th's, then you should use the child selector - $('table#foo > th.sortasc').\nNote that the child selector is not supported in CSS for IE6, though JQuery will still correctly do it from JavaScript.\n"
] |
[
8,
3,
0,
0
] |
[] |
[] |
[
"javascript",
"prototypejs"
] |
stackoverflow_0000101597_javascript_prototypejs.txt
|
Q:
Linq to SQL Association combo box order
This is kind of a weird question and more of an annoyance than technical brick wall.
When I'm adding tables and such using the Linq-to-SQL designer and I want to create an association using the dialogs, I right-click on one of the target tables and choose Add > Association as normal and I am presented with the Association Editor.
The Parent Class: and Child Class: combo boxes are filled with the tables that currently exist in the designer.
But they are not in alphabetical order they are in the order that they were added to the designer.
Can I change the order of these combo boxes? And if I can, where do I do this?
A:
I went poking around some and found an answer.
The dbml file is an XML file that holds all of the basic information about the SQL tables, connections, etc. needed for Linq-to-SQL. By reordering the Table elements, you affect the order of the combo boxes used in the Association editor.
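For example, the dbml contains entries roughly like this (table names invented); swapping the order of the <Table> elements changes the order they appear in the Association Editor's combo boxes:
<Database Name="MyDatabase" ...>
  <Table Name="dbo.Customers" Member="Customers"> ... </Table>
  <Table Name="dbo.Orders" Member="Orders"> ... </Table>
</Database>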
|
Linq to SQL Association combo box order
|
This is kind of a weird question and more of an annoyance than technical brick wall.
When I'm adding tables and such using the Linq-to-SQL designer and I want to create an association using the dialogs. I right click on one of the target tables and choose Add > Association as normal and I am presented with the Association Editor.
The Parent Class: and Child Class: combo boxes are filled with the tables that currently exist in the designer.
But they are not in alphabetical order they are in the order that they were added to the designer.
Can I change the order of these combo boxes? And if I can, where do I do this?
|
[
"I went poking around some and found an answer.\nThe dbml file is an XML file that hold all of the basic information about the SQL tables, connections, etc. needed for Linq-to-SQL. By reordering the Table elements, you affect the order of the combo boxes used in the Association editor.\n"
] |
[
1
] |
[] |
[] |
[
"ide",
"linq_to_sql",
"visual_studio_2008"
] |
stackoverflow_0000102470_ide_linq_to_sql_visual_studio_2008.txt
|
Q:
Prevent a PostBack from showing up in the History
I have a page where I submit some data, and return to the original form with a "Save Successful" message. However, the user would like the ability to return to the previous page they were at (which contains search results) by clicking the browser's "Back" button. However, due to the postback, when they click the "Back" button they do not go to the previous page, they simply go to the same page (but at its previous state). I read that enabling SmartNavigation will take care of this issue (postbacks appearing in the history); however, it has been deprecated. What's the "new" best practice?
*Edit - I added a ScriptManager control, and wrapped the buttons in an UpdatePanel, however now I'm receiving the following error:
Type 'System.Web.UI.UpdatePanel' does not have a public property named 'Button'
Am I missing a reference?
*Disregard the above edit, I simply forgot to add the < ContentTemplate > section to the UpdatePanel :P
A:
If you put your "Save" button in an UpdatePanel, the postback will not show in the users history.
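A minimal sketch (control IDs are made up):
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="SavePanel" runat="server">
    <ContentTemplate>
        <asp:Button ID="btnSave" runat="server" Text="Save" OnClick="btnSave_Click" />
    </ContentTemplate>
</asp:UpdatePanel>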
A:
I would avoid if possible. A better solution would be to have a button that just returns them to their search results on the "Save Successful" screen.
The problem with the ajaxy saving and such is that you violate the "Back" rules that users expect. This user might want the Back button to go back to the Search page, but other users might expect that clicking Back would return them to the Add/Update page. So if another user tries to update something, clicks save, and then "woops, i forgot something on the update", they'll click back, and now they're at search results, instead of the expected Update page.
|
Prevent a PostBack from showing up in the History
|
I have a page where I submit some data, and return to the original form with a "Save Successful" message. However, the user would like the ability to return to the previous page they were at (which contains search results) by clicking the browser's "Back" button. However, due to the postback, when they click the "Back" button they do not go to the previous page ,they simply go to the same page (but at its previous state). I read that enabling SmartNavigation will take care of this issue (postbacks appearing in the history) however, it has been deprecated. What's the "new" best practice?
*Edit - I added a ScriptManager control, and wrapped the buttons in an UpdatePanel, however now I'm receiving the following error:
Type 'System.Web.UI.UpdatePanel' does not have a public property named 'Button'
Am I missing a reference?
*Disregard the above edit, I simply forgot to add the < ContentTemplate > section to the UpdatePanel :P
|
[
"If you put your \"Save\" button in an UpdatePanel, the postback will not show in the users history.\n",
"I would avoid if possible. A better solution would be to have a button that just returns them to their search results on the \"Save Successful\" screen.\nThe problem with the ajaxy saving and such is that you violate the \"Back\" rules that users expect. This user might want the Back button to go back to the Search page, but other users might expect that clicking Back would return them to the Add/Update page. So if another user tries to update something, clicks save, and then \"woops, i forgot something on the update\", they'll click back, and now they're at search results, instead of the expected Update page.\n"
] |
[
4,
0
] |
[] |
[] |
[
".net_2.0",
"asp.net",
"vb.net"
] |
stackoverflow_0000102657_.net_2.0_asp.net_vb.net.txt
|
Q:
Number of nodes meeting a conditional based on attributes
Below is part of the XML which I am processing with PHP's XSLTProcessor:
<result>
<uf x="20" y="0"/>
<uf x="22" y="22"/>
<uf x="4" y="3"/>
<uf x="15" y="15"/>
</result>
I need to know how many "uf" nodes exist where x == y.
In the above example, that would be 2.
I've tried looping and incrementing a counter variable, but I can't redefine variables.
I've tried lots of combinations of xsl:number, with count/from, but couldn't get the XPath expression right.
Thanks!
A:
<xsl:value-of select="count(/result/uf[@y=@x])" />
A:
count(/result/uf[@x = @y])
|
Number of nodes meeting a conditional based on attributes
|
Below is part of the XML which I am processing with PHP's XSLTProcessor:
<result>
<uf x="20" y="0"/>
<uf x="22" y="22"/>
<uf x="4" y="3"/>
<uf x="15" y="15"/>
</result>
I need to know how many "uf" nodes exist where x == y.
In the above example, that would be 2.
I've tried looping and incrementing a counter variable, but I can't redefine variables.
I've tried lots of combinations of xsl:number, with count/from, but couldn't get the XPath expression right.
Thanks!
|
[
"<xsl:value-of select=\"count(/result/uf[@y=@x])\" />\n\n",
"count('/result/uf[@x = @y]')\n\n"
] |
[
5,
1
] |
[] |
[] |
[
"conditional_statements",
"loops",
"xml",
"xpath",
"xslt"
] |
stackoverflow_0000102606_conditional_statements_loops_xml_xpath_xslt.txt
|
Q:
ASP.NET Control/Page Library Question
I'm working on a drop-in assembly that has predefined pages and usable controls. I am having no difficulties with creating server controls, but I'm wondering what the "best practices" are for dealing with pages in an assembly. Can you compile a page into an assembly and release it as just a dll? How would this be accessed from the client browser's perspective, as far as the address they would type or be directed to with a link? As an example, I have a simple login page with the standard username and password text boxes, a log in button, a "remember me" checkbox, and an "I can't remember my username and/or password" hyperlink. Can I access that page like a WebResource, such as "http://www.site.name/webresource.axd?related_resource_id_codes"?
A:
Your best bet if you want to be able to code it and treat it like a real page is to implement a VirtualPathProvider. Using a virtualpathprovider would allow you to embed the actual aspx as a resource (or put it in a database, whatever) and serve it from there, and still use the asp.net page compilation engine.
This would let you still use the visual studio design time tools easily, and prevent you from having to do vast amounts of build customization to precompile the pages. You can see here as well
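A bare-bones sketch of what that can look like (the "~/MyLib/" prefix and resource naming are invented, and caching/error handling are omitted):
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class EmbeddedPageProvider : VirtualPathProvider
{
    private static bool IsEmbedded(string virtualPath)
    {
        // assumption: anything under ~/MyLib/ is served from this assembly
        return VirtualPathUtility.ToAppRelative(virtualPath)
            .StartsWith("~/MyLib/", StringComparison.OrdinalIgnoreCase);
    }

    public override bool FileExists(string virtualPath)
    {
        return IsEmbedded(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsEmbedded(virtualPath)
            ? new EmbeddedPageFile(virtualPath)
            : Previous.GetFile(virtualPath);
    }
}

public class EmbeddedPageFile : VirtualFile
{
    public EmbeddedPageFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        // map e.g. "~/MyLib/Login.aspx" to an embedded resource like "MyLib.Pages.Login.aspx"
        string resourceName = "MyLib.Pages." + Path.GetFileName(VirtualPath);
        return typeof(EmbeddedPageFile).Assembly.GetManifestResourceStream(resourceName);
    }
}

// registered once at startup, e.g. in Global.asax Application_Start:
// HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedPageProvider());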
If you don't want to do that, you can try using the aspnet_compiler tool to precompile the aspx and such pages into a dll. This will require some build customization, and tricks to allow serving the pages from the dll.
A:
You can add an httpHandler element to web.config pointing to your page. Something like:
<httpHandlers>
<add verb="*" path="login.aspx" type="MyPages.LoginPage, MyPages" />
</httpHandlers>
|
ASP.NET Control/Page Library Question
|
I'm working on a drop-in assembly that has predefined pages and usable controls. I am having no difficulties with creating server controls, but I'm wondering what the "best practices" are for dealing with pages in an assembly. Can you compile a page into an assembly and release it as just a dll? How would this be accessed from the client browser's perspective, as far as the address they would type or be directed to with a link? As an example, I have a simple login page with the standard username and password text boxes, a log in button, a "remember me" checkbox, and an "I can't remember my username and/or password" hyperlink. Can I access that page like a web resource, such as "http://www.site.name/webresource.axd?related_resource_id_codes"?
|
[
"Your best bet if you want to be able to code it and treat it like a real page is to implement a VirtualPathProvider. Using a virtualpathprovider would allow you to embed the actual aspx as a resource (or put it in a database, whatever) and serve it from there, and still use the asp.net page compilation engine. \nThis would let you still use the visual studio design time tools easily, and prevent you from having to do vast amounts of build customization to precompile the pages. You can see here as well\nIf you don't want to do that, you can try using the aspnet_compiler tool to precompile the aspx and such pages into a dll. This will require some build customization, and tricks to allow serving the pages from the dll.\n",
"You can add an httpHandler element to web.config pointing to your page. Something like:\n<httpHandlers>\n <add verb=\"*\" path=\"login.aspx\" type=\"MyPages.LoginPage, MyPages\" />\n</httpHandlers>\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"asp.net"
] |
stackoverflow_0000102587_asp.net.txt
|
Q:
Is there any way to define a constant value to Java at compile time
When I used to write libraries in C/C++ I got into the habit of having a method to return the compile date/time. This was always compiled into the library, so it would differentiate builds of the library. I got this by returning a #define in the code:
C++:
#ifdef _BuildDateTime_
char* SomeClass::getBuildDateTime() {
return _BuildDateTime_;
}
#else
char* SomeClass::getBuildDateTime() {
return "Undefined";
}
#endif
Then on the compile I had a '-D_BuildDateTime_=Date' in the build script.
Is there any way to achieve this or something similar in Java without needing to remember to edit any files manually or distribute any separate files?
One suggestion I got from a co-worker was to get the ant file to create a file on the classpath and to package that into the JAR and have it read by the method.
Something like (assuming the file created was called 'DateTime.dat'):
// I know Exceptions and proper open/closing
// of the file are not done. This is just
// to explain the point!
String getBuildDateTime() {
return new BufferedReader(getClass()
.getResourceAsStream("DateTime.dat")).readLine();
}
To my mind that's a hack and could be circumvented/broken by someone having a similarly named file outside the JAR, but on the classpath.
Anyway, my question is whether there is any way to inject a constant into a class at compile time
EDIT
The reason I consider using an externally generated file in the JAR a hack is that this is a library and will be embedded in client apps. These client apps may define their own classloaders, meaning I can't rely on the standard JVM class loading rules.
My personal preference would be to go with using the date from the JAR file as suggested by serg10.
A:
I would favour the standards based approach. Put your version information (along with other useful publisher stuff such as build number, subversion revision number, author, company details, etc) in the jar's Manifest File.
This is a well documented and understood Java specification. Strong tool support exists for creating manifest files (a core Ant task for example, or the maven jar plugin). These can help with setting some of the attributes automatically - I have maven configured to put the jar's maven version number, Subversion revision and timestamp into the manifest for me at build time.
You can read the contents of the manifest at runtime with standard java api calls - something like:
import java.util.jar.*;
...
JarFile myJar = new JarFile("nameOfJar.jar"); // various constructors available
Manifest manifest = myJar.getManifest();
Map<String,Attributes> manifestContents = manifest.getAttributes();
To me, that feels like a more Java standard approach, so will probably prove more easy for subsequent code maintainers to follow.
A:
I remember seeing something similar in an open source project:
class Version... {
public static String tstamp() {
return "@BUILDTIME@";
}
}
in a template file. With Ant's filtering copy you can give this macro a value:
<copy src="templatefile" dst="Version.java" filtering="true">
<filter token="BUILDTIME" value="${build.tstamp}" />
</copy>
use this to create a Version.java source file in your build process, before the compilation step.
A:
AFAIK there is not a way to do this with javac. This can easily be done with Ant -- I would create a first class object called BuildTimestamp.java and generate that file at compile time via an Ant target.
Here's an Ant type that will be helpful.
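For illustration, a minimal sketch of what such an Ant-generated class might look like (the class name, constant name, and placeholder timestamp are assumptions for the example, not part of the original answer; the Ant target would write the real value before compilation):
// BuildTimestamp.java -- regenerated by the build; never edited by hand
public final class BuildTimestamp {

    // The Ant target substitutes the actual build timestamp here
    public static final String BUILD_TIMESTAMP = "2008-09-19 12:00:00";

    private BuildTimestamp() {
        // no instances
    }

    public static String value() {
        return BUILD_TIMESTAMP;
    }
}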
A:
Unless you want to run your Java source through a C/C++ Preprocessor (which is a BIG NO-NO), use the jar method. There are other ways to get the correct resources out of a jar to make sure someone didn't put a duplicate resource on the classpath. You could also consider using the Jar manifest for this. My project does exactly what you're trying to do (with build dates, revisions, author, etc) using the manifest.
You'll want to use this:
Enumeration<URL> resources = Thread.currentThread().getContextClassLoader().getResources("META-INF/MANIFEST.MF");
This will get you ALL of the manifests on the classpath. You can figure out which jar they came from by parsing the URL.
A:
Personally I'd go for a separate properties file in your jar that you'd load at runtime... The classloader has a defined order for searching for files - I can't remember how it works exactly off hand, but I don't think another file with the same name somewhere on the classpath would be likely to cause issues.
But another way you could do it would be to use Ant to copy your .java files into a different directory before compiling them, filtering in String constants as appropriate. You could use something like:
public String getBuildDateTime() {
return "@BUILD_DATE_TIME@";
}
and write a filter in your Ant file to replace that with a build property.
A:
Perhaps a more Java-style way of indicating your library's version would be to add a version number to the JAR's manifest, as described in the manifest documentation.
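As a rough sketch of how that plays out at runtime (assuming the manifest carries an Implementation-Version attribute for the library's package), the value can be read back with java.lang.Package:
public class LibraryVersion {
    public static String get() {
        // Returns null if the attribute is missing or the class wasn't loaded from a jar
        String version = LibraryVersion.class.getPackage().getImplementationVersion();
        return version != null ? version : "Undefined";
    }
}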
A:
One suggestion I got from a co-worker
was to get the ant file to create a
file on the classpath and to package
that into the JAR and have it read by
the method. ... To my mind that's a
hack and could be circumvented/broken
by someone having a similarly named
file outside the JAR, but on the
classpath.
I'm not sure that getting Ant to generate a file is a terribly egregious hack, if it's a hack at all. Why not generate a properties file and use java.util.Properties to handle it?
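A minimal sketch of that idea, assuming the Ant build drops a build.properties resource with a build.timestamp key into the jar (both names are made up for the example):
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class BuildInfo {
    public static String getBuildDateTime() {
        Properties props = new Properties();
        // Load the resource relative to this class so the library's own classloader is used
        InputStream in = BuildInfo.class.getResourceAsStream("/build.properties");
        if (in == null) {
            return "Undefined";
        }
        try {
            props.load(in);
        } catch (IOException e) {
            return "Undefined";
        } finally {
            try { in.close(); } catch (IOException ignored) { }
        }
        return props.getProperty("build.timestamp", "Undefined");
    }
}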
|
Is there any way to define a constant value to Java at compile time
|
When I used to write libraries in C/C++ I got into the habit of having a method to return the compile date/time. This was always compiled into the library, so it would differentiate builds of the library. I got this by returning a #define in the code:
C++:
#ifdef _BuildDateTime_
char* SomeClass::getBuildDateTime() {
return _BuildDateTime_;
}
#else
char* SomeClass::getBuildDateTime() {
return "Undefined";
}
#endif
Then on the compile I had a '-D_BuildDateTime_=Date' in the build script.
Is there any way to achieve this or something similar in Java without needing to remember to edit any files manually or distribute any separate files?
One suggestion I got from a co-worker was to get the ant file to create a file on the classpath and to package that into the JAR and have it read by the method.
Something like (assuming the file created was called 'DateTime.dat'):
// I know Exceptions and proper open/closing
// of the file are not done. This is just
// to explain the point!
String getBuildDateTime() {
return new BufferedReader(getClass()
.getResourceAsStream("DateTime.dat")).readLine();
}
To my mind that's a hack and could be circumvented/broken by someone having a similarly named file outside the JAR, but on the classpath.
Anyway, my question is whether there is any way to inject a constant into a class at compile time
EDIT
The reason I consider using an externally generated file in the JAR a hack is that this is a library and will be embedded in client apps. These client apps may define their own classloaders, meaning I can't rely on the standard JVM class loading rules.
My personal preference would be to go with using the date from the JAR file as suggested by serg10.
|
[
"I would favour the standards based approach. Put your version information (along with other useful publisher stuff such as build number, subversion revision number, author, company details, etc) in the jar's Manifest File. \nThis is a well documented and understood Java specification. Strong tool support exists for creating manifest files (a core Ant task for example, or the maven jar plugin). These can help with setting some of the attributes automatically - I have maven configured to put the jar's maven version number, Subversion revision and timestamp into the manifest for me at build time.\nYou can read the contents of the manifest at runtime with standard java api calls - something like:\nimport java.util.jar.*;\n\n...\n\nJarFile myJar = new JarFile(\"nameOfJar.jar\"); // various constructors available\nManifest manifest = myJar.getManifest();\nMap<String,Attributes> manifestContents = manifest.getAttributes();\n\nTo me, that feels like a more Java standard approach, so will probably prove more easy for subsequent code maintainers to follow.\n",
"I remember seeing something similar in an open source project:\nclass Version... {\n public static String tstamp() {\n return \"@BUILDTIME@\";\n }\n}\n\nin a template file. With Ant's filtering copy you can give this macro a value:\n<copy src=\"templatefile\" dst=\"Version.java\" filtering=\"true\">\n <filter token=\"BUILDTIME\" value=\"${build.tstamp}\" />\n</copy>\n\nuse this to create a Version.java source file in your build process, before the compilation step.\n",
"AFAIK there is not a way to do this with javac. This can easily be done with Ant -- I would create a first class object called BuildTimestamp.java and generate that file at compile time via an Ant target. \nHere's an Ant type that will be helpful.\n",
"Unless you want to run your Java source through a C/C++ Preprocessor (which is a BIG NO-NO), use the jar method. There are other ways to get the correct resources out of a jar to make sure someone didn't put a duplicate resource on the classpath. You could also consider using the Jar manifest for this. My project does exactly what you're trying to do (with build dates, revisions, author, etc) using the manifest.\nYou'll want to use this:\nEnumeration<URL> resources = Thread.currentThread().getContextClassLoader().getResources(\"META-INF/MANIFEST.MF\");\n\nThis will get you ALL of the manifests on the classpath. You can figure out which jar they can from by parsing the URL.\n",
"Personally I'd go for a separate properties file in your jar that you'd load at runtime... The classloader has a defined order for searching for files - I can't remember how it works exactly off hand, but I don't think another file with the same name somewhere on the classpath would be likely to cause issues.\nBut another way you could do it would be to use Ant to copy your .java files into a different directory before compiling them, filtering in String constants as appropriate. You could use something like:\npublic String getBuildDateTime() {\n return \"@BUILD_DATE_TIME@\";\n}\n\nand write a filter in your Ant file to replace that with a build property.\n",
"Perhaps a more Java-style way of indicating your library's version would be to add a version number to the JAR's manifest, as described in the manifest documentation.\n",
"\nOne suggestion I got from a co-worker\n was to get the ant file to create a\n file on the classpath and to package\n that into the JAR and have it read by\n the method. ... To my mind that's a\n hack and could be circumvented/broken\n by someone having a similarly named\n file outside the JAR, but on the\n classpath.\n\nI'm not sure that getting Ant to generate a file is a terribly egregious hack, if it's a hack at all. Why not generate a properties file and use java.util.Properties to handle it?\n"
] |
[
16,
11,
2,
1,
1,
1,
0
] |
[] |
[] |
[
"c",
"c++",
"java"
] |
stackoverflow_0000101267_c_c++_java.txt
|
Q:
How would you get the Sql Command objects for a Given TableAdaptor and SqlDataAdaptor in C# (.NET 2.0)
I am creating a generic error handling / logging class for our applications. The goal is to log the exception info, info about the class and function (as well as parameters) and if relevant, the information about the System.Data.SqlClient.SqlCommand object.
I would like to be able to handle passing in SqlCommands, TableAdaptors, and SqlDataAdaptors.
I am new to using reflection and I know that it is possible to do this, I am just not sure how to go about it. Please advise.
A:
Is this what you're talking about?
SqlDataAdapter da = new SqlDataAdapter();
var cmd1 = ((IDbDataAdapter)da).DeleteCommand;
var cmd2 = ((IDbDataAdapter)da).UpdateCommand;
var cmd3 = ((IDbDataAdapter)da).SelectCommand;
var cmd4 = ((IDbDataAdapter)da).InsertCommand;
The SqlDataAdapter implements IDbDataAdapter, which has getters/setters for all the CRUD commands. The SqlDataAdapter implements these explicitly, so they don't show up in the signature of the class unless you first cast it to the interface. No reflection necessary.
|
How would you get the Sql Command objects for a Given TableAdaptor and SqlDataAdaptor in C# (.NET 2.0)
|
I am creating a generic error handling / logging class for our applications. The goal is to log the exception info, info about the class and function (as well as parameters) and if relevant, the information about the System.Data.SqlClient.SqlCommand object.
I would like to be able to handle passing in SqlCommands, TableAdaptors, and SqlDataAdaptors.
I am new to using reflection and I know that it is possible to do this, I am just not sure how to go about it. Please advise.
|
[
"Is this what you're talking about?\nSqlDataAdapter da = new SqlDataAdapter();\nvar cmd1 = ((IDbDataAdapter)da).DeleteCommand;\nvar cmd2 = ((IDbDataAdapter)da).UpdateCommand;\nvar cmd3 = ((IDbDataAdapter)da).SelectCommand;\nvar cmd4 = ((IDbDataAdapter)da).InsertCommand;\n\nThe SqlDataAdapter implements IDbDataAdapter, which has getters/setters for all the CRUD commands. The SqlDataAdapter implements these explicitly, so they don't show up in the signature of the class unless you first cast it to the interface. No reflection necessary. \n"
] |
[
0
] |
[] |
[] |
[
"c#",
"error_handling",
"exception",
"reflection",
"sql"
] |
stackoverflow_0000102623_c#_error_handling_exception_reflection_sql.txt
|
Q:
When is it best to use the stack instead of the heap and vice versa?
In C++, when is it best to use the stack? When is it best to use the heap?
A:
Use the stack when your variable will not be used after the current function returns. Use the heap when the data in the variable is needed beyond the lifetime of the current function.
A:
As a rule of thumb, avoid creating huge objects on the stack.
Creating an object on the stack frees you from the burden of remembering to clean up (read: delete) the object. But creating too many objects on the stack will increase the chances of stack overflow.
If you use the heap for the object, you get as much memory as the OS can provide, much more than the stack, but then again you must make sure to free the memory when you are done. Also, creating too many objects too frequently on the heap will tend to fragment the memory, which in turn will affect the performance of your application.
A:
Use the stack when the memory being used is strictly limited to the scope in which you are creating it. This is useful to avoid memory leaks because you know exactly where you want to use the memory, and you know when you no longer need it, so the memory will be cleaned up for you.
int main()
{
if (...)
{
int i = 0;
}
// I know that i is no longer needed here, so declaring i in the above block
// limits the scope appropriately
}
The heap, however, is useful when your memory may be accessed outside of the scope of its creation and you do not wish to copy a stack variable. This can give you explicit control over how memory is allocated and deallocated.
Object* CreateObject();
int main()
{
Object* obj = CreateObject();
// I can continue to manipulate object and I decide when I'm done with it
// ..
// I'm done
delete obj;
// .. keep going if you wish
return 0;
}
Object* CreateObject()
{
Object* returnValue = new Object();
// ... do a bunch of stuff to returnValue
return returnValue;
// Note the object created via new here doesn't go away, its passed back using
// a pointer
}
Obviously a common problem here is that you may forget to delete your object. This is called a memory leak. These problems are more prevalent as your program becomes less and less trivial where "ownership" (or who exactly is responsible for deleting things) becomes more difficult to define.
Common solutions in more managed languages (C#, Java) are to implement garbage collection so you don't have to think about deleting things. However, this means there's something in the background that runs aperiodically to check on your heap data. In a non-trivial program, this can become rather inefficient as a "garbage collection" thread pops up and chugs away, looking for data that should be deleted, while the rest of your program is blocked from executing.
In C++, the most common, and best (in my opinion), solution to dealing with memory leaks is to use a smart pointer. The most common of these is boost::shared_ptr, which is reference counted.
So to recreate the example above
boost::shared_ptr<Object> CreateObject();
int main()
{
boost::shared_ptr<Object> obj = CreateObject();
// I can continue to manipulate object and I decide when I'm done with it
// ..
// I'm done, manually delete
obj.reset();
// .. keep going if you wish
// here, if you forget to delete obj, the shared_ptr's destructor will note
// that if no other shared_ptr's point to this memory
// it will automatically get deleted.
return 0;
}
boost::shared_ptr<Object> CreateObject()
{
boost::shared_ptr<Object> returnValue(new Object());
// ... do a bunch of stuff to returnValue
return returnValue;
// Note the object created via new here doesn't go away, its passed back to
// the receiving shared_ptr, shared_ptr knows that another reference exists
// to this memory, so it shouldn't delete the memory
}
A:
An exception to the rule mentioned above that you should generally use the stack for local variables that are not needed outside the scope of the function:
Recursive functions can exhaust the stack space if they allocate large local variables or if they are recursively invoked many times. If you have a recursive function that utilizes memory, it might be a good idea to use heap-based memory instead of stack-based memory.
A:
As a rule of thumb, use the stack whenever you can, i.e. when the variable is never needed outside of that scope.
It's faster, causes less fragmentation and avoids the other overheads associated with calling malloc or new. Allocating off of the stack is a couple of assembler operations; malloc or new is several hundred lines of code in an efficient implementation.
It's never best to use the heap... just unavoidable. :)
A:
This question is related (though not really a dupe) to What and where are the stack and heap, which was asked a couple days ago.
A:
Use the heap for only allocating space for objects at runtime. If you know the size at compile time, use the stack. Instead of returning heap-allocated objects from a function, pass a buffer into the function for it to write to. That way the buffer can be allocated where the function is called as an array or other stack-based structure.
The fewer malloc() statements you have, the fewer chances for memory leaks.
A:
The question is ill formed.
There are situations where you need the stack, others where you need the heap, others where you need the static storage, others where you need the const memory data, others where you need the free store.
The stack is fast, because allocation is just an "increment" over the SP, and all "allocation" is performed at invocation time of the function you are in. Heap (or free store) allocation/deallocation is more time expensive and error prone.
|
When is it best to use the stack instead of the heap and vice versa?
|
In C++, when is it best to use the stack? When is it best to use the heap?
|
[
"Use the stack when your variable will not be used after the current function returns. Use the heap when the data in the variable is needed beyond the lifetime of the current function.\n",
"As a rule of thumb, avoid creating huge objects on the stack.\n\nCreating an object on the stack frees you from the burden of remembering to cleanup(read delete) the object. But creating too many objects on the stack will increase the chances of stack overflow.\nIf you use heap for the object, you get the as much memory the OS can provide, much larger than the stack, but then again you must make sure to free the memory when you are done. Also, creating too many objects too frequently in the heap will tend to fragment the memory, which in turn will affect the performance of your application.\n\n",
"Use the stack when the memory being used is strictly limited to the scope in which you are creating it. This is useful to avoid memory leaks because you know exactly where you want to use the memory, and you know when you no longer need it, so the memory will be cleaned up for you.\nint main()\n{ \n if (...)\n {\n int i = 0;\n }\n // I know that i is no longer needed here, so declaring i in the above block \n // limits the scope appropriately\n}\n\nThe heap, however, is useful when your memory may be accessed outside of the scope of its creation and you do not wish to copy a stack variable. This can give you explicit control over how memory is allocated and deallocated.\nObject* CreateObject();\n\nint main()\n{\n Object* obj = CreateObject();\n // I can continue to manipulate object and I decide when I'm done with it\n\n // ..\n // I'm done\n delete obj;\n // .. keep going if you wish\n return 0;\n}\n\nObject* CreateObject()\n{\n Object* returnValue = new Object();\n // ... do a bunch of stuff to returnValue\n return returnValue;\n // Note the object created via new here doesn't go away, its passed back using \n // a pointer\n}\n\nObviously a common problem here is that you may forget to delete your object. This is called a memory leak. These problems are more prevalent as your program becomes less and less trivial where \"ownership\" (or who exactly is responsible for deleting things) becomes more difficult to define.\nCommon solutions in more managed languages (C#, Java) are to implement garbage collection so you don't have to think about deleting things. However, this means there's something in the background that runs aperiodically to check on your heap data. In a non-trivial program, this can become rather inefficient as a \"garbage collection\" thread pops up and chugs away, looking for data that should be deleted, while the rest of your program is blocked from executing.\nIn C++, the most common, and best (in my opinion) solution to dealing with memory leaks is to use a smart pointer. The most common of these is boost::shared_ptr which is (reference counted)\nSo to recreate the example above\n boost::shared_ptr CreateObject();\nint main()\n{\n boost::shared_ptr<Object> obj = CreateObject();\n // I can continue to manipulate object and I decide when I'm done with it\n\n // ..\n // I'm done, manually delete\n obj.reset(NULL);\n // .. keep going if you wish\n // here, if you forget to delete obj, the shared_ptr's destructor will note\n // that if no other shared_ptr's point to this memory \n // it will automatically get deleted.\n return 0;\n}\n\nboost::shared_ptr<Object> CreateObject()\n{\n boost::shared_ptr<Object> returnValue(new Object());\n // ... do a bunch of stuff to returnValue\n return returnValue;\n // Note the object created via new here doesn't go away, its passed back to \n // the receiving shared_ptr, shared_ptr knows that another reference exists\n // to this memory, so it shouldn't delete the memory\n}\n\n",
"An exception to the rule mentioned above that you should generally use the stack for local variables that are not needed outside the scope of the function:\nRecursive functions can exhaust the stack space if they allocate large local variables or if they are recursively invoked many times. If you have a recursive function that utilizes memory, it might be a good idea to use heap-based memory instead of stack-based memory.\n",
"as a rule of thumb use the stack whenever you can. i.e. when the variable is never needed outside of that scope.\nits faster, causes less fragmentation and is going to avoid the other overheads associated with calling malloc or new. allocating off of the stack is a couple of assembler operations, malloc or new is several hundred lines of code in an efficient implementation.\nits never best to use the heap... just unavoidable. :)\n",
"This question is related (though not really a dupe) to What and where are the stack and heap, which was asked a couple days ago.\n",
"Use the heap for only allocating space for objects at runtime. If you know the size at compile time, use the stack. Instead of returning heap-allocated objects from a function, pass a buffer into the function for it to write to. That way the buffer can be allocated where the function is called as an array or other stack-based structure.\nThe fewer malloc() statements you have, the fewer chances for memory leaks. \n",
"The question is ill formed.\nThere are situations where you need the stack, others where you need the heap, others where you need the static storage, others where you need the const memory data, others where you need the free store.\nThe stack is fast, because allocation is just an \"increment\" over the SP, and all \"allocation\" is performed at invocation time of the function you are in. Heap (or free store) allocation/deallocation is more time expensive and error prone.\n"
] |
[
75,
37,
18,
9,
8,
4,
3,
0
] |
[] |
[] |
[
"c++"
] |
stackoverflow_0000102009_c++.txt
|
Q:
CSS Margin Collapsing
So essentially does margin collapsing occur when you don't set any margin or padding or border to a given div element?
A:
No. When you have two adjacent vertical margins, the greater of the two is used and the other is ignored.
So, for instance, if you have two block-display elements, A, followed by B beneath it, and A has a bottom-margin of 3em, while B has a top-margin of 2em, then the distance between them will be 3em.
If you set a border or padding, this prevents the collapsing from occurring. In the above example, the distance between the two elements will then be 5em.
If you don't set any margins, then there won't be any margins to collapse. It has nothing whatsoever to do with the element type in use - it is applicable to all element types, not just <div> elements.
Read the CSS 2.1 specification for more details.
A:
"the expression collapsing margins means that adjoining margins (no non-empty content, padding or border areas or clearance separate them) of two or more boxes (which may be next to one another or nested) combine to form a single margin."
Source: Box Model - 8.3.1 Collapsing margins
|
CSS Margin Collapsing
|
So essentially does margin collapsing occur when you don't set any margin or padding or border to a given div element?
|
[
"No. When you have two adjacent vertical margins, the greater of the two is used and the other is ignored.\nSo, for instance, if you have two block-display elements, A, followed by B beneath it, and A has a bottom-margin of 3em, while B has a top-margin of 2em, then the distance between them will be 3em.\nIf you set a border or padding, this prevents the collapsing from occurring. In the above example, the distance between the two elements will then be 5em.\nIf you don't set any margins, then there won't be any margins to collapse. It has nothing whatsoever to do with the element type in use - it is applicable to all element types, not just <div> elements.\nRead the CSS 2.1 specification for more details.\n",
"\n\"the expression collapsing margins means that adjoining margins (no non-empty content, padding or border areas or clearance separate them) of two or more boxes (which may be next to one another or nested) combine to form a single margin.\"\n\nSource: Box Model - 8.3.1 Collapsing margins\n"
] |
[
75,
4
] |
[] |
[] |
[
"css",
"margin"
] |
stackoverflow_0000102640_css_margin.txt
|
Q:
SQL strip text and convert to integer
In my database (SQL 2005) I have a field which holds a comment but in the comment I have an id and I would like to strip out just the id, and IF possible convert it to an int:
activation successful of id 1010101
The line above is the exact structure of the data in the db field.
And no I don't want to do this in the code of the application, I actually don't want to touch it, just in case you were wondering ;-)
A:
This should do the trick:
SELECT SUBSTRING(column, PATINDEX('%[0-9]%', column), 999)
FROM table
Based on your sample data, this assumes that there is only one occurrence of an integer in the string and that it is at the end.
A:
-- Test table, you will probably use some query
DECLARE @testTable TABLE(comment VARCHAR(255))
INSERT INTO @testTable(comment)
VALUES ('activation successful of id 1010101')
-- Use Charindex to find "id " then isolate the numeric part
-- Finally check to make sure the number is numeric before converting
SELECT CASE WHEN ISNUMERIC(JUSTNUMBER)=1 THEN CAST(JUSTNUMBER AS INTEGER) ELSE -1 END
FROM (
select right(comment, len(comment) - charindex('id ', comment)-2) as justnumber
from @testtable) TT
I would also add that this approach is more set based and hence more efficient for a bunch of data values. But it is super easy to do it just for one value as a variable. Instead of using the column comment you can use a variable like @chvComment.
A:
I don't have a means to test it at the moment, but:
select convert(int, substring(fieldName, len('activation successful of id '), len(fieldName) - len('activation successful of id '))) from tableName
A:
If the comment string is EXACTLY like that you can use replace.
select replace(comment_col, 'activation successful of id ', '') as id from ....
It almost certainly won't be though - what about unsuccessful Activations?
You might end up with nested replace statements
select replace(replace(comment_col, 'activation not successful of id ', ''), 'activation successful of id ', '') as id from ....
[sorry can't tell from this edit screen if that's entirely valid sql]
That starts to get messy; you might consider creating a function and putting the replace statements in that.
If this is a one-off job, it won't really matter. You could also use a regex, but that's quite slow (and in any case means you now have 2 problems).
A:
Would you be open to writing a bit of code? One option, create a CLR User Defined function, then use Regex. You can find more details here. This will handle complex strings.
If your above line is always formatted as 'activation successful of id #######', with your number at the end of the field, then:
declare @myColumn varchar(100)
set @myColumn = 'activation successful of id 1010102'
SELECT
@myColumn as [OriginalColumn]
, CONVERT(int, REVERSE(LEFT(REVERSE(@myColumn), CHARINDEX(' ', REVERSE(@myColumn))))) as [DesiredColumn]
Will give you:
OriginalColumn DesiredColumn
---------------------------------------- -------------
activation successful of id 1010102 1010102
(1 row(s) affected)
A:
CAST(REVERSE(LEFT(REVERSE(@Test),CHARINDEX(' ',REVERSE(@Test))-1)) AS INTEGER)
A:
select cast(right(column_name,charindex(' ',reverse(column_name))) as int)
|
SQL strip text and convert to integer
|
In my database (SQL 2005) I have a field which holds a comment but in the comment I have an id and I would like to strip out just the id, and IF possible convert it to an int:
activation successful of id 1010101
The line above is the exact structure of the data in the db field.
And no I don't want to do this in the code of the application, I actually don't want to touch it, just in case you were wondering ;-)
|
[
"This should do the trick:\nSELECT SUBSTRING(column, PATINDEX('%[0-9]%', column), 999)\nFROM table\n\nBased on your sample data, this that there is only one occurence of an integer in the string and that it is at the end.\n",
"-- Test table, you will probably use some query \nDECLARE @testTable TABLE(comment VARCHAR(255)) \nINSERT INTO @testTable(comment) \n VALUES ('activation successful of id 1010101')\n\n-- Use Charindex to find \"id \" then isolate the numeric part \n-- Finally check to make sure the number is numeric before converting \nSELECT CASE WHEN ISNUMERIC(JUSTNUMBER)=1 THEN CAST(JUSTNUMBER AS INTEGER) ELSE -1 END \nFROM ( \n select right(comment, len(comment) - charindex('id ', comment)-2) as justnumber \n from @testtable) TT\n\nI would also add that this approach is more set based and hence more efficient for a bunch of data values. But it is super easy to do it just for one value as a variable. Instead of using the column comment you can use a variable like @chvComment.\n",
"I don't have a means to test it at the moment, but:\nselect convert(int, substring(fieldName, len('activation successful of id '), len(fieldName) - len('activation successful of id '))) from tableName\n\n",
"If the comment string is EXACTLY like that you can use replace.\nselect replace(comment_col, 'activation successful of id ', '') as id from ....\n\nIt almost certainly won't be though - what about unsuccessful Activations?\nYou might end up with nested replace statements\nselect replace(replace(comment_col, 'activation not successful of id ', ''), 'activation successful of id ', '') as id from ....\n\n[sorry can't tell from this edit screen if that's entirely valid sql]\nThat starts to get messy; you might consider creating a function and putting the replace statements in that.\nIf this is a one off job, it won't really matter. You could also use a regex, but that's quite slow (and in any case mean you now have 2 problems).\n",
"Would you be open to writing a bit of code? One option, create a CLR User Defined function, then use Regex. You can find more details here. This will handle complex strings.\nIf your above line is always formatted as 'activation successful of id #######', with your number at the end of the field, then:\ndeclare @myColumn varchar(100)\nset @myColumn = 'activation successful of id 1010102'\n\n\nSELECT\n @myColumn as [OriginalColumn]\n, CONVERT(int, REVERSE(LEFT(REVERSE(@myColumn), CHARINDEX(' ', REVERSE(@myColumn))))) as [DesiredColumn]\n\nWill give you:\nOriginalColumn DesiredColumn\n---------------------------------------- -------------\nactivation successful of id 1010102 1010102\n\n(1 row(s) affected)\n\n",
"CAST(REVERSE(LEFT(REVERSE(@Test),CHARINDEX(' ',REVERSE(@Test))-1)) AS INTEGER)\n\n",
"select cast(right(column_name,charindex(' ',reverse(column_name))) as int)\n\n"
] |
[
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"sql",
"strip",
"text"
] |
stackoverflow_0000102591_sql_strip_text.txt
|
Q:
Why does std::stack use std::deque by default?
Since the only operations required for a container to be used in a stack are:
back()
push_back()
pop_back()
Why is the default container for it a deque instead of a vector?
Don't deque reallocations give a buffer of elements before front() so that push_front() is an efficient operation? Aren't these elements wasted since they will never ever be used in the context of a stack?
If there is no overhead for using a deque this way instead of a vector, why is the default for priority_queue a vector not a deque also? (priority_queue requires front(), push_back(), and pop_back() - essentially the same as for stack)
Updated based on the Answers below:
It appears that the way deque is usually implemented is a variable size array of fixed size arrays. This makes growing faster than a vector (which requires reallocation and copying), so for something like a stack which is all about adding and removing elements, deque is likely a better choice.
priority_queue requires indexing heavily, as every removal and insertion requires you to run pop_heap() or push_heap(). This probably makes vector a better choice there since adding an element is still amortized constant anyways.
A:
As the container grows, a reallocation for a vector requires copying all the elements into the new block of memory. Growing a deque allocates a new block and links it to the list of blocks - no copies are required.
Of course you can specify that a different backing container be used if you like. So if you have a stack that you know is not going to grow much, tell it to use a vector instead of a deque if that's your preference.
A:
See Herb Sutter's Guru of the Week 54 for the relative merits of vector and deque where either would do.
I imagine the inconsistency between priority_queue and queue is simply that different people implemented them.
|
Why does std::stack use std::deque by default?
|
Since the only operations required for a container to be used in a stack are:
back()
push_back()
pop_back()
Why is the default container for it a deque instead of a vector?
Don't deque reallocations give a buffer of elements before front() so that push_front() is an efficient operation? Aren't these elements wasted since they will never ever be used in the context of a stack?
If there is no overhead for using a deque this way instead of a vector, why is the default for priority_queue a vector not a deque also? (priority_queue requires front(), push_back(), and pop_back() - essentially the same as for stack)
Updated based on the Answers below:
It appears that the way deque is usually implemented is a variable size array of fixed size arrays. This makes growing faster than a vector (which requires reallocation and copying), so for something like a stack which is all about adding and removing elements, deque is likely a better choice.
priority_queue requires indexing heavily, as every removal and insertion requires you to run pop_heap() or push_heap(). This probably makes vector a better choice there since adding an element is still amortized constant anyways.
|
[
"As the container grows, a reallocation for a vector requires copying all the elements into the new block of memory. Growing a deque allocates a new block and links it to the list of blocks - no copies are required.\nOf course you can specify that a different backing container be used if you like. So if you have a stack that you know is not going to grow much, tell it to use a vector instead of a deque if that's your preference.\n",
"See Herb Sutter's Guru of the Week 54 for the relative merits of vector and deque where either would do.\nI imagine the inconsistency between priority_queue and queue is simply that different people implemented them.\n"
] |
[
83,
12
] |
[] |
[] |
[
"c++",
"containers",
"stl"
] |
stackoverflow_0000102459_c++_containers_stl.txt
|
Q:
How do I use ASP.NET with Visual Studio 2008
I haven't used Visual Studio since VB 3 and am trying to give it a shot with ASP.NET. It seems that it should be able to connect to a website (via some sort of ftp like protocol I figure) and allow to edit without having to manually upload/download the files. Is this the way it is supposed to work or am I mis-understanding? I have tried using 'create new website' and 'open website' using my testing domain (hosted by godaddy, wondering if that may be the issue as well), each time it gives me errors. I'm not sure if I'm doing something wrong or trying to do something it wasn't meant to.
A:
You really don't want to be working directly on a live web site, do you? That's just crazy. One little mistake and you've hosed the site.
Visual Studio now has its own built-in web server. You use that for testing. If you really don't want to use that, you can put IIS on your local machine or set up a Dev/QA server somewhere. In that case, you'd edit it via a file share.
You should be using some kind of source control. Even for a single developer it's very important. When finished with a programming session, you check your updates back into source control.
Finally, only after the site's gone through a suitable QA process, the production server is updated from source control, not from within visual studio.
A:
I would develop your website locally and FTP it to your GoDaddy site afterwards, or use the Publish Website feature in VS.
|
How do I use ASP.NET with Visual Studio 2008
|
I haven't used Visual Studio since VB 3 and am trying to give it a shot with ASP.NET. It seems that it should be able to connect to a website (via some sort of ftp like protocol I figure) and allow to edit without having to manually upload/download the files. Is this the way it is supposed to work or am I mis-understanding? I have tried using 'create new website' and 'open website' using my testing domain (hosted by godaddy, wondering if that may be the issue as well), each time it gives me errors. I'm not sure if I'm doing something wrong or trying to do something it wasn't meant to.
|
[
"You really don't want to be working directly on a live web site, do you? That's just crazy. One little mistake and you've hosed the site.\nVisual Studio now has it's own built in web server. You use that for testing. If you really don't want to use that you can put IIS on your local machine or set up a Dev/QA server somewhere. In that case, you'd edit it via a file share.\nYou should be using some kind of source control. Even for a single developer it's very important. When finished with a programming session, you check your updates back into source control. \nFinally, only after the site's gone through a suitable QA process, the production server is updated from source control, not from within visual studio.\n",
"I would develop your website locally and ftp it to your godaddy website after or use the publish website feature in VS\n"
] |
[
4,
1
] |
[] |
[] |
[
"asp.net",
"visual_studio"
] |
stackoverflow_0000102833_asp.net_visual_studio.txt
|
Q:
Protect embedded password
I have a properties file in java, in which I store all information of my app, like logo image filename, database name, database user and database password.
I can store the password encrypted on the properties file. But, the key or passphrase can be read out of the jar using a decompiler.
Is there a way to store the db pass in a properties file securely?
A:
There are multiple ways to manage this. If you can figure out a way to have a user provide a password for a keystore when the application starts up the most appropriate way would be to encrypt all the values using a key, and store this key in the keystore. The command line interface to the keystore is by using keytool. However JSE has APIs to programmatically access the keystore as well.
If you do not have an ability to have a user manually provide a password to the keystore on startup (say for a web application), one way to do it is to write an exceptionally complex obfuscation routine which can obfuscate the key and store it in a property file as well. Important things to remember is that the obfuscation and deobfuscation logic should be multi layered (could involve scrambling, encoding, introduction of spurious characters etc. etc.) and should itself have at least one key which could be hidden away in other classes in the application using non intuitive names. This is not a fully safe mechanism since someone with a decompiler and a fair amount of time and intelligence can still work around it but is the only one I know of which does not require you to break into native (ie. non easily decompilable) code.
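A minimal sketch of the keystore route, assuming a JCEKS keystore file holding an AES secret key under a known alias, with the database password stored AES-encrypted in the properties file (the file name, alias, and algorithm are illustrative assumptions, not prescribed by this answer):
import java.io.FileInputStream;
import java.security.Key;
import java.security.KeyStore;
import javax.crypto.Cipher;

public final class PasswordDecryptor {
    // Decrypts the password bytes read (and e.g. Base64-decoded) from the properties file
    public static byte[] decrypt(byte[] encryptedPassword, char[] storePassword) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS"); // JCEKS keystores can hold secret keys
        FileInputStream in = new FileInputStream("app.keystore");
        try {
            ks.load(in, storePassword);
        } finally {
            in.close();
        }
        Key key = ks.getKey("db-password-key", storePassword);
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return cipher.doFinal(encryptedPassword);
    }
}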
A:
You store a SHA1 hash of the password in your properties file. Then when you validate a user's password, you hash their login attempt and make sure that the two hashes match.
This is the code that will hash some bytes for you. You can easily get bytes from a String using the getBytes() method.
/**
* Returns the hash value of the given chars
*
* Uses the default hash algorithm described above
*
* @param in
* the byte[] to hash
* @return a byte[] of hashed values
*/
public static byte[] getHashedBytes(byte[] in)
{
MessageDigest msg;
try
{
msg = MessageDigest.getInstance(hashingAlgorithmUsed);
}
catch (NoSuchAlgorithmException e)
{
throw new AssertionError("Someone chose to use a hashing algorithm that doesn't exist. Epic fail, go change it in the Util file. SHA(1) or MD5");
}
msg.update(in);
return msg.digest();
}
A:
No there is not. Even if you encrypt it, somebody will decompile the code that decrypts it.
A:
You could make a separate properties file (outside the jar) for passwords (either the direct DB password or a key passphrase) and not include that properties file with the distribution. Or you might be able to make the server only accept that login from a specific machine so that spoofing would be required.
A:
In addition to encrypting the passwords as described above put any passwords in separate properties file and on deployment try to give this file the most locked down permissions possible.
For example, if your Application Server runs on Linux/Unix as root then make the password properties file owned by root with 400/-r-------- permissions.
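A small sketch of loading such a locked-down external file at startup (the path below is just an example):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public final class DbConfig {
    public static Properties load() throws IOException {
        Properties props = new Properties();
        // File owned by root with 400 permissions, kept outside the distributed jar
        FileInputStream in = new FileInputStream("/etc/myapp/db.properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }
}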
A:
Couldn't you have the app contact a server over https and download the password, after authenticating in some way of course?
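As a rough sketch of that approach (the URL is hypothetical, and the authentication step is left as a comment since the answer doesn't specify one):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public final class RemoteSecret {
    public static String fetchPassword() throws Exception {
        URL url = new URL("https://config.example.com/db-password"); // hypothetical endpoint
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        // conn.setRequestProperty("Authorization", ...); // add whatever authentication you settle on
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        try {
            return reader.readLine(); // the server returns the password as a single line
        } finally {
            reader.close();
        }
    }
}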
|
Protect embedded password
|
I have a properties file in java, in which I store all information of my app, like logo image filename, database name, database user and database password.
I can store the password encrypted on the properties file. But, the key or passphrase can be read out of the jar using a decompiler.
Is there a way to store the db pass in a properties file securely?
|
[
"There are multiple ways to manage this. If you can figure out a way to have a user provide a password for a keystore when the application starts up the most appropriate way would be to encrypt all the values using a key, and store this key in the keystore. The command line interface to the keystore is by using keytool. However JSE has APIs to programmatically access the keystore as well.\nIf you do not have an ability to have a user manually provide a password to the keystore on startup (say for a web application), one way to do it is to write an exceptionally complex obfuscation routine which can obfuscate the key and store it in a property file as well. Important things to remember is that the obfuscation and deobfuscation logic should be multi layered (could involve scrambling, encoding, introduction of spurious characters etc. etc.) and should itself have at least one key which could be hidden away in other classes in the application using non intuitive names. This is not a fully safe mechanism since someone with a decompiler and a fair amount of time and intelligence can still work around it but is the only one I know of which does not require you to break into native (ie. non easily decompilable) code.\n",
"You store a SHA1 hash of the password in your properties file. Then when you validate a users password, you hash their login attempt and make sure that the two hashes match.\nThis is the code that will hash some bytes for you. You can easily ger bytes from a String using the getBytes() method.\n/**\n * Returns the hash value of the given chars\n * \n * Uses the default hash algorithm described above\n * \n * @param in\n * the byte[] to hash\n * @return a byte[] of hashed values\n */\n public static byte[] getHashedBytes(byte[] in)\n {\n MessageDigest msg;\n try\n {\n msg = MessageDigest.getInstance(hashingAlgorithmUsed);\n }\n catch (NoSuchAlgorithmException e)\n {\n throw new AssertionError(\"Someone chose to use a hashing algorithm that doesn't exist. Epic fail, go change it in the Util file. SHA(1) or MD5\");\n }\n msg.update(in);\n return msg.digest();\n }\n\n",
"No there is not. Even if you encrypt it, somebody will decompile the code that decrypts it.\n",
"You could make a separate properties file (outside the jar) for passwords (either direct DB password or or key passphrase) and not include that properties file with the distribution. Or you might be able to make the server only accept that login from a specific machine so that spoofing would be required.\n",
"In addition to encrypting the passwords as described above put any passwords in separate properties file and on deployment try to give this file the most locked down permissions possible.\nFor example, if your Application Server runs on Linux/Unix as root then make the password properties file owned by root with 400/-r-------- permissions. \n",
"Couldn't you have the app contact a server over https and download the password, after authenticating in some way of course?\n"
] |
[
4,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"copy_protection",
"embedded_resource",
"java",
"passwords",
"security"
] |
stackoverflow_0000102425_copy_protection_embedded_resource_java_passwords_security.txt
|
Q:
Best way to encapsulate complex Oracle PL/SQL cursor logic as a view?
I've written PL/SQL code to denormalize a table into a much-easier-to-query form. The code uses a temporary table to do some of its work, merging some rows from the original table together.
The logic is written as a pipelined table function, following the pattern from the linked article. The table function uses a PRAGMA AUTONOMOUS_TRANSACTION declaration to permit the temporary table manipulation, and also accepts a cursor input parameter to restrict the denormalization to certain ID values.
I then created a view to query the table function, passing in all possible ID values as a cursor (other uses of the function will be more restrictive).
My question: is this all really necessary? Have I completely missed a much more simple way of accomplishing the same thing?
Every time I touch PL/SQL I get the impression that I'm typing way too much.
Update: I'll add a sketch of the table I'm dealing with to give everyone an idea of the denormalization that I'm talking about. The table stores a history of employee jobs, each with an activation row, and (possibly) a termination row. It's possible for an employee to have multiple simultaneous jobs, as well as the same job over and over again in non-contiguous date ranges. For example:
| EMP_ID | JOB_ID | STATUS | EFF_DATE | other columns...
| 1 | 10 | A | 10-JAN-2008 |
| 2 | 11 | A | 13-JAN-2008 |
| 1 | 12 | A | 20-JAN-2008 |
| 2 | 11 | T | 01-FEB-2008 |
| 1 | 10 | T | 02-FEB-2008 |
| 2 | 11 | A | 20-FEB-2008 |
Querying that to figure out who is working when in what job is non-trivial. So, my denormalization function populates the temporary table with just the date ranges for each job, for any EMP_IDs passed in through the cursor. Passing in EMP_IDs 1 and 2 would produce the following:
| EMP_ID | JOB_ID | START_DATE | END_DATE |
| 1 | 10 | 10-JAN-2008 | 02-FEB-2008 |
| 2 | 11 | 13-JAN-2008 | 01-FEB-2008 |
| 1 | 12 | 20-JAN-2008 | |
| 2 | 11 | 20-FEB-2008 | |
(END_DATE allows NULLs for jobs that don't have a predetermined termination date.)
As you can imagine, this denormalized form is much, much easier to query, but creating it--so far as I can tell--requires a temporary table to store the intermediate results (e.g., job records for which the activation row has been found, but not the termination...yet). Using the pipelined table function to populate the temporary table and then return its rows is the only way I've figured out how to do it.
A:
I think a way to approach this is to use analytic functions...
I set up your test case using:
create table employee_job (
emp_id integer,
job_id integer,
status varchar2(1 char),
eff_date date
);
insert into employee_job values (1,10,'A',to_date('10-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'A',to_date('13-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (1,12,'A',to_date('20-JAN-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'T',to_date('01-FEB-2008','DD-MON-YYYY'));
insert into employee_job values (1,10,'T',to_date('02-FEB-2008','DD-MON-YYYY'));
insert into employee_job values (2,11,'A',to_date('20-FEB-2008','DD-MON-YYYY'));
commit;
I've used the lead function to get the next date and then wrapped it all as a sub-query just to get the "A" records and add the end date if there is one.
select
emp_id,
job_id,
eff_date start_date,
decode(next_status,'T',next_eff_date,null) end_date
from
(
select
emp_id,
job_id,
eff_date,
status,
lead(eff_date,1,null) over (partition by emp_id, job_id order by eff_date, status) next_eff_date,
lead(status,1,null) over (partition by emp_id, job_id order by eff_date, status) next_status
from
employee_job
)
where
status = 'A'
order by
start_date,
emp_id,
job_id
I'm sure there's some use cases I've missed but you get the idea. Analytic functions are your friend :)
EMP_ID JOB_ID START_DATE END_DATE
1 10 10-JAN-2008 02-FEB-2008
2 11 13-JAN-2008 01-FEB-2008
2 11 20-FEB-2008
1 12 20-JAN-2008
A:
Rather than having the input parameter as a cursor, I would have a table variable (don't know if Oracle has such a thing I'm a TSQL guy) or populate another temp table with the ID values and join on it in the view/function or wherever you need to.
The only time for cursors in my honest opinion is when you have to loop. And when you have to loop I always recommend to do that outside of the database in the application logic.
A:
It sounds like you are giving away some read consistency here, i.e. it will be possible for the contents of your temporary table to be out of sync with the source data if you have concurrent data modification.
Without knowing the requirements or the complexity of what you want to achieve, I would attempt to:
1. Define a view containing the (possibly complex) logic in SQL, or else
2. Add some PL/SQL to the mix with a pipelined table function, but using an SQL collection type (instead of the temporary table). A simple example is here: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4447489221109
Number 2 would give you fewer moving parts and solve your consistency issue.
Mathew Butler
A:
The real problem here is the "write-only" table design - by which I mean, it's easy to insert data into it, but tricky and inefficient to get useful information out of it! Your "temporary" table has the structure the "permanent" table should have had in the first place.
Could you perhaps do this:
Create a permanent table with the better structure
Populate it to match the data in the first table
Define a database trigger on the original table to keep the new table in sync from now on
Then you can just select from the new table to perform your reporting.
A:
I couldn't agree with you more, HollyStyles. I also used to be a TSQL guy, and find some of Oracle's idiosyncrasies more than a little perplexing. Unfortunately, temp tables aren't as convenient in Oracle, and in this case, other existing SQL logic is expecting to directly query a table, so I give it this view instead. There's really no application logic that exists outside of the database in this system.
Oracle developers do seem to use cursors much more eagerly than I would have thought. Given the bondage & discipline nature of PL/SQL, that's all the more surprising.
A:
The simplest solution is:
Create a global temporary table containing just IDs you need:
CREATE GLOBAL TEMPORARY TABLE tab_ids (id INTEGER)
ON COMMIT DELETE ROWS;
Populate the temporary table with the IDs you need.
Use EXISTS operation in your procedure to select the rows that are only in the IDs table:
SELECT yt.col1, yt.col2 FROM your_table yt
WHERE EXISTS (
SELECT 'X' FROM tab_ids ti
WHERE ti.id = yt.id
)
You can also pass a comma-separated string of IDs as a function parameter and parse it into a table. This is performed by a single SELECT. Want to know more - ask me how :-) But it's got to be a separate question.
|
Best way to encapsulate complex Oracle PL/SQL cursor logic as a view?
|
I've written PL/SQL code to denormalize a table into a much-easier-to-query form. The code uses a temporary table to do some of its work, merging some rows from the original table together.
The logic is written as a pipelined table function, following the pattern from the linked article. The table function uses a PRAGMA AUTONOMOUS_TRANSACTION declaration to permit the temporary table manipulation, and also accepts a cursor input parameter to restrict the denormalization to certain ID values.
I then created a view to query the table function, passing in all possible ID values as a cursor (other uses of the function will be more restrictive).
My question: is this all really necessary? Have I completely missed a much more simple way of accomplishing the same thing?
Every time I touch PL/SQL I get the impression that I'm typing way too much.
Update: I'll add a sketch of the table I'm dealing with to give everyone an idea of the denormalization that I'm talking about. The table stores a history of employee jobs, each with an activation row, and (possibly) a termination row. It's possible for an employee to have multiple simultaneous jobs, as well as the same job over and over again in non-contiguous date ranges. For example:
| EMP_ID | JOB_ID | STATUS | EFF_DATE | other columns...
| 1 | 10 | A | 10-JAN-2008 |
| 2 | 11 | A | 13-JAN-2008 |
| 1 | 12 | A | 20-JAN-2008 |
| 2 | 11 | T | 01-FEB-2008 |
| 1 | 10 | T | 02-FEB-2008 |
| 2 | 11 | A | 20-FEB-2008 |
Querying that to figure out who is working when in what job is non-trivial. So, my denormalization function populates the temporary table with just the date ranges for each job, for any EMP_IDs passed in through the cursor. Passing in EMP_IDs 1 and 2 would produce the following:
| EMP_ID | JOB_ID | START_DATE | END_DATE |
| 1 | 10 | 10-JAN-2008 | 02-FEB-2008 |
| 2 | 11 | 13-JAN-2008 | 01-FEB-2008 |
| 1 | 12 | 20-JAN-2008 | |
| 2 | 11 | 20-FEB-2008 | |
(END_DATE allows NULLs for jobs that don't have a predetermined termination date.)
As you can imagine, this denormalized form is much, much easier to query, but creating it--so far as I can tell--requires a temporary table to store the intermediate results (e.g., job records for which the activation row has been found, but not the termination...yet). Using the pipelined table function to populate the temporary table and then return its rows is the only way I've figured out how to do it.
|
[
"I think a way to approach this is to use analytic functions...\nI set up your test case using:\ncreate table employee_job (\n emp_id integer,\n job_id integer,\n status varchar2(1 char),\n eff_date date\n ); \n\ninsert into employee_job values (1,10,'A',to_date('10-JAN-2008','DD-MON-YYYY'));\ninsert into employee_job values (2,11,'A',to_date('13-JAN-2008','DD-MON-YYYY'));\ninsert into employee_job values (1,12,'A',to_date('20-JAN-2008','DD-MON-YYYY'));\ninsert into employee_job values (2,11,'T',to_date('01-FEB-2008','DD-MON-YYYY'));\ninsert into employee_job values (1,10,'T',to_date('02-FEB-2008','DD-MON-YYYY'));\ninsert into employee_job values (2,11,'A',to_date('20-FEB-2008','DD-MON-YYYY'));\n\ncommit;\n\nI've used the lead function to get the next date and then wrapped it all as a sub-query just to get the \"A\" records and add the end date if there is one.\nselect\n emp_id,\n job_id,\n eff_date start_date,\n decode(next_status,'T',next_eff_date,null) end_date\nfrom\n (\n select\n emp_id,\n job_id,\n eff_date,\n status,\n lead(eff_date,1,null) over (partition by emp_id, job_id order by eff_date, status) next_eff_date,\n lead(status,1,null) over (partition by emp_id, job_id order by eff_date, status) next_status\n from\n employee_job\n )\nwhere\n status = 'A'\norder by\n start_date,\n emp_id,\n job_id\n\nI'm sure there's some use cases I've missed but you get the idea. Analytic functions are your friend :)\nEMP_ID JOB_ID START_DATE END_DATE \n 1 10 10-JAN-2008 02-FEB-2008 \n 2 11 13-JAN-2008 01-FEB-2008 \n 2 11 20-FEB-2008 \n 1 12 20-JAN-2008 \n\n",
"Rather than having the input parameter as a cursor, I would have a table variable (don't know if Oracle has such a thing I'm a TSQL guy) or populate another temp table with the ID values and join on it in the view/function or wherever you need to.\nThe only time for cursors in my honest opinion is when you have to loop. And when you have to loop I always recommend to do that outside of the database in the application logic.\n",
"It sounds like you are giving away some read consistency here ie: it will be possible for the contents of your temporary table to be out of sync with the source data, if you have concurrent modification data modification.\nWithout knowing the requirements, nor complexity of what you want to achieve. I would attempt\n\nto define a view, containing (possibly complex) logic in SQL, else I'd add some PL/SQL to the mix with;\nA pipelined table function, but using an SQL collection type (instead of the temporary table ). A simple example is here: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4447489221109\n\nNumber 2 would give you less moving parts and solve your consistency issue.\nMathew Butler\n",
"The real problem here is the \"write-only\" table design - by which I mean, it's easy to insert data into it, but tricky and inefficient to get useful information out of it! Your \"temporary\" table has the structure the \"permanent\" table should have had in the first place.\nCould you perhaps do this:\n\nCreate a permanent table with the better structure\nPopulate it to match the data in the first table\nDefine a database trigger on the original table to keep the new table in sync from now on\n\nThen you can just select from the new table to perform your reporting.\n",
"I couldn't agree with you more, HollyStyles. I also used to be a TSQL guy, and find some of Oracle's idiosyncrasies more than a little perplexing. Unfortunately, temp tables aren't as convenient in Oracle, and in this case, other existing SQL logic is expecting to directly query a table, so I give it this view instead. There's really no application logic that exists outside of the database in this system.\nOracle developers do seem to use cursors much more eagerly than I would have thought. Given the bondage & discipline nature of PL/SQL, that's all the more surprising.\n",
"The simplest solution is:\n\nCreate a global temporary table containing just IDs you need:\nCREATE GLOBAL TEMPORARY TABLE tab_ids (id INTEGER) \nON COMMIT DELETE ROWS;\n\nPopulate the temporary table with the IDs you need.\nUse EXISTS operation in your procedure to select the rows that are only in the IDs table:\n SELECT yt.col1, yt.col2 FROM your\\_table yt \n WHERE EXISTS ( \n SELECT 'X' FROM tab_ids ti \n WHERE ti.id = yt.id \n )\n\n\nYou can also pass a comma-separated string of IDs as a function parameter and parse it into a table. This is performed by a single SELECT. Want to know more - ask me how :-) But it's got to be a separate question.\n"
] |
[
4,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"oracle",
"plsql",
"sql"
] |
stackoverflow_0000020081_oracle_plsql_sql.txt
|
Q:
Problems with sound on a 6265i Nokia using J2ME and Netbeans 6.1
Currently, I have some basic code to play a simple tone whenever a button is pressed in the command item menu.
Using: Manager.playTone(note, duration, volume);
I also have a blackberry that I'm testing this same midlet on and the sound works fine. So, is this something specific to Nokia phones that aren't allowing me to play the sound?
I've made sure to build it using the correct CLDC and MIDP versions.
I've also tried the audio demos that are in the Netbeans IDE, and still no luck. It throws a "cannot create player" message.
A:
http://discussion.forum.nokia.com/forum/showthread.php?t=91500
This thread on Forum Nokia seems to suggest that certain Nokia models have problems playing tones with the Manager.playTone() function, more specifically a MediaException is thrown, as you are having (MediaException is just the default exception if any problem occurs when trying to play a tone).
You can try sleeping the thread after calling Manager.playTone for longer than the length of the tone. There is a possibility that you get into a state where you are trying to play two or more tones at once, and the phone might not allow more than one player to be created at a time.
If all else fails you can use the Nokia UI Sound class (com.nokia.mid.sound.Sound) to play the tone. It is deprecated and replaced with the call you are making, but it might be your only solution for this device. Just make your own playTone method and have it call the Nokia function for this device (and maybe other Nokia devices if need be) and the J2ME standard call on all other devices. You can accomplish this with the Netbeans ME Preprocessor.
http://www.theoreticlabs.com/dev/api/nokia-ui-1.1/com/nokia/mid/sound/Sound.html
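A minimal sketch of the first suggestion: block for slightly longer than the tone so two players are never requested at once. The class name and the 50 ms padding are my own choices, and the catch block is where the Nokia-specific fallback would go:
import javax.microedition.media.Manager;
import javax.microedition.media.MediaException;

public final class ToneHelper {
    // Plays the tone and sleeps a little longer than its duration.
    public static void playToneBlocking(int note, int durationMs, int volume) {
        try {
            Manager.playTone(note, durationMs, volume);
            Thread.sleep(durationMs + 50);
        } catch (MediaException e) {
            // fall back to com.nokia.mid.sound.Sound here on affected Nokia models
        } catch (InterruptedException e) {
            // ignored for this sketch
        }
    }
}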
|
Problems with sound on a 6265i Nokia using J2ME and Netbeans 6.1
|
Currently, I have some basic code to play a simple tone whenever a button is pressed in the command item menu.
Using: Manager.playTone(note, duration, volume);
I also have a blackberry that I'm testing this same midlet on and the sound works fine. So, is this something specific to Nokia phones that aren't allowing me to play the sound?
I've made sure to build it using the correct CLDC and MIDP versions.
I've also tried the audio demos that are in the Netbeans IDE, and still no luck. It throws a "cannot create player" message.
|
[
"http://discussion.forum.nokia.com/forum/showthread.php?t=91500\nThis thread on Forum Nokia seems to suggest that certain Nokia models have problems playing tones with the Manager.playTone() function, more specifically a MediaException is thrown, as you are having (MediaException is just the default exception if any problem occurs when trying to play a tone).\nYou can try sleeping the thread after calling Manager.playTone for greater than the length of the tone. There is a possibility that you get into a state where you are trying to play two or more tones at once and the phone might not allow more than one player to be created at a time.\nIf all else fails you can use the Nokia UI Sound class (com.nokia.mid.sound.Sound) to play the tone. It is deprecated and replaced with the call you are making, but it might be your only solution for this device. Just make your own playTone method and have it call the Nokia function for this device (and maybe other Nokia devices if need be) and the J2ME standard call on all other devices. You can accomplish this with the Netbeans ME Preprocessor.\nhttp://www.theoreticlabs.com/dev/api/nokia-ui-1.1/com/nokia/mid/sound/Sound.html\n"
] |
[
2
] |
[] |
[] |
[
"cldc",
"java_me",
"midp",
"netbeans6.1",
"nokia"
] |
stackoverflow_0000098476_cldc_java_me_midp_netbeans6.1_nokia.txt
|
Q:
Is there a good tutorial on Websphere 6.1 ND deployments?
I need to deploy an application on the WAS ND 6.1 and do not know anything about it and cannot afford to go to training...
A:
There is the RedBook WebSphere Application Server V6.1: System Management and Configuration with Chapter 14 talking about application deployment, this could help.
A:
Getting started with WAS ND can be a bit overwhelming. The redbooks mentioned above give you a good introduction, especially the first few chapters, but they are often over 500 pages long. IBM also provides an educational assistant, which is a presentation-style overview and may be a good place to start. The link to the educational assistant is shown below:
http://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/index.jsp
A:
This course from IBM would also be excellent but a bit pricy!!!
http://www.redbooks.ibm.com/abstracts/sg247304.html?Open
|
Is there a good tutorial on Websphere 6.1 ND deployments?
|
I need to deploy an application on the WAS ND 6.1 and do not know anything about it and cannot afford to go to training...
|
[
"There is the RedBook WebSphere Application Server V6.1: System Management and Configuration with Chapter 14 talking about application deployment, this could help.\n",
"Getting started with WAS ND can be a bit overwhelming. The redbooks mentioned above to give you a good introduction, especially the first few chapters but they are often over 500 pages long. IBM also provides an educational assistant which is a presentation style overview and that may give a good point to start with. The link to the educational assistant is shown below:\nhttp://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/index.jsp\n",
"This course from IBM would also be excellent but a bit pricy!!!\nhttp://www.redbooks.ibm.com/abstracts/sg247304.html?Open\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"websphere"
] |
stackoverflow_0000064873_websphere.txt
|
Q:
What is the best option for running a Jabber/XMPP on Windows 2003?
I'm looking to run a Jabber server on a Windows 2003 server (web farm) and would like some practical advice from anyone who has run a live environment with ~500 concurrent users.
Criteria for comment:
Performance
Capacity (ie ~number of concurrent users)
Stability
A:
OpenFire is a good GPL Java implementation of a Jabber server.
It has plenty of optional plugins you can use and it can integrate quite well with Active Directory: OpenFire
A:
I think you're going to need to be a bit more explicit - are you looking for server configurations, or software, e.g. a Jabber server?
If you're thinking Jabber server, EJabberD is probably the most stable and flexible, capable of being clustered, etc.
Really useful comparison of Open Source servers here...
http://www.saint-andre.com/jabber/jsc/
|
What is the best option for running a Jabber/XMPP on Windows 2003?
|
I'm looking to run a Jabber server on a Windows 2003 server(web farm) and like some practical advice from anyone who has run a live environment with ~500 concurrent users.
Criteria for comment:
Performance
Capacity (ie ~number of concurrent users)
Stability
|
[
"OpenFire is a good gpl java implementation of a jabber server.\nIt has plenty of option plugins you can use and it can intergrate quite well with Active Directory OpenFire\n",
"I think you're going to need to be a bit more explicit - you looking for server configurations, or software e.g. Jabber Server?\nIf you're thinking Jabber server, EJabberD is probably the most stable, flexible, capable of being clustered etc.\nReally useful comparison of Open Source servers here...\nhttp://www.saint-andre.com/jabber/jsc/\n"
] |
[
5,
1
] |
[] |
[] |
[
"instant_messaging",
"xmpp"
] |
stackoverflow_0000102704_instant_messaging_xmpp.txt
|
Q:
Design Pattern for multithreaded observers
In a digital signal acquisition system, often data is pushed into an observer in the system by one thread.
example from Wikipedia/Observer_pattern:
foreach (IObserver observer in observers)
observer.Update(message);
When, for example, a user action from a GUI thread requires the data to stop flowing, you want to break the subject-observer connection, and even dispose of the observer altogether.
One may argue: you should just stop the data source, and wait for a sentinel value to dispose of the connection. But that would incur more latency in the system.
Of course, if the data pumping thread has just asked for the address of the observer, it might find it's sending a message to a destroyed object.
Has someone created an 'official' Design Pattern countering this situation? Shouldn't they?
A:
If you want to have the data source to always be on the safe side of concurrency, you should have at least one pointer that is always safe for him to use.
So the Observer object should have a lifetime that isn't ended before that of the data source.
This can be done by only adding Observers, but never removing them.
You could have each observer not do the core implementation itself, but have it delegate this task to an ObserverImpl object.
You lock access to this impl object. This is no big deal, it just means the GUI unsubscriber would be blocked for a little while in case the observer is busy using the ObserverImpl object. If GUI responsiveness would be an issue, you can use some kind of concurrent job-queue mechanism with an unsubscription job pushed onto it. ( like PostMessage in Windows )
When unsubscribing, you just substitute the core implementation for a dummy implementation. Again this operation should grab the lock. This would indeed introduce some waiting for the data source, but since it's just a [ lock - pointer swap - unlock ] you could say that this is fast enough for real-time applications.
If you want to avoid stacking Observer objects that just contain a dummy, you have to do some kind of bookkeeping, but this could boil down to something trivial like an object holding a pointer to the Observer object he needs from the list.
Optimization :
If you also keep the implementations ( the real one + the dummy ) alive as long as the Observer itself, you can do this without an actual lock, and use something like InterlockedExchangePointer to swap the pointers.
Worst case scenario : delegating call is going on while pointer is swapped --> no big deal all objects stay alive and delegating can continue. Next delegating call will be to new implementation object. ( Barring any new swaps of course )
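A bare-bones C# sketch of the dummy-swap idea above (every type name here is made up for illustration, and a plain lock is used instead of the interlocked optimisation):
interface IObserverImpl { void Update(object message); }

sealed class DummyImpl : IObserverImpl
{
    public void Update(object message) { /* deliberately drops the data */ }
}

sealed class Observer
{
    private readonly object _gate = new object();
    private IObserverImpl _impl;

    public Observer(IObserverImpl impl) { _impl = impl; }

    // Called by the data-pumping thread; never touches a destroyed object.
    public void Update(object message)
    {
        lock (_gate) { _impl.Update(message); }
    }

    // Called by the GUI thread to "unsubscribe" without removing the observer.
    public void Detach()
    {
        lock (_gate) { _impl = new DummyImpl(); }
    }
}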
A:
You could send a message to all observers informing them the data source is terminating and let the observers remove themselves from the list.
In response to the comment, the implementation of the subject-observer pattern should allow for dynamic addition / removal of observers. In C#, the event system is a subject/observer pattern where observers are added using event += observer and removed using event -= observer.
|
Design Pattern for multithreaded observers
|
In a digital signal acquisition system, often data is pushed into an observer in the system by one thread.
example from Wikipedia/Observer_pattern:
foreach (IObserver observer in observers)
observer.Update(message);
When e.g. a user action from e.g. a GUI-thread requires the data to stop flowing, you want to break the subject-observer connection, and even dispose of the observer alltogether.
One may argue: you should just stop the data source, and wait for a sentinel value to dispose of the connection. But that would incur more latency in the system.
Of course, if the data pumping thread has just asked for the address of the observer, it might find it's sending a message to a destroyed object.
Has someone created an 'official' Design Pattern countering this situation? Shouldn't they?
|
[
"If you want to have the data source to always be on the safe side of concurrency, you should have at least one pointer that is always safe for him to use. \nSo the Observer object should have a lifetime that isn't ended before that of the data source.\nThis can be done by only adding Observers, but never removing them. \nYou could have each observer not do the core implementation itself, but have it delegate this task to an ObserverImpl object. \nYou lock access to this impl object. This is no big deal, it just means the GUI unsubscriber would be blocked for a little while in case the observer is busy using the ObserverImpl object. If GUI responsiveness would be an issue, you can use some kind of concurrent job-queue mechanism with an unsubscription job pushed onto it. ( like PostMessage in Windows )\nWhen unsubscribing, you just substitute the core implementation for a dummy implementation. Again this operation should grab the lock. This would indeed introduce some waiting for the data source, but since it's just a [ lock - pointer swap - unlock ] you could say that this is fast enough for real-time applications.\nIf you want to avoid stacking Observer objects that just contain a dummy, you have to do some kind of bookkeeping, but this could boil down to something trivial like an object holding a pointer to the Observer object he needs from the list.\nOptimization :\nIf you also keep the implementations ( the real one + the dummy ) alive as long as the Observer itself, you can do this without an actual lock, and use something like InterlockedExchangePointer to swap the pointers.\nWorst case scenario : delegating call is going on while pointer is swapped --> no big deal all objects stay alive and delegating can continue. Next delegating call will be to new implementation object. ( Barring any new swaps of course )\n",
"You could send a message to all observers informing them the data source is terminating and let the observers remove themselves from the list.\nIn response to the comment, the implementation of the subject-observer pattern should allow for dynamic addition / removal of observers. In C#, the event system is a subject/observer pattern where observers are added using event += observer and removed using event -= observer.\n"
] |
[
2,
0
] |
[] |
[] |
[
"dataflow",
"design_patterns",
"multithreading",
"observer_pattern"
] |
stackoverflow_0000082074_dataflow_design_patterns_multithreading_observer_pattern.txt
|
Q:
Keyboard scancodes?
GNU/Linux text console, X11 not involved, indeed not even
installed. Keyboard is US layout, keymap US default. Kernel
version 2.20.x or later.
An application written in C is getting keyboard input in
translation mode, i.e. XLATE or UNICODE. When a key is
pressed, the application receives the corresponding
keystring. As an example, you press F1, the application
reads "\033[[A".
Before the kernel sends the keystring to the application, it
must know which key is pressed, i.e. it must know its
scancode. In the F1 example above, the scancode for the key
pressed is 59 or 0x3b.
That's to say even when the keyboard is in translation mode,
the scancodes are held somewhere in memory. How can the
application access them without switching the keyboard to
RAW or MEDIUMRAW mode? A code snippet would help.
A:
Chances are that you are issuing the ioctl commands on the wrong file descriptor, check for error codes coming back from ioctl and tcsetattr.
You should be opening the console device, and then issuing your keyboard translation commands on that device. You would have to basically mimic what the X server is doing.
This is a link to the source code on codesearch.google.com.
A:
Sure, the code you want to look at is in kbd-1.12.tar.bz2, which is the source bundle for the 'kbd' package. The 'kbd' package provides tools such as 'dumpkeys', 'showkeys' and 'loadkeys', which are useful for looking at the current keyboard mapping, checking what keys emit what scancodes, and loading a new mapping.
You will have to communicate with the kernel via ioctls, and it's quite complicated, so I recommend reading the source of that package to see how it's done.
Here's a link to the tarball: kbd-1.12.tar.bz2 (618K).
A:
At a terminal I entered
dumpkeys -f > test.txt
and there was a great deal of detailed information, including:
keycode 29 = Control
...
string F1 = "\033[[A"
string F2 = "\033[[B"
string F3 = "\033[[C"
string F4 = "\033[[D"
string F5 = "\033[[E"
string F6 = "\033[17~"
string F7 = "\033[18~"
string F8 = "\033[19~"
...
string Prior = "\033[5~"
string Next = "\033[6~"
string Macro = "\033[M"
string Pause = "\033[P"
dumpkeys was included by default with my distribution. But you should be able to find it in what jerub posted. I would start by looking at kbd-1.12/src/loadkeys.y.
It looks like the kernel is responsible for holding that data, and can report to those who know how to ask.
A:
You may want to look at kbdev or evdev (look at the Documentation/input/input.txt file in your kernel source directory for starters). That would work for console access.
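If the evdev route is acceptable, a small sketch in C (the device node is a guess; find your keyboard under /dev/input/. These are kernel key codes, which for an ordinary PC keyboard largely coincide with the legacy scancodes, e.g. KEY_F1 is 59):
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY);  /* hypothetical device node */
    struct input_event ev;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    while (read(fd, &ev, sizeof ev) == (ssize_t) sizeof ev) {
        if (ev.type == EV_KEY)  /* key press, release or autorepeat */
            printf("code=%u value=%d\n", (unsigned) ev.code, ev.value);
    }
    close(fd);
    return 0;
}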
|
Keyboard scancodes?
|
GNU/Linux text console, X11 not involved, indeed not even
installed. Keyboard is US layout, keymap US default. Kernel
version 2.20.x or later.
An application written in C is getting keyboard input in
translation mode, i.e. XLATE or UNICODE. When a key is
pressed, the application receives the corresponding
keystring. As an example, you press F1, the application
reads "\033[[A".
Before the kernel sends the keystring to the application, it
must know which key is pressed, i.e. it must know its
scancode. In the F1 example above, the scancode for the key
pressed is 59 or 0x3b.
That's to say even when the keyboard is in translation mode,
the scancodes are held somewhere in memory. How can the
application access them without switching the keyboard to
RAW or MEDIUMRAW mode? A code snippet would help.
|
[
"Chances are that you are issuing the ioctl commands on the wrong file descriptor, check for error codes coming back from ioctl and tcsetattr.\nYou should be opening the console device, and then issuing your keyboard translation commands on that device. You would have to basically mimic what the X server is doing.\nThis is a link to the source code on codesearch.google.com.\n",
"Sure, the code you want to look at is in kbd-1.12.tar.bz2, which is the source bundle for the 'kbd' package. The 'kbd' package provides tools such as 'dumpkeys', 'showkeys' and 'loadkeys', which are useful for looking at the current keyboard mapping, checking what keys emit what scancodes, and loading a new mapping.\nYou will have to communicate with the kernel via ioctls, and it's quite complicated, so I recommend reading the source of that package to see how it's done.\nHere's a link to the tarball: kbd-1.12.tar.bz2 (618K).\n",
"At a terminal I entered \ndumpkeys -f > test.txt\n\nand there was a great deal of detailed information, including:\n\nkeycode 29 = Control\n ...\n string F1 = \"\\033[[A\"\n string F2 = \"\\033[[B\"\n string F3 = \"\\033[[C\"\n string F4 = \"\\033[[D\"\n string F5 = \"\\033[[E\"\n string F6 = \"\\033[17~\"\n string F7 = \"\\033[18~\"\n string F8 = \"\\033[19~\"\n ...\n string Prior = \"\\033[5~\"\n string Next = \"\\033[6~\"\n string Macro = \"\\033[M\"\n string Pause = \"\\033[P\"\n\ndumpkeys was included by default with my distribution. But you should be able to find it in what jerub posted. I would start by looking kbd-1.12/src/loadkeys.y.\nIt looks like the kernel is responsible for holding that data, and can report to those who know how to ask.\n",
"You maybe want to look at kbdev or evdev (look at your Documentation/input/input.txt file in your kernel source directory for starters.) That would work for console access.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
"console",
"linux",
"text"
] |
stackoverflow_0000090704_console_linux_text.txt
|
Q:
Where can I see what Color properties in .NET like "BlanchedAlmond" look like?
I'm in the middle of writing code in .Net to draw something in my app and I need to pick a color to use. But what does the color "Chartreuse" look like? Isn't there a nice bitmap that shows what each of the system colors look like somewhere?
Thanks!
A:
MSDN - Colors by Name
A:
Try this site.
This site is nice because it shows how the color will look as foreground and background color.
A:
I believe this is what you're looking for: http://www.cambiaresearch.com/c4/7cb36a7b-3731-48f6-b91b-1d8c503f140e/What-are-the-aspnet-Named-Colors.aspx
A:
Yes there is a site: http://adonnart.free.fr/gratuit/140coulu.htm
with Hex-Codes
A:
Check this out: http://www.w3schools.com/TAGS/ref_color_tryit.asp?color=BlanchedAlmond
(Pay attention to the URL and modify as necessary)
|
Where can I see what Color properties in .NET like "BlanchedAlmond" look like?
|
I'm in the middle of writing code in .Net to draw something in my app and I need to pick a color to use. But what does the color "Chartreuse" look like? Isn't there a nice bitmap that shows what each of the system colors look like somewhere?
Thanks!
|
[
"MSDN - Colors by Name\n",
"Try this site. \nThis site is nice because it shows how the color will look as foreground and background color.\n",
"I believe this is what you're looking for: http://www.cambiaresearch.com/c4/7cb36a7b-3731-48f6-b91b-1d8c503f140e/What-are-the-aspnet-Named-Colors.aspx\n",
"Yes there is a site: http://adonnart.free.fr/gratuit/140coulu.htm\nwith Hex-Codes\n",
"Check this out: http://www.w3schools.com/TAGS/ref_color_tryit.asp?color=BlanchedAlmond\n(Pay attention the URL and modify as necessary)\n"
] |
[
4,
2,
1,
1,
0
] |
[] |
[] |
[
".net",
"colors"
] |
stackoverflow_0000103035_.net_colors.txt
|
Q:
IE Javascript Clicking Issue
First off, I'm working on an app that's written such that some of your typical debugging tools can't be used (or at least I can't figure out how :).
JavaScript, HTML, etc. are all "cooked" and encoded (I think; I'm a little fuzzy on how the process works) before being deployed, so I can't attach VS 2005 to IE, and Firebug Lite doesn't work well. Also, the interface is in frames (yuck), so some other tools don't work as well.
Firebug works great in Firefox, which isn't having this problem (nor is Safari), so I'm hoping someone might spot something "obviously" wrong with the way my code will play with IE. There's more information that can be given about its quirkiness, but let's start with this.
Basically, I have a function that "collapses" tables into their headers by making normal table rows not visible. I have "onclick='toggleDisplay("theTableElement", "theCollapseImageElement")'" in the <tr> tags, and tables start off with "class='closed'".
Single clicks collapse and expand tables in FF & Safari, but IE tables require multiple clicks (a seemingly arbitrary number between 1 and 5) to expand. Sometimes after initially getting "opened", the tables will expand and collapse with a single click for a little while, only to eventually revert to requiring multiple clicks. I can tell from what little I can see in Visual Studio that the function is actually being reached each time. Thanks in advance for any advice!
Here's the JS code:
bURL_RM_RID="some image prefix";
CLOSED_TBL="closed";
OPEN_TBL="open";
CLOSED_IMG= bURL_RM_RID+'166';
OPENED_IMG= bURL_RM_RID+'167';
//collapses/expands tbl (a table) and swaps out the image tblimg
function toggleDisplay(tbl, tblimg) {
    var rowVisible;
    var tblclass = tbl.getAttribute("class");
    var tblRows = tbl.rows;
    var img = tblimg;
    //Are we expanding or collapsing the table?
    if (tblclass == CLOSED_TBL) rowVisible = false;
    else rowVisible = true;
    for (i = 0; i < tblRows.length; i++) {
        if (tblRows[i].className != "headerRow") {
            tblRows[i].style.display = (rowVisible) ? "none" : "";
        }
    }
    //set the collapse images to the correct state and swap the class name
    rowVisible = !rowVisible;
    if (rowVisible) {
        img.setAttribute("src", CLOSED_IMG);
        tbl.setAttribute("class", OPEN_TBL);
    }
    else {
        img.setAttribute("src", OPENED_IMG);
        tbl.setAttribute("class", CLOSED_TBL);
    }
}
A:
Have you tried changing this line
tblRows[i].style.display = (rowVisible) ? "none" : "";
to something like
tblRows[i].style.display = (rowVisible) ? "none" : "table-row";
or
tblRows[i].style.display = (rowVisible) ? "none" : "auto";
A:
setAttribute is unreliable in IE. It treats attribute access and object property access as the same thing, so because the DOM property for the 'class' attribute is called 'className', you would have to use that instead on IE.
This bug is fixed in the new IE8 beta, but it is easier simply to use the DOM Level 1 HTML property directly:
img.src = CLOSED_IMG;
tbl.className = OPEN_TBL;
You can also do the table folding in the stylesheet, which will be faster and will save you the bother of having to loop over the table rows in script:
table.closed tr { display: none; }
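Putting those two suggestions together, the whole toggle can shrink to something like this sketch (it assumes the table.closed tr rule above is in the stylesheet, keeps the original image-swap behaviour, and uses the constants from the question):
function toggleDisplay(tbl, tblimg) {
    var opening = (tbl.className == CLOSED_TBL);
    tbl.className = opening ? OPEN_TBL : CLOSED_TBL;  // the CSS rule now shows/hides the rows
    tblimg.src = opening ? CLOSED_IMG : OPENED_IMG;   // same image swap as the original code
}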
A:
You might want to place your onclick call on the actual <tr> tag rather than the individual <th> tags. This way you have less JS in your HTML which will make it more maintainable.
A:
If you enable script debugging in IE (Tools->Internet Options->Advanced) and put a 'debugger;' statement in the code, IE will automatically bring up Visual Studio when it hits the debugger statement.
A:
I have had issues with this in IE. If I remember correctly, I needed to put an initial value for the "display" style, directly on the HTML as it was initially generated. For example:
<table>
<tr style="display:none"> ... </tr>
<tr style="display:"> ... </tr>
</table>
Then I could use JavaScript to change the style, the way you're doing it.
A:
I always use style.display = "block" and style.display = "none"
|
IE Javascript Clicking Issue
|
First off, I'm working on an app that's written such that some of your typical debugging tools can't be used (or at least I can't figure out how :).
JavaScript, html, etc are all "cooked" and encoded (I think; I'm a little fuzzy on how the process works) before being deployed, so I can't attach VS 2005 to ie, and firebug lite doesn't work well. Also, the interface is in frames (yuck), so some other tools don't work as well.
Firebug works great in Firefox, which isn't having this problem (nor is Safari), so I'm hoping someone might spot something "obviously" wrong with the way my code will play with IE. There's more information that can be given about its quirkiness, but let's start with this.
Basically, I have a function that "collapses" tables into their headers by making normal table rows not visible. I have "onclick='toggleDisplay("theTableElement", "theCollapseImageElement")'" in the <tr> tags, and tables start off with "class='closed'".
Single clicks collapse and expand tables in FF & Safari, but IE tables require multiple clicks (a seemingly arbitrary number between 1 and 5) to expand. Sometimes after initially getting "opened", the tables will expand and collapse with a single click for a little while, only to eventually revert to requiring multiple clicks. I can tell from what little I can see in Visual Studio that the function is actually being reached each time. Thanks in advance for any advice!
Here's the JS code:
bURL_RM_RID="some image prefix";
CLOSED_TBL="closed";
OPEN_TBL="open";
CLOSED_IMG= bURL_RM_RID+'166';
OPENED_IMG= bURL_RM_RID+'167';
//collapses/expands tbl (a table) and swaps out the image tblimg
function toggleDisplay(tbl, tblimg) {
var rowVisible;
var tblclass = tbl.getAttribute("class");
var tblRows = tbl.rows;
var img = tblimg;
//Are we expanding or collapsing the table?
if (tblclass == CLOSED_TBL) rowVisible = false;
else rowVisible = true;
for (i = 0; i < tblRows.length; i++) {
if (tblRows[i].className != "headerRow") {
tblRows[i].style.display = (rowVisible) ? "none" : "";
}
}
//set the collapse images to the correct state and swap the class name
rowVisible = !rowVisible;
if (rowVisible) {
img.setAttribute("src", CLOSED_IMG);
tbl.setAttribute("class",OPEN_TBL);
}
else {
img.setAttribute("src", OPENED_IMG);
tbl.setAttribute("class",CLOSED_TBL);
}
}
|
[
"Have you tried changing this line\ntblRows[i].style.display = (rowVisible) ? \"none\" : \"\";\n\nto something like\ntblRows[i].style.display = (rowVisible) ? \"none\" : \"table-row\";\n\nor\ntblRows[i].style.display = (rowVisible) ? \"none\" : \"auto\";\n\n",
"setAttribute is unreliable in IE. It treats attribute access and object property access as the same thing, so because the DOM property for the 'class' attribute is called 'className', you would have to use that instead on IE.\nThis bug is fixed in the new IE8 beta, but it is easier simply to use the DOM Level 1 HTML property directly:\nimg.src= CLOSED_IMAGE;\ntbl.className= OPEN_TBL;\n\nYou can also do the table folding in the stylesheet, which will be faster and will save you the bother of having to loop over the table rows in script:\ntable.closed tr { display: none; }\n\n",
"You might want to place your onclick call on the actual <tr> tag rather than the individual <th> tags. This way you have less JS in your HTML which will make it more maintainable.\n",
"If you enable script debugging in IE (Tools->Internet Options->Advanced) and put a 'debugger;' statement in the code, IE will automatically bring up Visual Studio when it hits the debugger statement.\n",
"I have had issues with this in IE. If I remember correctly, I needed to put an initial value for the \"display\" style, directly on the HTML as it was initially generated. For example:\n<table>\n <tr style=\"display:none\"> ... </tr>\n <tr style=\"display:\"> ... </tr>\n</table>\n\nThen I could use JavaScript to change the style, the way you're doing it.\n",
"I always use style.display = \"block\" and style.display = \"none\"\n"
] |
[
3,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"click",
"internet_explorer",
"javascript"
] |
stackoverflow_0000102261_click_internet_explorer_javascript.txt
|
Q:
Should I store a database ID field in ViewState?
I need to retrieve a record from a database, display it on a web page (I'm using ASP.NET) but store the ID (primary key) from that record somewhere so I can go back to the database later with that ID (perhaps to do an update).
I know there are probably a few ways to do this, such as storing the ID in ViewState or a hidden field, but what is the best method and what are the reasons I might choose this method over any others?
A:
It depends.
Do you care if anyone sees the record id? If you do then both hidden fields and viewstate are not suitable; you need to store it in session state, or encrypt viewstate.
Do you care if someone submits the form with a bogus id? If you do then you can't use a hidden field (and you need to look at CSRF protection as a bonus)
Do you want it unchangable but don't care about it being open to viewing (with some work)? Use viewstate and set enableViewStateMac="true" on your page (or globally)
Want it hidden and protected but can't use session state? Encrypt your viewstate by setting the following web.config entries
<pages enableViewState="true" enableViewStateMac="true" />
<machineKey ... validation="3DES" />
A:
Do you want the end user to know the ID? For example if the id value is a standard 1,1 seed from the database I could look at the number and see how many customers you have. If you encrypt the value (as the viewstate can) I would find it much harder to decypher the key (but not impossible).
The alternative is to store it in the session, this will put a (very small if its just an integer) performance hit on your application but mean that I as a user never see that primary key. It also exposes the object to other parts of your application, that you may or may not want it to be exposed to (session objects remain until cleared, a set time (like 5 mins) passes or the browser window is closed - whichever happens sooner.
View state values cause extra load on the client after every post back, because the viewstate not only saves objects for the page, but remembers objects if you use the back button. That means after every post back it viewstate gets slightly bigger and harder to use. They will only exist on he page until the browser goes to another page.
Whenever I store an ID in the page like this, I always create a property
public int CustomerID {
    get { return (int)ViewState["CustomerID"]; }
    set { ViewState["CustomerID"] = value; }
}
or
Public Property CustomerID() As Integer
Get
Return ViewState("CustomerID")
End Get
Set(ByVal value As Integer)
ViewState("CustomerID") = value
End Set
End Property
That way if you decide to change it from Viewstate to a session variable or a hidden form field, it's just a case of changing it in the property reference, the rest of the page can access the variable using "Page.CustomerID".
A:
ViewState is an option. It is only valid for the page that you are on. It does not carry across requests to other resources like the Session object.
Hidden fields work too, but you are leaking a little bit of information about your application to anyone smart enough to view the source of your page.
You could also store your entire record in ViewState and maybe avoid another round trip to the server.
A:
I personally am very leery about putting anything in the session. Too many times our worker processes have cycled and we lost our session state.
As you described your problem, I would put it in a hidden field or in the viewstate of the page.
Also, when determining where to put data like this, always look at the scope of the data. Is it scoped to a single page, or to the entire session? If the answer is 'session' for us, we put it in a cookie. (Disclaimer: We write intranet apps where we know cookies are enabled.)
A:
If it's a simple id I would choose to pass it in the query string; that way you do not need to do postbacks and the page is more accessible for users and search engines.
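A small sketch of that suggestion (the page name, parameter name and variable are made up; remember a query-string value can be tampered with, so validate it on the receiving page):
// On the listing page:
Response.Redirect("EditCustomer.aspx?id=" + customerId);

// In the code-behind of EditCustomer.aspx:
int id;
if (!int.TryParse(Request.QueryString["id"], out id))
{
    // missing or bogus id - handle the bad request
}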
|
Should I store a database ID field in ViewState?
|
I need to retrieve a record from a database, display it on a web page (I'm using ASP.NET) but store the ID (primary key) from that record somewhere so I can go back to the database later with that ID (perhaps to do an update).
I know there are probably a few ways to do this, such as storing the ID in ViewState or a hidden field, but what is the best method and what are the reasons I might choose this method over any others?
|
[
"It depends.\nDo you care if anyone sees the record id? If you do then both hidden fields and viewstate are not suitable; you need to store it in session state, or encrypt viewstate.\nDo you care if someone submits the form with a bogus id? If you do then you can't use a hidden field (and you need to look at CSRF protection as a bonus)\nDo you want it unchangable but don't care about it being open to viewing (with some work)? Use viewstate and set enableViewStateMac=\"true\" on your page (or globally)\nWant it hidden and protected but can't use session state? Encrypt your viewstate by setting the following web.config entries\n<pages enableViewState=\"true\" enableViewStateMac=\"true\" />\n<machineKey ... validation=\"3DES\" />\n\n",
"Do you want the end user to know the ID? For example if the id value is a standard 1,1 seed from the database I could look at the number and see how many customers you have. If you encrypt the value (as the viewstate can) I would find it much harder to decypher the key (but not impossible). \nThe alternative is to store it in the session, this will put a (very small if its just an integer) performance hit on your application but mean that I as a user never see that primary key. It also exposes the object to other parts of your application, that you may or may not want it to be exposed to (session objects remain until cleared, a set time (like 5 mins) passes or the browser window is closed - whichever happens sooner.\nView state values cause extra load on the client after every post back, because the viewstate not only saves objects for the page, but remembers objects if you use the back button. That means after every post back it viewstate gets slightly bigger and harder to use. They will only exist on he page until the browser goes to another page.\nWhenever I store an ID in the page like this, I always create a property \npublic int CustomerID {\n get { return ViewState(\"CustomerID\"); }\n set { ViewState(\"CustomerID\") = value; }\n}\n\nor\n Public Property CustomerID() As Integer\n Get\n Return ViewState(\"CustomerID\")\n End Get\n Set(ByVal value As Integer)\n ViewState(\"CustomerID\") = value\n End Set\n End Property\n\nThat way if you decide to change it from Viewstate to a session variable or a hidden form field, it's just a case of changing it in the property reference, the rest of the page can access the variable using \"Page.CustomerID\".\n",
"ViewState is an option. It is only valid for the page that you are on. It does not carry across requests to other resources like the Session object.\nHidden fields work too, but you are leaking and little bit of information about your application to anyone smart enough to view the source of your page.\nYou could also store your entire record in ViewState and maybe avoid another round trip to th server.\n",
"I personally am very leery about putting anything in the session. Too many times our worker processes have cycled and we lost our session state. \nAs you described your problem, I would put it in a hidden field or in the viewstate of the page. \nAlso, when determining where to put data like this, always look at the scope of the data. Is it scoped to a single page, or to the entire session? If the answer is 'session' for us, we put it in a cookie. (Disclaimer: We write intranet apps where we know cookies are enabled.)\n",
"If its a simple id will choose to pass it in querystring, that way you do not need to do postbacks and page is more accessible for users and search engines.\n"
] |
[
6,
2,
0,
0,
0
] |
[
"Session[\"MyId\"]=myval;\n\nIt would be a little safer and essentially offers the same mechanics as putting it in the viewstate\n",
"I tend to stick things like that in hidden fields just do a little\n <asp:label runat=server id=lblThingID visible=false />\n\n"
] |
[
-1,
-1
] |
[
"asp.net",
"viewstate"
] |
stackoverflow_0000103000_asp.net_viewstate.txt
|
Q:
How do you move from the Proof of Concept phase to working on a production-ready solution?
I'm working on a project that's been accepted as a proof of concept and is now on the schedule as an actual production project. I'm curious how others approach this transition.
I've heard from various sources that when a project starts as a proof of concept it's often a good idea to trash all of the code written during that rapidly-evolving phase and essentially to start over with a clean slate, relying on what you learned from the conceptual phase but without working to clean up the potentially messy code that you wrote the first time around. Kind of the programming version of "throw away the first copy of that angry email you're about to send and start all over" theory.
I've done it this way in the past and I've also refactored the conceptual code to use in production, but since I'm in the transition phase for a new project I wanted to get an idea how others do this. Obviously a lot depends on the project itself, and on the conceptual code (if what you generated works but won't scale, for example, it's probably best to start afresh, but if you have a very compressed timeline for the project you might be forced to build on what you've already written).
That said, if all things were equal what would you all choose as an approach?
A:
As you already kind of hinted at, the answer is, "It Depends"
Starting over is good because you can help trim out the stuff that was added while you were initially working out the kinks but isn't really needed.
It also gives you a chance to give more consideration to how you want the architecture to be -- without already being dependent on how the proof-of-concept was written...
In practice, though, unless you're in the business of selling the software to the outside world, building upon the prototype is pretty commonplace. Just don't get into the habit of thinking "I'll fix it later" if you run into some code that smells or seems like it could be done in a better way...
A:
Refactor the existing code into the solution.
A:
For me it would depend on how sloppy my POC was. If it is something I would be ashamed to pass onto another developer, I would rewrite it. Otherwise, just go with what you got.
A:
If the code works, use it. Spend a little bit of time refactoring the messiest parts in order to ease future maintenance. But don't fall into the trap of building a new system from scratch.
A:
Throw away everything from the proof of concept except for the lessons learned, and, possibly, some minor code fragments such as calculations etc.
Proof of concept applications should never be more than just the bare minimum to see if the technology in question will work and to start testing some of the boundary conditions.
Once done you are free to redesign the application with your new found knowledge.
|
How do you move from the Proof of Concept phase to working on a production-ready solution?
|
I'm working on a project that's been accepted as a proof of concept and is now on the schedule as an actual production project. I'm curious how others approach this transition.
I've heard from various sources that when a project starts as a proof of concept it's often a good idea to trash all of the code written during that rapidly-evolving phase and essentially to start over with a clean slate, relying on what you learned from the conceptual phase but without working to clean up the potentially messy code that you wrote the first time around. Kind of the programming version of "throw away the first copy of that angry email you're about to send and start all over" theory.
I've done it this was in the past and I've also refactored the conceptual code to use in production, but since I'm in the transition phase for a new project I wanted to get an idea how others do this. Obviously a lot depends on the project itself, and on the conceptual code (if what you generated works but won't scale for example, it's probably best to start afresh, but if you have a very compressed timeline for the project you might be forced to build on what you've already written).
That said, if all things were equal what would you all choose as an approach?
|
[
"As you already kind of hinted at, the answer is, \"It Depends\"\nStarting over is good because you can help trim out the stuff that was added while you were initially working out the kinks but isn't really needed.\nIt also gives you a chance to give more consideration to how you want the architecture to be -- without already being dependent on how the proof-of-concept was written...\nIn practice, though, unless you're in the business of selling the software to the outside world, building upon the prototype is pretty commonplace. Just don't get into the habit of thinking \"I'll fix it later\" if you run into some code that smells or seems like it could be done in a better way...\n",
"Refactor the existing code into the solution.\n",
"For me it would depend on how sloppy my POC was. If it is something I would be ashamed to pass onto another developer, I would rewrite it. Otherwise, just go with what you got. \n",
"If the code works, use it. Spend a little bit of time refactoring the messiest parts in order to ease future maintenance. But don't fall into the trap of building a new system from scratch.\n",
"Throw away everything from the proof of concept except for the lessons learned, and, possibly, some minor code fragments such as calculations etc.\nProof of concept applications should never be more than just the bare minimum to see if the technology in question will work and to start testing some of the boundary conditions.\nOnce done you are free to redesign the application with your new found knowledge.\n"
] |
[
3,
2,
1,
0,
0
] |
[] |
[] |
[
"project_management"
] |
stackoverflow_0000103051_project_management.txt
|
Q:
Dropping a group of tables in SQL Server
Is there a simple way to drop a group of interrelated tables in SQL Server? Ideally I'd like to avoid having to worry about what order they're being dropped in since I know the entire group will be gone by the end of the process.
A:
At the risk of sounding stupid, I don't believe SQL Server supports the delete / cascade syntax. I think you can configure a delete rule to do cascading deletes (http://msdn.microsoft.com/en-us/library/ms152507.aspx), but as far as I know the trick with SQL Server is just to run your drop query once for each table you're dropping, then check it worked.
A:
A different approach could be: first get rid of the constraints, then drop the tables in a single shot.
In other words, a DROP CONSTRAINT for every constraint, then a DROP TABLE for each table; at this point the order of execution shouldn't be an issue.
A:
This requires the sp_drop_constraints script you can find at Database Journal:
sp_MSforeachtable @command1="print 'disabling constraints: ?'", @command2="sp_drop_constraints @tablename=?"
GO
sp_MSforeachtable @command1="print 'dropping: ?'", @command2="DROP TABLE ?"
GO
NOTE: this is - obviously - only if you meant to drop ALL of the tables in your database, so be careful
A:
I don't have access to SQL Server to test this, but how about:
DROP TABLE IF EXISTS table1, table2, table3 CASCADE;
A:
I'm not sure if Derek's approach works. You haven't marked it as best answer yet.
If not: with SQL Server 2005 it should be possible, I guess.
There they introduced exceptions (which I've not used yet). So drop the table, catch the exception if one occurs, and try the next table till they are all gone.
You can store the list of tables in a temp-table and use a cursor to traverse it, if you want to.
A:
I ended up using Apache's ddlutils to perform the dropping for me, which sorted it out in my case, though a solution which worked only within sql server would be quite a bit simpler.
@Derek Park, I didn't know you could comma separate tables there, so that's handy, but it doesn't seem to work quite as expected. Neither IF EXISTS nor CASCADE are recognised by SQL Server it seems, and running drop table X, Y, Z seems to work only if they should be dropped in the stated order.
See also http://msdn.microsoft.com/en-us/library/ms173790.aspx, which describes the drop table syntax.
A:
The thing holding you back from dropping the tables in any order are foreign key dependencies between the tables. So get rid of the FK's before you start.
Using the INFORMATION_SCHEMA system views, retrieve a list of all foreign keys related to any of these tables
Drop each of these foreign keys
Now you should be able to drop all of the tables, using any order that you want.
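The first two steps above can be scripted. As a sketch (using the hypothetical table1/table2/table3 names from earlier in this question), the query below generates the DROP CONSTRAINT statements to run before the DROP TABLEs; note that foreign keys defined on other tables that reference this group would also need to be included:
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
     + ' DROP CONSTRAINT [' + CONSTRAINT_NAME + ']'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
  AND TABLE_NAME IN ('table1', 'table2', 'table3')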
|
Dropping a group of tables in SQL Server
|
Is there a simple way to drop a group of interrelated tables in SQL Server? Ideally I'd like to avoid having to worry about what order they're being dropped in since I know the entire group will be gone by the end of the process.
|
[
"At the risk of sounding stupid, I don't believe SQL Server supports the delete / cascade syntax. I think you can configure a delete rule to do cascading deletes (http://msdn.microsoft.com/en-us/library/ms152507.aspx), but as far as I know the trick with SQL Server is to just to run your drop query once for each table you're dropping, then check it worked.\n",
"A diferent approach could be: first get rid of the constraints, then drop the tables in a single shot. \nIn other words, a DROP CONSTRAINT for every constraint, then a DROP TABLE for each table; at this point the order of execution shouldn't be an issue.\n",
"This requires the sp___drop___constraints script you can find at Database Journal:\nsp_MSforeachtable @command1=\"print 'disabling constraints: ?'\", @command2=\"sp_drop_constraints @tablename=?\"\nGO\nsp_MSforeachtable @command1=\"print 'dropping: ?'\", @command2=\"DROP TABLE ?\"\nGO\n\nNOTE this - obviously - if you meant to drop ALL of the tables in your database, so be careful\n",
"I don't have access to SQL Server to test this, but how about:\nDROP TABLE IF EXISTS table1, table2, table3 CASCADE;\n\n",
"I'm not sure, if Derek's approach works. You haven't mark it as best answer yet. \nIf not: with SQL Server 2005 it should be possible, I guess.\nThere they introduced exceptions (which I've not used yet). So drop the table, catch the exception, if one occurs and try the next table till they are all gone.\nYou can store the list of tables in a temp-table and use a cursor to traverse it, if you want to.\n",
"I ended up using Apache's ddlutils to perform the dropping for me, which sorted it out in my case, though a solution which worked only within sql server would be quite a bit simpler.\n@Derek Park, I didn't know you could comma separate tables there, so that's handy, but it doesn't seem to work quite as expected. Nether IF EXISTS nor CASCADE are recognised by sql server it seems, and running drop table X, Y, Z seems to work only if they should be dropped in the stated order.\nSee also http://msdn.microsoft.com/en-us/library/ms173790.aspx, which describes the drop table syntax.\n",
"The thing holding you back from dropping the tables in any order are foreign key dependencies between the tables. So get rid of the FK's before you start.\n\nUsing the INFORMATION_SCHEMA system views, retrieve a list of all foreign keys related to any of these tables\nDrop each of these foreign keys\nNow you should be able to drop all of the tables, using any order that you want.\n\n"
] |
[
4,
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"database",
"sql",
"sql_server"
] |
stackoverflow_0000007517_database_sql_sql_server.txt
|
Q:
Sorting a dict on __iter__
I am trying to sort a dict based on its key and return an iterator to the values from within an overridden __iter__ method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys?
A:
How about something like this:
def itersorted(d):
for key in sorted(d):
yield d[key]
A:
By far the easiest approach, and almost certainly the fastest, is something along the lines of:
def sorted_dict(d):
keys = d.keys()
keys.sort()
for key in keys:
yield d[key]
You can't sort without fetching all keys. Fetching all keys into a list and then sorting that list is the most efficient way to do that; list sorting is very fast, and fetching the keys list like that is as fast as it can be. You can then either create a new list of values or yield the values as the example does. Keep in mind that you can't modify the dict if you are iterating over it (the next iteration would fail) so if you want to modify the dict before you're done with the result of sorted_dict(), make it return a list.
A:
def sortedDict(dictobj):
return (value for key, value in sorted(dictobj.iteritems()))
This will create a single intermediate list, the 'sorted()' method returns a real list. But at least it's only one.
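For completeness, here is a minimal sketch (the class name is hypothetical) of how the generator approach above plugs into an overridden __iter__ method:
class SortedByKey(object):
    def __init__(self, d):
        self._d = d

    def __iter__(self):
        # yield the values in key order, one at a time
        for key in sorted(self._d):
            yield self._d[key]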
|
Sorting a dict on __iter__
|
I am trying to sort a dict based on its key and return an iterator to the values from within an overridden __iter__ method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys?
|
[
"How about something like this:\ndef itersorted(d):\n for key in sorted(d):\n yield d[key]\n\n",
"By far the easiest approach, and almost certainly the fastest, is something along the lines of:\ndef sorted_dict(d):\n keys = d.keys()\n keys.sort()\n for key in keys:\n yield d[key]\n\nYou can't sort without fetching all keys. Fetching all keys into a list and then sorting that list is the most efficient way to do that; list sorting is very fast, and fetching the keys list like that is as fast as it can be. You can then either create a new list of values or yield the values as the example does. Keep in mind that you can't modify the dict if you are iterating over it (the next iteration would fail) so if you want to modify the dict before you're done with the result of sorted_dict(), make it return a list.\n",
"def sortedDict(dictobj):\n return (value for key, value in sorted(dictobj.iteritems()))\n\nThis will create a single intermediate list, the 'sorted()' method returns a real list. But at least it's only one.\n"
] |
[
9,
3,
3
] |
[
"Assuming you want a default sort order, you can used sorted(list) or list.sort(). If you want your own sort logic, Python lists support the ability to sort based on a function you pass in. For example, the following would be a way to sort numbers from least to greatest (the default behavior) using a function.\ndef compareTwo(a, b):\n if a > b:\n return 1\n if a == b:\n return 0\n if a < b:\n return -1\n\nList.Sort(compareTwo)\nprint a\n\nThis approach is conceptually a bit cleaner than manually creating a new list and appending the new values and allows you to control the sort logic.\n"
] |
[
-1
] |
[
"optimization",
"python",
"refactoring"
] |
stackoverflow_0000102394_optimization_python_refactoring.txt
|
Q:
What does a "WARNING: did not see LOP_CKPT_END" message mean on SQL Server 2005?
The above error message comes up just before SQL Server marks the database as "Suspect" and refuses to open it. Does anyone know what the message means and how to fix it? I think it's a matter of grabbing the backup, but it would be nice if it was possible to recover the data.
I've had a look at the kb article but there are no transactions to resolve.
A:
It appears that it means your distributed transaction coordinator failed to start correctly when bringing the SQL Server online.
please refer to this ASP.NET forum post and knowledge base article
Depending on the level of logging, you should be able to take the last known backup and slowly recover the logs using point in time recovery techniques to slowly bring the database up to the point right before failure began.
A:
run checkdb to find out why it's been marked as suspect and see if it can be recovered without any data loss (win)
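For reference, a minimal sketch of that checkdb step (the database name is a placeholder); a database marked suspect usually has to be put into EMERGENCY mode before CHECKDB will run against it:
ALTER DATABASE MyDatabase SET EMERGENCY;
DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;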
|
What does a "WARNING: did not see LOP_CKPT_END" message mean on SQL Server 2005?
|
The above error message comes up just before SQL Server marks the database as "Suspect" and refuses to open it. Does anyone know what the message means and how to fix it? I think it's a matter of grabbing the backup, but it would be nice if it was possible to recover the data.
I've had a look at the kb article but there are no transactions to resolve.
|
[
"It appears that it means your distributed transaction coordinator failed to start correctly when bringing the SQL Server online.\nplease refer to this ASP.NET forum post and knowledge base article\nDepending on the level of logging, you should be able to take the last known backup and slowly recover the logs using point in time recovery techniques to slowly bring the database up to the point right before failure began.\n",
"run checkdb to find out why it's been marked as suspect and see if it can be recovered without any data loss (win)\n"
] |
[
1,
0
] |
[] |
[] |
[
"database",
"sql",
"sql_server"
] |
stackoverflow_0000101538_database_sql_sql_server.txt
|
Q:
Is it correct to use the backtick / comma idiom inside a (loop ...)?
I have some code which collects points (consed integers) from a loop which looks something like this:
(loop
for x from 1 to 100
for y from 100 downto 1
collect `(,x . ,y))
My question is, is it correct to use `(,x . ,y) in this situation?
Edit: This sample is not about generating a table of 100x100 items, the code here just illustrate the use of two loop variables and the consing of their values. I have edited the loop to make this clear. The actual loop I use depends on several other functions (and is part of one itself) so it made more sense to replace the calls with literal integers and to pull the loop out of the function.
A:
It would be much 'better' to just do (cons x y).
But to answer the question, there is nothing wrong with doing that :) (except making it a tad slower).
A:
I think the answer here is resource utilization (following from This post)
for example in clisp:
[1]> (time
(progn
(loop
for x from 1 to 100000
for y from 1 to 100000 do
collect (cons x y))
()))
WARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI
CL.
Real time: 0.469 sec.
Run time: 0.468 sec.
Space: 1609084 Bytes
GC: 1, GC time: 0.015 sec.
NIL
[2]> (time
(progn
(loop
for x from 1 to 100000
for y from 1 to 100000 do
collect `(,x . ,y)) ;`
()))
WARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI
CL.
Real time: 0.969 sec.
Run time: 0.969 sec.
Space: 10409084 Bytes
GC: 15, GC time: 0.172 sec.
NIL
[3]>
A:
dsm: there are a couple of odd things about your code here. Note that
(loop for x from 1 to 100000
for y from 1 to 100000 do
collect `(,x . ,y))
is equivalent to:
(loop for x from 1 to 100
collecting (cons x x))
which probably isn't quite what you intended. Note three things: First, the way you've written it, x and y have the same role. You probably meant to nest loops. Second, your do after the y is incorrect, as there is no Lisp form following it. Third, you're right that you could use the backtick approach here but it makes your code harder to read and not idiomatic for no gain, so best avoided.
Guessing at what you actually intended, you might do something like this (using loop):
(loop for x from 1 to 100 appending
(loop for y from 1 to 100 collecting (cons x y)))
If you don't like the loop macro (like Kyle), you can use another iteration construct like
(let ((list nil))
(dotimes (n 100) ;; 0 based count, you will have to add 1 to get 1 .. 100
(dotimes (m 100)
(push (cons n m) list)))
(nreverse list))
If you find yourself doing this sort of thing a lot, you should probably write a more general function for crossing lists, then pass it these lists of integers
If you really have a problem with iteration, not just loop, you can do this sort of thing recursively (but note, this isn't Scheme; your implementation may not guarantee TCO). The function "genint" shown by Kyle here is a variant of a common (but not standard) function iota. However, appending to the list is a bad idea. An equivalent implementation like this:
(defun iota (n &optional (start 0))
(let ((end (+ n start)))
(labels ((next (n)
(when (< n end)
(cons n (next (1+ n))))))
(next start))))
should be much more efficient, but still is not a tail call. Note I've set this up for the more usual 0-based count, but I've given you an optional parameter to start at 1 or any other integer. Of course the above can be written something like:
(defun iota (n &optional (start 0))
(loop repeat n
for i from start collecting i))
Which has the advantage of not blowing the stack for large arguments. If your implementation supports tail call elimination, you can also avoid the recursion running out of stack space by doing something like this:
(defun iota (n &optional (start 0))
(labels ((next (i list)
(if (>= i (+ n start))
nil
(next (1+ i) (cons i list)))))
(next start nil)))
Hope that helps!
A:
Why not just
(cons x y)
By the way, I tried to run your code in CLISP and it didn't work as expected. Since I'm not a big fan of the loop macro here's how you might accomplish the same thing recursively:
(defun genint (stop)
(if (= stop 1) '(1)
(append (genint (- stop 1)) (list stop))))
(defun genpairs (x y)
(let ((row (mapcar #'(lambda (y)
(cons x y))
(genint y))))
(if (= x 0) row
(append (genpairs (- x 1) y)
row))))
(genpairs 100 100)
|
Is it correct to use the backtick / comma idiom inside a (loop ...)?
|
I have some code which collects points (consed integers) from a loop which looks something like this:
(loop
for x from 1 to 100
for y from 100 downto 1
collect `(,x . ,y))
My question is, is it correct to use `(,x . ,y) in this situation?
Edit: This sample is not about generating a table of 100x100 items, the code here just illustrate the use of two loop variables and the consing of their values. I have edited the loop to make this clear. The actual loop I use depends on several other functions (and is part of one itself) so it made more sense to replace the calls with literal integers and to pull the loop out of the function.
|
[
"It would be much 'better' to just do (cons x y). \nBut to answer the question, there is nothing wrong with doing that :) (except making it a tad slower).\n",
"I think the answer here is resource utilization (following from This post)\nfor example in clisp:\n[1]> (time\n (progn\n (loop\n for x from 1 to 100000\n for y from 1 to 100000 do\n collect (cons x y))\n ()))\nWARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI\n CL.\nReal time: 0.469 sec.\nRun time: 0.468 sec.\nSpace: 1609084 Bytes\nGC: 1, GC time: 0.015 sec.\nNIL\n[2]> (time\n (progn\n (loop\n for x from 1 to 100000\n for y from 1 to 100000 do\n collect `(,x . ,y)) ;`\n ()))\nWARNING: LOOP: missing forms after DO: permitted by CLtL2, forbidden by ANSI\n CL.\nReal time: 0.969 sec.\nRun time: 0.969 sec.\nSpace: 10409084 Bytes\nGC: 15, GC time: 0.172 sec.\nNIL\n[3]>\n\n",
"dsm: there are a couple of odd things about your code here. Note that\n(loop for x from 1 to 100000\n for y from 1 to 100000 do\n collect `(,x . ,y))\n\nis equivalent to:\n(loop for x from 1 to 100\n collecting (cons x x))\n\nwhich probably isn't quite what you intended. Note three things: First, the way you've written it, x and y have the same role. You probably meant to nest loops. Second, your do after the y is incorrect, as there is not lisp form following it. Thirdly, you're right that you could use the backtick approach here but it makes your code harder to read and not idiomatic for no gain, so best avoided.\nGuessing at what you actually intended, you might do something like this (using loop):\n(loop for x from 1 to 100 appending \n (loop for y from 1 to 100 collecting (cons x y)))\n\nIf you don't like the loop macro (like Kyle), you can use another iteration construct like\n(let ((list nil)) \n (dotimes (n 100) ;; 0 based count, you will have to add 1 to get 1 .. 100\n (dotimes (m 100) \n (push (cons n m) list)))\n (nreverse list))\n\nIf you find yourself doing this sort of thing a lot, you should probably write a more general function for crossing lists, then pass it these lists of integers \nIf you really have a problem with iteration, not just loop, you can do this sort of thing recursively (but note, this isn't scheme, your implementation may not guaranteed TCO). The function \"genint\" shown by Kyle here is a variant of a common (but not standard) function iota. However, appending to the list is a bad idea. An equivalent implementation like this:\n(defun iota (n &optional (start 0))\n (let ((end (+ n start)))\n (labels ((next (n)\n (when (< n end) \n (cons n (next (1+ n))))))\n (next start))))\n\nshould be much more efficient, but still is not a tail call. Note I've set this up for the more usual 0-based, but given you an optional parameter to start at 1 or any other integer. Of course the above can be written something like:\n(defun iota (n &optional (start 0))\n (loop repeat n \n for i from start collecting i))\n\nWhich has the advantage of not blowing the stack for large arguments. If your implementation supports tail call elimination, you can also avoid the recursion running out of place by doing something like this:\n(defun iota (n &optional (start 0))\n (labels ((next (i list)\n (if (>= i (+ n start))\n nil\n (next (1+ i) (cons i list)))))\n (next start nil)))\n\nHope that helps! \n",
"Why not just\n(cons x y)\n\nBy the way, I tried to run your code in CLISP and it didn't work as expected. Since I'm not a big fan of the loop macro here's how you might accomplish the same thing recursively:\n(defun genint (stop)\n (if (= stop 1) '(1)\n (append (genint (- stop 1)) (list stop))))\n\n(defun genpairs (x y)\n (let ((row (mapcar #'(lambda (y)\n (cons x y))\n (genint y))))\n (if (= x 0) row\n (append (genpairs (- x 1) y)\n row))))\n\n(genpairs 100 100)\n\n"
] |
[
7,
5,
3,
1
] |
[] |
[] |
[
"iteration",
"lisp"
] |
stackoverflow_0000101487_iteration_lisp.txt
|
Q:
An issue possibly related to Cursor/Join
Here is my situation:
Table one contains a set of data that uses an id as a unique identifier. This table has a one-to-many relationship with about 6 other tables, such that:
Given Table 1 with Id of 001:
Table 2 might have 3 rows with foreign key: 001
Table 3 might have 12 rows with foreign key: 001
Table 4 might have 0 rows with foreign key: 001
Table 5 might have 28 rows with foreign key: 001
I need to write a report that lists all of the rows from Table 1 for a specified time frame followed by all of the data contained in the handful of tables that reference it.
My current approach in pseudo code would look like this:
select * from table 1
foreach(result) {
print result;
select * from table 2 where id = result.id;
foreach(result2) {
print result2;
}
select * from table 3 where id = result.id
foreach(result3) {
print result3;
}
//continued for each table
}
This means that the single report can run in the neighborhood of 1000 queries. I know this is excessive; however, my SQL-fu is a little weak and I could use some help.
A:
LEFT OUTER JOIN Tables2-N on Table1
SELECT Table1.*, Table2.*, Table3.*, Table4.*, Table5.*
FROM Table1
LEFT OUTER JOIN Table2 ON Table1.ID = Table2.ID
LEFT OUTER JOIN Table3 ON Table1.ID = Table3.ID
LEFT OUTER JOIN Table4 ON Table1.ID = Table4.ID
LEFT OUTER JOIN Table5 ON Table1.ID = Table5.ID
WHERE (CRITERIA)
A:
Join doesn't do it for me. I hate having to de-tangle the data on the client side. All those nulls from left-joining.
Here's a set-based solution that doesn't use Joins.
INSERT INTO @LocalCollection (theKey)
SELECT id
FROM Table1
WHERE ...
SELECT * FROM Table1 WHERE id in (SELECT theKey FROM @LocalCollection)
SELECT * FROM Table2 WHERE id in (SELECT theKey FROM @LocalCollection)
SELECT * FROM Table3 WHERE id in (SELECT theKey FROM @LocalCollection)
SELECT * FROM Table4 WHERE id in (SELECT theKey FROM @LocalCollection)
SELECT * FROM Table5 WHERE id in (SELECT theKey FROM @LocalCollection)
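(The snippet above assumes the table variable has already been declared, something like the following sketch - the key type is a guess based on the id column:)
DECLARE @LocalCollection TABLE (theKey int PRIMARY KEY);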
A:
Ah! Procedural! My SQL would look like this, if you needed to order the results from the other tables after the results from the first table.
Insert Into #rows Select id from Table1 where date between '12/30' and '12/31'
Select * from Table1 t join #rows r on t.id = r.id
Select * from Table2 t join #rows r on t.id = r.id
--etc
If you wanted to group the results by the initial ID, use a Left Outer Join, as mentioned previously.
A:
You may be best off to use a reporting tool like Crystal or Jasper, or even XSL-FO if you are feeling bold. They have things built in to handle specifically this. This is not something the would work well in raw SQL.
If the format of all of the rows (the headers as well as all of the details) is the same, it would also be pretty easy to do it as a stored procedure.
What I would do: Do it as a join, so you will have the header data on every row, then use a reporting tool to do the grouping.
A:
SELECT * FROM table1 t1
INNER JOIN table2 t2 ON t1.id = t2.resultid -- this could be a left join if the table is not guaranteed to have entries for t1.id
INNER JOIN table3 t3 ON t1.id = t3.resultid -- etc
OR if the data is all in the same format you could do.
SELECT cola,colb FROM table1 WHERE id = @id
UNION ALL
SELECT cola,colb FROM table2 WHERE resultid = @id
UNION ALL
SELECT cola,colb FROM table3 WHERE resultid = @id
It really depends on the format you require the data in for output to the report.
If you can give a sample of how you would like the output I could probably help more.
A:
Join all of the tables together.
select * from table_1 left join table_2 using(id) left join table_3 using(id);
Then, you'll want to roll up the columns in code to format your report as you see fit.
A:
What I would do is open up cursors on the following queries:
SELECT * from table1 order by id
SELECT * from table1 r, table2 t where t.table1_id = r.id order by r.id
SELECT * from table1 r, table3 t where t.table1_id = r.id order by r.id
And then I would walk those cursors in parallel, printing your results. You can do this because all appear in the same order. (Note that I would suggest that while the primary ID for table1 might be named id, it won't have that name in the other tables.)
A:
Do all the tables have the same format? If not, then you have to have a report that can display the n different types of rows. If you are only interested in the same columns then it is easier.
Most databases have some form of dynamic SQL. In that case you can do the following:
create temporary table from
select * from table1 where rows within time frame
x integer
sql varchar(something)
x = 1
while x <= numresults {
sql = 'SELECT * from table' + CAST(X as varchar) + ' where id in (select id from temporary table'
execute sql
x = x + 1
}
But I mean basically here you are running one query on your main table to get the rows that you need, then running one query for each sub table to get rows that match your main table.
If the report requires the same 2 or 3 columns for each table you could change the select * from tablex to be an insert into and get a single result set at the end...
|
An issue possibly related to Cursor/Join
|
Here is my situation:
Table one contains a set of data that uses an id as a unique identifier. This table has a one-to-many relationship with about 6 other tables, such that:
Given Table 1 with Id of 001:
Table 2 might have 3 rows with foreign key: 001
Table 3 might have 12 rows with foreign key: 001
Table 4 might have 0 rows with foreign key: 001
Table 5 might have 28 rows with foreign key: 001
I need to write a report that lists all of the rows from Table 1 for a specified time frame followed by all of the data contained in the handful of tables that reference it.
My current approach in pseudo code would look like this:
select * from table 1
foreach(result) {
print result;
select * from table 2 where id = result.id;
foreach(result2) {
print result2;
}
select * from table 3 where id = result.id
foreach(result3) {
print result3;
}
//continued for each table
}
This means that the single report can run in the neighborhood of 1000 queries. I know this is excessive; however, my SQL-fu is a little weak and I could use some help.
|
[
"LEFT OUTER JOIN Tables2-N on Table1\nSELECT Table1.*, Table2.*, Table3.*, Table4.*, Table5.*\nFROM Table1\nLEFT OUTER JOIN Table2 ON Table1.ID = Table2.ID\nLEFT OUTER JOIN Table3 ON Table1.ID = Table3.ID\nLEFT OUTER JOIN Table4 ON Table1.ID = Table4.ID\nLEFT OUTER JOIN Table5 ON Table1.ID = Table5.ID\nWHERE (CRITERIA)\n\n",
"Join doesn't do it for me. I hate having to de-tangle the data on the client side. All those nulls from left-joining.\nHere's a set-based solution that doesn't use Joins.\nINSERT INTO @LocalCollection (theKey)\nSELECT id\nFROM Table1\nWHERE ...\n\n\nSELECT * FROM Table1 WHERE id in (SELECT theKey FROM @LocalCollection)\n\nSELECT * FROM Table2 WHERE id in (SELECT theKey FROM @LocalCollection)\n\nSELECT * FROM Table3 WHERE id in (SELECT theKey FROM @LocalCollection)\n\nSELECT * FROM Table4 WHERE id in (SELECT theKey FROM @LocalCollection)\n\nSELECT * FROM Table5 WHERE id in (SELECT theKey FROM @LocalCollection)\n\n",
"Ah! Procedural! My SQL would look like this, if you needed to order the results from the other tables after the results from the first table.\n\nInsert Into #rows Select id from Table1 where date between '12/30' and '12/31'\nSelect * from Table1 t join #rows r on t.id = r.id\nSelect * from Table2 t join #rows r on t.id = r.id\n--etc\n\n\nIf you wanted to group the results by the initial ID, use a Left Outer Join, as mentioned previously.\n",
"You may be best off to use a reporting tool like Crystal or Jasper, or even XSL-FO if you are feeling bold. They have things built in to handle specifically this. This is not something the would work well in raw SQL.\nIf the format of all of the rows (the headers as well as all of the details) is the same, it would also be pretty easy to do it as a stored procedure.\nWhat I would do: Do it as a join, so you will have the header data on every row, then use a reporting tool to do the grouping.\n",
"SELECT * FROM table1 t1\nINNER JOIN table2 t2 ON t1.id = t2.resultid -- this could be a left join if the table is not guaranteed to have entries for t1.id\nINNER JOIN table2 t3 ON t1.id = t3.resultid -- etc\n\nOR if the data is all in the same format you could do.\nSELECT cola,colb FROM table1 WHERE id = @id\nUNION ALL\nSELECT cola,colb FROM table2 WHERE resultid = @id\nUNION ALL\nSELECT cola,colb FROM table3 WHERE resultid = @id\n\nIt really depends on the format you require the data in for output to the report.\nIf you can give a sample of how you would like the output I could probably help more.\n",
"Join all of the tables together.\nselect * from table_1 left join table_2 using(id) left join table_3 using(id);\n\nThen, you'll want to roll up the columns in code to format your report as you see fit.\n",
"What I would do is open up cursors on the following queries:\nSELECT * from table1 order by id\nSELECT * from table1 r, table2 t where t.table1_id = r.id order by r.id\nSELECT * from table1 r, table3 t where t.table1_id = r.id order by r.id\n\nAnd then I would walk those cursors in parallel, printing your results. You can do this because all appear in the same order. (Note that I would suggest that while the primary ID for table1 might be named id, it won't have that name in the other tables.)\n",
"Do all the tables have the same format? If not, then if you have to have a report that can display the n different types of rows. If you are only interested in the same columns then it is easier.\nMost databases have some form of dynamic SQL. In that case you can do the following:\ncreate temporary table from\nselect * from table1 where rows within time frame\n\nx integer\nsql varchar(something)\nx = 1\nwhile x <= numresults {\n sql = 'SELECT * from table' + CAST(X as varchar) + ' where id in (select id from temporary table'\n execute sql\n x = x + 1\n}\n\nBut I mean basically here you are running one query on your main table to get the rows that you need, then running one query for each sub table to get rows that match your main table.\nIf the report requires the same 2 or 3 columns for each table you could change the select * from tablex to be an insert into and get a single result set at the end...\n"
] |
[
3,
2,
1,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"database_cursor",
"join",
"optimization",
"sql"
] |
stackoverflow_0000103005_database_cursor_join_optimization_sql.txt
|
Q:
Game UI HUD
What are some good tools and techniques for making in-game UI? I'm looking for things that help artist-types create and animate game HUD (heads-up display) UI and can be added to the game engine for real time playback.
A:
If you are working with a middleware environment like Torque or Unity3D, they include a GUI framework to build on. Flash is an ideal tool, but to use in anything other than a Flash or Shockwave3d game you need to purchase ScaleForm too, which is expensive and isn't easy to get hold of for indie developers. WPF and Silverlight look promising for this purpose, but so far haven't been set up for game integration.
Unfortunately, for many developers the only solution is to roll their own UI components from scratch.
A:
Using flash will give the highest productivity for the graphical artist (well - if he knows flash).
You may want to have a look at gameswf. It's a bit dated but seems like a perfect match for your problem.
http://tulrich.com/geekstuff/gameswf.html
Another option would be to just do the entire UI in your 3D content-tool and use your animation system to play back the transitions.
A:
One option is to use Flash in conjunction with a package called ScaleForm. This allows the artist to make the UI in flash and then ScaleForm executes the flash in game.
|
Game UI HUD
|
What are some good tools and techniques for making in-game UI? I'm looking for things that help artist-types create and animate game HUD (heads-up display) UI and can be added to the game engine for real time playback.
|
[
"If you are working with a middleware environment like Torque or Unity3D, they include a GUI framework to build on. Flash is an ideal tool, but to use in anything other than a Flash or Shockwave3d game you need to purchase ScaleForm too, which is expensive and isn't easy to get hold of for indie developers. WPF and Silverlight look promising for this purpose, but so far haven't been set up for game integration. \nUnfortunately, for many developers the only solution is to roll their own UI components from scratch.\n",
"Using flash will give the highest productivity for the graphical artist (well - if he knows flash). \nYou may want to have a look at gameswf. It's a bit dated but seems like a perfect match for your problem.\nhttp://tulrich.com/geekstuff/gameswf.html\nAnother option would be to just do the entire UI in your 3D content-tool and use your animation system to play back the transitions. \n",
"One option is to use Flash in conjunction with a package called ScaleForm. This allows the artist to make the UI in flash and then ScaleForm executes the flash in game.\n"
] |
[
5,
3,
2
] |
[] |
[] |
[
"hud",
"user_interface"
] |
stackoverflow_0000097681_hud_user_interface.txt
|
Q:
What is the best way to store a knowledge base of business rules for helpdesk?
Does anyone know of any software or a good way for developers to build up a knowledge base of business rules that are built into the software, for the help desk to use?
We already have helpdesk software, but we are not looking to replace it.
A:
A wiki is definitely the way to go. Processes change, sometimes frequently, and in a fast-paced environment like a help desk a tool that allows quick, easy access and management of that type of content is extremely important to allow people to do their jobs effectively.
One of the greatest benefits I've found is the hierarchical structure of many wikis, allowing employees to find the correct content from a number of different customer angles.
A:
Can you be more specific?
This may fall under "policies and procedures" management software. Here are some:
http://www.softscout.com/software/Human-Resources/Policy-and-Procedures.html
I'd like to find one that's more wiki-like or easier to integrate into a website serving as a more general company knowledge base.
A:
I would recommend a wiki with a "Wiki Gardener" role - someone who cleans up the duplicate entries and sorts them.
Wiki technology with a Rich Text Editor option would be useful if your Support Desk is not totally technical.
Having some structure is imperative: develop something in any wiki that makes sense to the general editing populace and has a low threshold to get from reading to editing. You will also possibly need a migration strategy for turning hundreds of little notes into something more readable and searchable.
|
What is the best way to store a knowledge base of business rules for helpdesk?
|
Does anyone know of any software or a good way for developers to build up a knowledge base of business rules that are built into the software, for the help desk to use?
We already have helpdesk software, but we are not looking to replace it.
|
[
"A wiki is definitely the way to go. Processes change, sometimes frequently, and in a fast-paced environment like a help desk a tool that allows quick, easy access and management of that type of content is extremely important to allow people to do their jobs effectively.\nOne of the greatest benefits I've found is the heiarchical sturcture of many wikis, allowing employees to find the correct content from a number of different customer angles.\n",
"Can you be more specific?\nThis may fall under \"policies and procedures\" management software. Here are some:\nhttp://www.softscout.com/software/Human-Resources/Policy-and-Procedures.html\nI'd like to find one that's more wiki-like or easier to integrate into a a website serving as a more general company knowlege base.\n",
"I would recommend a wiki wiht a \"Wiki Gardener\" role- someone who cleans up the duplicate entries and sorts.\nWiki technology with a Rich Text Editor option would useful if your Support Desk are not totally technical. \nHaving some structure is imperative, developing something in any Wiki that makes sense to the general editing populace, and has a low threshold to get from reading to editing. You will also possibly need a migration strategy for taking hundereds of little notes into something more readable and searchable. \n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"knowledge_management",
"tool_rec"
] |
stackoverflow_0000080958_knowledge_management_tool_rec.txt
|
Q:
What happens if MySQL connections continually aren't closed on PHP pages?
At the beginning of each PHP page I open up the connection to MySQL, use it throughout the page and close it at the end of the page. However, I often redirect in the middle of the page to another page and so in those cases the connection does not get closed. I understand that this is not bad for performance of the web server since PHP automatically closes all MySQL connections at the end of each page anyway. Are there any other issues here to keep in mind, or is it really true that you don't have to worry about closing your database connections in PHP?
$mysqli = new mysqli("localhost", "root", "", "test");
...do stuff, perhaps redirect to another page...
$mysqli->close();
A:
From: http://us3.php.net/manual/en/mysqli.close.php
"Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster."
A:
Just because you redirect doesn't mean the script stops executing. A redirect is just a header being sent. If you don't exit() right after, the rest of your script will continue running. When the script does finish running, it will close off all open connections (or release them back to the pool if you're using persistent connections). Don't worry about it.
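For what it's worth, a small sketch of the explicit version (the target URL is a placeholder): close the connection yourself before redirecting, then exit() so the rest of the script does not keep running:
$mysqli->close();
header('Location: other_page.php');
exit();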
A:
There might be a limit on how many connections can be open at once, so if you have many users you might run out of SQL connections. In effect, users will see SQL errors instead of nice web pages.
It's better to open a connection to read data, then close it, then display data and once the user clicks "submit" you open another connection and then submit all changes.
|
What happens if MySQL connections continually aren't closed on PHP pages?
|
At the beginning of each PHP page I open up the connection to MySQL, use it throughout the page and close it at the end of the page. However, I often redirect in the middle of the page to another page and so in those cases the connection does not get closed. I understand that this is not bad for performance of the web server since PHP automatically closes all MySQL connections at the end of each page anyway. Are there any other issues here to keep in mind, or is it really true that you don't have to worry about closing your database connections in PHP?
$mysqli = new mysqli("localhost", "root", "", "test");
...do stuff, perhaps redirect to another page...
$mysqli->close();
|
[
"From: http://us3.php.net/manual/en/mysqli.close.php\n\n\"Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster.\"\n\n",
"Just because you redirect doesn't mean the script stops executing. A redirect is just a header being sent. If you don't exit() right after, the rest of your script will continue running. When the script does finish running, it will close off all open connections (or release them back to the pool if you're using persistent connections). Don't worry about it.\n",
"There might be a limit of how many connections can be open at once, so if you have many user you might run out of SQL connections. In effect, users will see SQL errors instead of nice web pages.\nIt's better to open a connection to read data, then close it, then display data and once the user clicks \"submit\" you open another connection and then submit all changes.\n"
] |
[
19,
5,
1
] |
[] |
[] |
[
"mysql",
"php"
] |
stackoverflow_0000103281_mysql_php.txt
|
Q:
Does Weblogic 9.x support the 2.4 Servlet standard?
Seems like a simple enough question but I can't seem to find the answer. And hey, dead simple questions like this with dead simple answers is what Joel and Jeff want SO to be all about, right?
A:
http://e-docs.bea.com/wls/docs92/compatibility/compatibility.html
BEA WebLogic Server is one hundred percent J2EE 1.4 compatible
...which means that it supports the servlet 2.4 specification.
|
Does Weblogic 9.x support the 2.4 Servlet standard?
|
Seems like a simple enough question but I can't seem to find the answer. And hey, dead simple questions like this with dead simple answers is what Joel and Jeff want SO to be all about, right?
|
[
"http://e-docs.bea.com/wls/docs92/compatibility/compatibility.html\n\nBEA WebLogic Server is one hundred percent J2EE 1.4 compatible\n\n...which means that it supports the servlet 2.4 specification.\n"
] |
[
2
] |
[] |
[] |
[
"java",
"servlets",
"weblogic"
] |
stackoverflow_0000103271_java_servlets_weblogic.txt
|
Q:
IIS ASP Caching
I'm trying to configure ASP caching in IIS, following the instructions of a software I purchased. This is supposed to make it run faster.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a5766228-828e-4e31-a92b-51da7d24d569.mspx?mfr=true
The software instructions point to that article.
The problem I'm having is that the "ASP File Cache section" that's mentioned there does not exist in my IIS dialog...
Am I missing something?
Is there any configuration that'll make it appear?
I'm running IIS 6.0 on W2003 Server Enterprise Edition.
Update 1: I am logged in as the local administrator (the box is not in a domain)
A:
Right click on "Web sites" in IIS manager. Choose Properties->Home directory->Configuration and you'll see "Cache options" tab.
The trick is to click on "Web sites" as opposed to proceeding to specific web site.
|
IIS ASP Caching
|
I'm trying to configure ASP caching in IIS, following the instructions of a software I purchased. This is supposed to make it run faster.
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a5766228-828e-4e31-a92b-51da7d24d569.mspx?mfr=true
The software instructions point to that article.
The problem I'm having is that the "ASP File Cache section" that's mentioned there does not exist in my IIS dialog...
Am I missing something?
Is there any configuration that'll make it appear?
I'm running IIS 6.0 on W2003 Server Enterprise Edition.
Update 1: I am logged in as the local administrator (the box is not in a domain)
|
[
"Right click on \"Web sites\" in IIS manager. Choose Properties->Home directory->Configuration and you'll see \"Cache options\" tab.\nThe trick is to click on \"Web sites\" as opposed to proceeding to specific web site.\n"
] |
[
7
] |
[] |
[] |
[
"asp_classic",
"caching",
"iis"
] |
stackoverflow_0000103158_asp_classic_caching_iis.txt
|
Q:
NDepend CQL Count Query
I want to query a table of public methods of a specific class and a count of each method's usage in NDepend CQL. Currently the query looks like this:
SELECT METHODS
FROM TYPES "AE.DataAccess.DBHelper"
WHERE IsPublic
Is it possible to aggregate queries in CQL?
A:
It looks like the following query will generate a nice table with the values I was looking for that can be exported to Excel. What an awesome tool.
SELECT METHODS FROM TYPES
"AE.DataAccess.DBHelper" WHERE
IsPublic ORDER BY MethodCa DESC
|
NDepend CQL Count Query
|
I want to query a table of public methods of a specific class and a count of each method's usage in NDepend CQL. Currently the query looks like this:
SELECT METHODS
FROM TYPES "AE.DataAccess.DBHelper"
WHERE IsPublic
Is it possible to aggregate queries in CQL?
|
[
"It looks like the following query will generate a nice table with the values I was looking for that can be exported to Excel. What an awesome tool.\n\nSELECT METHODS FROM TYPES\n \"AE.DataAccess.DBHelper\" WHERE\n IsPublic ORDER BY MethodCa DESC\n\n"
] |
[
2
] |
[] |
[] |
[
"aggregate",
"cql",
"ndepend"
] |
stackoverflow_0000098186_aggregate_cql_ndepend.txt
|
Q:
What's the easiest way to convert latitude and longitude to double values
I've got a CSV file containing latitude and longitude values, such as:
"25°36'55.57""E","45°39'12.52""N"
Anyone have a quick and simple piece of C# code to convert this to double values?
Thanks
A:
If you mean C# code to do this:
result = 25 + (36 / 60) + (55.57 / 3600)
First you'll need to parse the expression with Regex or some other mechanism and split it into the individual parts. Then:
String hour = "25";
String minute = "36";
String second = "55.57";
Double result = Double.Parse(hour) + Double.Parse(minute) / 60 + Double.Parse(second) / 3600;
And of course a switch to flip sign depending on N/S or E/W. Wikipedia has a little on that:
For calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention -- that East be positive -- is consistent with a right-handed Cartesian coordinate system with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface.
(http://en.wikipedia.org/wiki/Longitude)
A:
Thanks for all the quick answers. Based on the answer by amdfan, I put this code together that does the job in C#.
/// <summary>The regular expression parser used to parse the lat/long</summary>
private static Regex Parser = new Regex("^(?<deg>[-+0-9]+)[^0-9]+(?<min>[0-9]+)[^0-9]+(?<sec>[0-9.,]+)[^0-9.,ENSW]+(?<pos>[ENSW]*)$");
/// <summary>Parses the lat lon value.</summary>
/// <param name="value">The value.</param>
/// <remarks>It must have at least 3 parts 'degrees' 'minutes' 'seconds'. If it
/// has E/W and N/S this is used to change the sign.</remarks>
/// <returns></returns>
public static double ParseLatLonValue(string value)
{
// If it starts and finishes with a quote, strip them off
if (value.StartsWith("\"") && value.EndsWith("\""))
{
value = value.Substring(1, value.Length - 2).Replace("\"\"", "\"");
}
// Now parse using the regex parser
Match match = Parser.Match(value);
if (!match.Success)
{
throw new ArgumentException(string.Format(CultureInfo.CurrentUICulture, "Lat/long value of '{0}' is not recognised", value));
}
// Convert - adjust the sign if necessary
double deg = double.Parse(match.Groups["deg"].Value);
double min = double.Parse(match.Groups["min"].Value);
double sec = double.Parse(match.Groups["sec"].Value);
double result = deg + (min / 60) + (sec / 3600);
if (match.Groups["pos"].Success)
{
char ch = match.Groups["pos"].Value[0];
result = ((ch == 'S') || (ch == 'W')) ? -result : result;
}
return result;
}
A:
What do you want to represent it as? Arc seconds?
There are 60 minutes in every degree and 60 seconds in every minute.
You would then have to keep track of E and N yourself.
This is not how it's done generally though.
The easiest representation I've seen to work with is a point plotted on the globe on a grid system that has its origin through the center of the earth.[Thus a nice position vector.] The problem with this is that while it's easy to use the data, getting it into and out of the system correctly can be tough, because the earth is not round, or for that matter uniform.
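As a rough illustration of that grid system (a sketch only - it assumes a perfectly spherical Earth, which, as noted above, is not really true):
// Converts latitude/longitude in degrees to an Earth-centred Cartesian vector,
// assuming a spherical Earth with an approximate mean radius R.
static void ToCartesian(double latDeg, double lonDeg, out double x, out double y, out double z)
{
    const double R = 6371000.0; // metres, approximate mean radius
    double lat = latDeg * Math.PI / 180.0;
    double lon = lonDeg * Math.PI / 180.0;
    x = R * Math.Cos(lat) * Math.Cos(lon);
    y = R * Math.Cos(lat) * Math.Sin(lon);
    z = R * Math.Sin(lat);
}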
|
What's the easiest way to convert latitude and longitude to double values
|
I've got a CSV file containing latitude and longitude values, such as:
"25°36'55.57""E","45°39'12.52""N"
Anyone have a quick and simple piece of C# code to convert this to double values?
Thanks
|
[
"If you mean C# code to do this:\nresult = 25 + (36 / 60) + (55.57 / 3600)\nFirst you'll need to parse the expression with Regex or some other mechanism and split it into the individual parts. Then:\nString hour = \"25\";\nString minute = \"36\";\nString second = \"55.57\";\nDouble result = (hour) + (minute) / 60 + (second) / 3600;\n\nAnd of course a switch to flip sign depending on N/S or E/S. Wikipedia has a little on that:\n\nFor calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention -- that East be positive -- is consistent with a right-handed Cartesian coordinate system with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface.\n (http://en.wikipedia.org/wiki/Longitude)\n\n",
"Thanks for all the quick answers. Based on the answer by amdfan, I put this code together that does the job in C#.\n/// <summary>The regular expression parser used to parse the lat/long</summary>\nprivate static Regex Parser = new Regex(\"^(?<deg>[-+0-9]+)[^0-9]+(?<min>[0-9]+)[^0-9]+(?<sec>[0-9.,]+)[^0-9.,ENSW]+(?<pos>[ENSW]*)$\");\n\n/// <summary>Parses the lat lon value.</summary>\n/// <param name=\"value\">The value.</param>\n/// <remarks>It must have at least 3 parts 'degrees' 'minutes' 'seconds'. If it \n/// has E/W and N/S this is used to change the sign.</remarks>\n/// <returns></returns>\npublic static double ParseLatLonValue(string value)\n{\n // If it starts and finishes with a quote, strip them off\n if (value.StartsWith(\"\\\"\") && value.EndsWith(\"\\\"\"))\n {\n value = value.Substring(1, value.Length - 2).Replace(\"\\\"\\\"\", \"\\\"\");\n }\n\n // Now parse using the regex parser\n Match match = Parser.Match(value);\n if (!match.Success)\n {\n throw new ArgumentException(string.Format(CultureInfo.CurrentUICulture, \"Lat/long value of '{0}' is not recognised\", value));\n }\n\n // Convert - adjust the sign if necessary\n double deg = double.Parse(match.Groups[\"deg\"].Value);\n double min = double.Parse(match.Groups[\"min\"].Value);\n double sec = double.Parse(match.Groups[\"sec\"].Value);\n double result = deg + (min / 60) + (sec / 3600);\n if (match.Groups[\"pos\"].Success)\n {\n char ch = match.Groups[\"pos\"].Value[0];\n result = ((ch == 'S') || (ch == 'W')) ? -result : result;\n }\n return result;\n}\n\n",
"What are you wanting to represent it as? Arc seconds?\nThen 60 min in every degree, 60 seconds in every minute.\nYou would then have to keep E and N by yourself. \nThis is not how it's done generally though.\nThe easiest representation I've seen to work with is a point plotted on the globe on a grid system that has its origin through the center of the earth.[Thus a nice position vector.] The problem with this is that while it's easy to use the data, getting it into and out of the system correctly can be tough, because the earth is not round, or for that matter uniform.\n"
] |
[
11,
8,
0
] |
[] |
[] |
[
"c#",
"gis"
] |
stackoverflow_0000103006_c#_gis.txt
|
Q:
How to convert a unmanaged double to a managed string?
From managed C++, I am calling an unmanaged C++ method which returns a double. How can I convert this double into a managed string?
A:
I assume something like
(gcnew System::Double(d))->ToString()
A:
C++ is definitely not my strongest skillset. Misread the question, but this should convert to a std::string, not exactly what you are looking for though, but leaving it since it was the original post....
double d = 123.45;
std::ostringstream oss;
oss << d;
std::string s = oss.str();
This should convert to a managed string however..
double d = 123.45;
String^ s = System::Convert::ToString(d);
|
How to convert a unmanaged double to a managed string?
|
From managed C++, I am calling an unmanaged C++ method which returns a double. How can I convert this double into a managed string?
|
[
"I assume something like\n(gcnew System::Double(d))->ToString()\n\n",
"C++ is definitely not my strongest skillset. Misread the question, but this should convert to a std::string, not exactly what you are looking for though, but leaving it since it was the original post....\ndouble d = 123.45;\nstd::ostringstream oss;\noss << d;\nstd::string s = oss.str();\n\nThis should convert to a managed string however..\ndouble d = 123.45\nString^ s = System::Convert::ToString(d);\n\n"
] |
[
7,
2
] |
[] |
[] |
[
"c++",
"double",
"managed",
"unmanaged"
] |
stackoverflow_0000103298_c++_double_managed_unmanaged.txt
|
Q:
How to optimize compiling a 32 bit application in Visual C++ 2005 on a 64 bit Windows Server 2008
I have just installed a build server with 64 bit Windows Server 2008 for continuous integration.
The reason I chose a 64 bit server was to have more than ~3Gb of RAM. I had hopes that this machine would provide blazing fast builds.
Unfortunately, the results are lacking greatly, to say the least. My desktop provides faster builds than this server equipped with a Xeon quad core, 15k RPM SAS and 8 Gigs of RAM.
We use Visual C++ 2005 to compile our 32 bit application with Cygwin.
Could the WOW64 emulator be the bottleneck that is slowing down the build process?
Any pointers, comments would be greatly appreciated.
Regards,
A:
WOW64 is not an emulator on x64. The processor natively executes 32-bit x86 code. At the bottom of the user-mode stack, under kernel32 et al, are DLLs which map system calls to the 64-bit call interface.
See WOW64 Implementation Details.
A:
We use Visual C++ 2005 to compile our
32 bit application with Cygwin.
I think that's the problem. I like Cygwin a lot, but it is really slow when it comes to file I/O. It helps a bit to deactivate the NTFS filesystem feature that keeps track of the last file access.
To get a better speed boost, port your build script / makefile to use the native command shell if possible and only call Cygwin tools if there is really no replacement available.
If you use the gcc compiler try the mingw version. That one is a lot faster.
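For reference, the last-access tracking mentioned above can be switched off from an elevated command prompt (this is a system-wide NTFS setting, and a reboot is needed for it to fully take effect):
fsutil behavior set disablelastaccess 1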
|
How to optimize compiling a 32 bit application in Visual C++ 2005 on a 64 bit Windows Server 2008
|
I have just installed a build server with 64 bit Windows Server 2008 for continuous integration.
The reason I chose a 64 bit server was to have more than ~3Gb of RAM. I had hopes that this machine would provide blazing fast builds.
Unfortunately, the results are lacking greatly, to say the least. My desktop provides faster builds than this server equipped with a Xeon quad core, 15k RPM SAS and 8 Gigs of RAM.
We use Visual C++ 2005 to compile our 32 bit application with Cygwin.
Could the WOW64 emulator be the bottleneck that is slowing down the build process?
Any pointers, comments would be greatly appreciated.
Regards,
|
[
"WOW64 is not an emulator on x64. The processor natively executes 32-bit x86 code. At the bottom of the user-mode stack, under kernel32 et al, are DLLs which map system calls to the 64-bit call interface.\nSee WOW64 Implementation Details.\n",
"\nWe use Visual C++ 2005 to compile our\n 32 bit application with Cygwin.\n\nI think that's the problem. I like Cygwin a lot, but it is really slow when it comes to file I/O. It helps a bit to deactivate the NTFS filesystem feature to keep track of the last file-access. \nTo get a better speed boost port your build-script / makefile to use the native command shell if pssible and only call cygwin-tools if there is really no replacement available. \nIf you use the gcc compiler try the mingw version. That one is a lot faster.\n"
] |
[
1,
1
] |
[] |
[] |
[
"cygwin",
"visual_c++",
"visual_studio_2005",
"windows_server_2008",
"wow64"
] |
stackoverflow_0000102377_cygwin_visual_c++_visual_studio_2005_windows_server_2008_wow64.txt
|
Q:
Setting the 'audience' in a SharePoint-NavigationNode?
Hello, I am using WSS 3.0 and I need to display certain entries of a website's navigation ("Quicklaunch") to specified groups only. According to this blog post this can be done using properties of the SPNavigationNode - but it seems the solution to the problem is 'MOSS only'. Is there a way to do this in WSS?
A:
The QuickLaunch (QL) will do security trimming for the default items on the menu. In other words, if a user doesn't have access to what the QL nav item points to, it won't be displayed to her. However, the QL unfortunately does not do security trimming on nav items you add manually through the GUI. If you add items via the object model and indicate that they should be security-trimmed, it will work.
I was able to both add and remove security-trimmed QL nav items to WSS using the code in this blog post. (Actually, I did it via PowerShell, but that's still using the same object model code.) I hope that helps.
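For illustration, a rough sketch of adding such a node through the object model (the site URL and titles here are made up; per the answer above, passing false for isExternal - i.e. pointing the node at content inside the site - is what allows WSS to security-trim it):
// requires the Microsoft.SharePoint and Microsoft.SharePoint.Navigation namespaces
using (SPSite site = new SPSite("http://server/sites/team"))
using (SPWeb web = site.OpenWeb())
{
    // false = internal link, so the Quick Launch can security-trim it
    SPNavigationNode node = new SPNavigationNode(
        "Team Documents", "Shared Documents/Forms/AllItems.aspx", false);
    web.Navigation.QuickLaunch.AddAsLast(node);
}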
|
Setting the 'audience' in a SharePoint-NavigationNode?
|
Hello, I am using WSS 3.0 and I need to display certain entries of a website's navigation ("Quicklaunch") to specified groups only. According to this blog post this can be done using properties of the SPNavigationNode - but it seems the solution to the problem is 'MOSS only'. Is there a way to do this in WSS?
|
[
"The QuickLaunch (QL) will do security trimming for the default items on the menu. In other words, if a user doesn't have access to what the QL nav item points to, it won't be displayed to her. However, the QL unfortunately does not do security trimming on nav items you add manually through the GUI. If you add items via the object model and indicate that they should be security-trimmed, it will work. \nI was able to both add and remove security-trimmed QL nav items to WSS using the code in this blog post. (Actually, I did it via PowerShell, but that's still using the same object model code.) I hope that helps. \n"
] |
[
1
] |
[] |
[] |
[
"sharepoint"
] |
stackoverflow_0000101704_sharepoint.txt
|
Q:
How can I overcome inconsistent behaviour of snprintf in different UNIX-like operating systems?
Per the man pages, snprintf returns the number of bytes written from glibc version 2.2 onwards. But on glibc versions below 2.2 and on HP-UX, it returns a positive integer, which could lead to a buffer overflow.
How can one overcome this and write portable code?
Edit : For want of more clarity
This code is working perfectly in lib 2.3
if ( snprintf( cmd, cmdLen + 1, ". %s%s", myVar1, myVar2 ) != cmdLen )
{
fprintf( stderr, "\nError: Unable to copy bmake command!!!");
returnCode = ERR_COPY_FILENAME_FAILED;
}
It returns the length of the string (10) on Linux. But the same code is returning a positive number that is greater than the number of characters printed on HP-UX machine. Hope this explanation is fine.
A:
You could create a snprintf wrapper that returns -1 whenever there is not enough space in the buffer.
See the man page for more docs. It also has an example which treats all the cases.
while (1) {
/* Try to print in the allocated space. */
va_start(ap, fmt);
n = vsnprintf (p, size, fmt, ap);
va_end(ap);
/* If that worked, return the string. */
if (n > -1 && n < size)
return p;
/* Else try again with more space. */
if (n > -1) /* glibc 2.1 */
size = n+1; /* precisely what is needed */
else /* glibc 2.0 */
size *= 2; /* twice the old size */
if ((np = realloc (p, size)) == NULL) {
free(p);
return NULL;
} else {
p = np;
}
}
A:
Have you considered a portable implementation of printf? I looked for one a little while ago and settled on trio.
http://daniel.haxx.se/projects/trio/
A:
Your question is still unclear. The man page linked to speaks thus:
The functions snprintf() and vsnprintf() do not write more than size bytes (including
the trailing '\0'). If the output was truncated due to this limit then the return value is the number of characters (not including the trailing '\0') which would have been written to the final string if enough space had been available. Thus, a return value of size or more means that the output was truncated.
So, if you want to know if your output was truncated:
int ret = snprintf(cmd, cmdLen + 1, ". %s%s", myVar1, myVar2);
if(ret == -1 || ret > cmdLen)
{
//output was truncated
}
else
{
//everything is groovy
}
A:
There are a whole host of problems with *printf portability, and realistically you probably want to follow one of three paths:
Require a c99 compliant *printf, because 9 years should be enough for anyone, and just say the platform is broken otherwise.
Have a my_snprintf() with a bunch of #ifdef's for the specific platforms you want to support all calling the vsnprintf() underneath (understanding the lowest common denominator is what you have).
Just carry around a copy of vsnprintf() with your code, for simple usecases it's actually pretty simple and for others you'd probably want to look at vstr and you'll get custom formatters for free.
...as other people have suggested you can do a hack merging #1 and #2, just for the -1 case, but that is risky due to the fact that c99 *printf can/does return -1 in certain conditions.
Personally I'd recommend just going with a string library like ustr, which does the simple workarounds for you and gives you managed strings for free. If you really care you can combine with vstr.
A:
I have found one portable way to predict and/or limit the number of characters returned by sprintf and related functions, but it's inefficient and many consider it inelegant.
What you do is create a temporary file with tmpfile(), fprintf() to that (which reliably returns the number of bytes written), then rewind and read all or part of the text into a buffer.
Example:
int my_snprintf(char *buf, size_t n, const char *fmt, ...)
{
va_list va;
int nchars;
FILE *tf = tmpfile();
va_start(va, n);
nchars = vfprintf(tf, fmt, va);
if (nchars >= (int) n)
nchars = (int) n - 1;
va_end(va);
memset(buf, 0, 1 + (size_t) nchars);
if (nchars > 0)
{
rewind(tf);
fread(buf, 1, (size_t) nchars, tf);
}
fclose(tf);
return nchars;
}
A:
Use the much superior asprintf() instead.
It's a GNU extension, but it's worth copying to the target platform in the event that it's not natively available.
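A sketch of what that looks like with the variables from the question (asprintf() allocates the buffer itself, so there is no size to get wrong, but you must check for -1 and free() the result):
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>

char *cmd = NULL;
int len = asprintf(&cmd, ". %s%s", myVar1, myVar2);
if (len == -1)
{
    fprintf(stderr, "\nError: Unable to copy bmake command!!!");
    returnCode = ERR_COPY_FILENAME_FAILED;
}
else
{
    /* ... use cmd ... */
    free(cmd);
}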
|
How can I overcome inconsistent behaviour of snprintf in different UNIX-like operating systems?
|
Per the man pages, snprintf returns the number of bytes written from glibc version 2.2 onwards. But on glibc versions below 2.2 and on HP-UX, it returns a positive integer, which could lead to a buffer overflow.
How can one overcome this and write portable code?
Edit : For want of more clarity
This code is working perfectly in lib 2.3
if ( snprintf( cmd, cmdLen + 1, ". %s%s", myVar1, myVar2 ) != cmdLen )
{
fprintf( stderr, "\nError: Unable to copy bmake command!!!");
returnCode = ERR_COPY_FILENAME_FAILED;
}
It returns the length of the string (10) on Linux. But the same code is returning a positive number that is greater than the number of characters printed on HP-UX machine. Hope this explanation is fine.
|
[
"you could create a snprintf wrapper that returns -1 for each case when there is not enough space in the buffer.\nSee the man page for more docs. It has also an example which threats all the cases.\n while (1) {\n /* Try to print in the allocated space. */\n va_start(ap, fmt);\n n = vsnprintf (p, size, fmt, ap);\n va_end(ap);\n /* If that worked, return the string. */\n if (n > -1 && n < size)\n return p;\n /* Else try again with more space. */\n if (n > -1) /* glibc 2.1 */\n size = n+1; /* precisely what is needed */\n else /* glibc 2.0 */\n size *= 2; /* twice the old size */\n if ((np = realloc (p, size)) == NULL) {\n free(p);\n return NULL;\n } else {\n p = np;\n }\n }\n\n",
"Have you considered a portable implementation of printf? I looked for one a little while ago and settled on trio. \nhttp://daniel.haxx.se/projects/trio/\n",
"Your question is still unclear. The man page linked to speaks thus:\n\nThe functions snprintf() and vsnprintf() do not write more than size bytes (including\n the trailing '\\0'). If the output was truncated due to this limit then the return value is the number of characters (not including the trailing '\\0') which would have been written to the final string if enough space had been available. Thus, a return value of size or more means that the output was truncated.\n\nSo, if you want to know if your output was truncated:\nint ret = snprintf(cmd, cmdLen + 1, \". %s%s\", myVar1, myVar2 ) == -1)\nif(ret == -1 || ret > cmdLen)\n{\n //output was truncated\n}\nelse\n{\n //everything is groovy\n}\n\n",
"There are a whole host of problems with *printf portability, and realistically you probably want to follow one of three paths:\n\nRequire a c99 compliant *printf, because 9 years should be enough for anyone, and just say the platform is broken otherwise.\nHave a my_snprintf() with a bunch of #ifdef's for the specific platforms you want to support all calling the vsnprintf() underneath (understanding the lowest common denominator is what you have).\nJust carry around a copy of vsnprintf() with your code, for simple usecases it's actually pretty simple and for others you'd probably want to look at vstr and you'll get customer formatters for free.\n\n...as other people have suggested you can do a hack merging #1 and #2, just for the -1 case, but that is risky due to the fact that c99 *printf can/does return -1 in certain conditions.\nPersonally I'd recommend just going with a string library like ustr, which does the simple workarounds for you and gives you managed strings for free. If you really care you can combine with vstr.\n",
"I have found one portable way to predict and/or limit the number of characters returned by sprintf and related functions, but it's inefficient and many consider it inelegant.\nWhat you do is create a temporary file with tmpfile(), fprintf() to that (which reliably returns the number of bytes written), then rewind and read all or part of the text into a buffer.\nExample:\nint my_snprintf(char *buf, size_t n, const char *fmt, ...)\n{\n va_list va;\n int nchars;\n FILE *tf = tmpfile();\n\n va_start(va, n);\n nchars = vfprintf(tf, fmt, va);\n if (nchars >= (int) n)\n nchars = (int) n - 1;\n va_end(va);\n memset(buf, 0, 1 + (size_t) nchars);\n\n if (nchars > 0)\n {\n rewind(tf);\n fread(buf, 1, (size_t) nchars, tf);\n }\n\n fclose(tf);\n\n return nchars; \n}\n\n",
"Use the much superior asprintf() instead.\nIt's a GNU extension, but it's worth copying to the target platform in the event that it's not natively available.\n"
] |
[
4,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"buffer_overflow",
"buffer_overrun",
"c",
"cross_platform"
] |
stackoverflow_0000100904_buffer_overflow_buffer_overrun_c_cross_platform.txt
|
Q:
Minimal latency objects pooling technique in multithread application
In the application we have around 30 types of objects that are created repeatedly.
Some of them have long life (hours) some have short (milliseconds).
Objects could be created in one thread and destroyed in another.
Does anybody have any clue what could be a good pooling technique in the sense of minimal creation/destruction latency, low lock contention and reasonable memory utilization?
Append 1.
1.1. Object pool/memory allocations for one type usually is not related to another type (see 1.3 for an exception)
1.2. Memory allocation is performed for only one type (class) at time, usually for several objects at time.
1.3. If a type aggregates another type using a pointer (for some reason), these types are allocated together in one contiguous piece of memory.
Append 2.
2.1. Using a collection with access serialization per type is known to be worse than new/delete.
2.2. Application is used on different platforms/compilers and cannot use compiler/platform specific tricks.
Append 3.
It becomes obvious that the fastest (lowest latency) implementation should organize object pooling as a star-like network of factories, where the central factory is global to the other, thread-specific factories. Regular object provision/recycling is more effective to do in a thread-specific factory, while the central factory could be used for object balancing between threads.
3.1. What is the most effective way to organize communications between the central factory and thread specific factories?
A:
I assume you have profiled and measured your code after doing all that creation and verified that create/destroy is actually causing an issue. If not, that is what you should do first.
If you still want to do the object pooling, as a first step you should ensure your objects are stateless, because that is the prerequisite for reusing an object. Similarly you should ensure the members of the object and the object itself have no issue with being used from a different thread than the one which created it. (COM STA objects / window handles etc.)
If you use windows and COM, one way to use system provided pooling would be to write Free Threaded objects and enable object pooling, which will make the COM+ run time (earlier known as MTS) do this for you. If you use some other platform like Java perhaps you could use application servers that define interfaces that your objects should implement and the COM+ server could do the pooling for you.
Or you could roll your own code. But you should try to find out if there is a pattern for this and, if so, use that instead of what follows below.
If you need to roll your own code, create a dynamically growable collection which tracks the objects already created. Use a vector, preferably, for the collection, since you would only be adding to the collection and it would be easy to traverse it searching for a free object (assuming you do not delete objects in the pool). Change the collection type according to your delete policies (vector of pointers/references to objects if you are using C++ so that you can delete and recreate an object at the same location).
Each object should be tracked using a flag which can be read in a volatile manner and changed using an interlock function to mark it as being used/ not used.
If all objects are used, you need to create a new object and add it to the collection. Before adding, you can acquire a lock (critical section), mark the new object as being used and exit the lock.
Measure and proceed - probably if you implemented the above collection as a class you could easily create different collections for different object types so as to reduce lock contention from threads that do different work.
Finally you could implement an overloaded class factory interface that can create all kinds of pooled objects and knows which collection holds which class
You could then optimize on this design from there.
Hope that helps.
A:
To minimize construct/destruct latency, you need fully constructed objects at hand, so you will eliminate the new/ctor/dtor/delete time. These "free" objects can be kept in a list so you just pop/push the element at the end.
You may lock the object pools (one for each type) one by one. It is a bit more efficient than a system-wide lock, but does not have the overhead of a by-object locking.
A:
If you haven't looked at tcmalloc, you might want to take a look. Basing your implementation off of its concepts might be a good start. Key points:
Determine a set of size classes. (Each allocation will be fulfilled by using an entry from an equal or greater sized allocation.)
Use one size-class per page. (All instances in a page are the same size.)
Use per-thread freelists to avoid atomic operations on every alloc/dealloc
When a per-thread freelist is too large, move some of the instances back to the central freelist. Try to move back allocations from the same page.
When a per-thread freelist is empty, take some from the central freelist. Try to take contiguous entries.
Important: You probably know this, but make sure your design will minimize false sharing.
Additional things you can do that tcmalloc can't:
Try to enable locality of reference by using finer-grained allocation pools. For example, if a few thousand objects will be accessed together, then it is best if they are close together in memory. (To minimize cache missed and TLB faults.) If you allocate these instances from their own threadcache, then they should have fairly good locality.
If you know in advance which instances will be long-lived and which will not, then allocate them from separate thread caches. If you do not know, then periodically copy the old instances using a threadcache for allocation and update old references to the new instances.
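To make the per-thread freelist idea concrete, here is a minimal C++ sketch (not tcmalloc itself). It uses C++11 facilities, never actually frees pooled objects, uses arbitrary batch sizes, and assumes a single allocator instance per pooled type, since the thread-local cache is shared per type:
#include <cstddef>
#include <mutex>
#include <vector>

template <typename T>
class PooledAllocator {
public:
    T* acquire() {
        std::vector<T*>& cache = localCache();
        if (cache.empty())
            refillFromCentral(cache);      // one lock amortized over a batch
        if (cache.empty())
            return new T();                // central list empty: hard-allocate
        T* obj = cache.back();
        cache.pop_back();
        return obj;
    }

    void release(T* obj) {
        std::vector<T*>& cache = localCache();
        cache.push_back(obj);
        if (cache.size() > kHighWater)     // give a batch back to the centre
            flushToCentral(cache);
    }

private:
    static const std::size_t kBatch = 16;
    static const std::size_t kHighWater = 64;

    std::vector<T*>& localCache() {
        thread_local std::vector<T*> cache;   // lock-free fast path
        return cache;
    }

    void refillFromCentral(std::vector<T*>& cache) {
        std::lock_guard<std::mutex> lock(centralLock_);
        for (std::size_t i = 0; i < kBatch && !central_.empty(); ++i) {
            cache.push_back(central_.back());
            central_.pop_back();
        }
    }

    void flushToCentral(std::vector<T*>& cache) {
        std::lock_guard<std::mutex> lock(centralLock_);
        while (cache.size() > kHighWater / 2) {
            central_.push_back(cache.back());
            cache.pop_back();
        }
    }

    std::mutex centralLock_;
    std::vector<T*> central_;
};
The point of the batching is that the mutex is only touched once per batch of acquisitions or releases, so the common path stays lock-free.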
A:
If you have some guess of the preferred size of the pool, you can create a fixed-size pool using a stack structure backed by an array (the fastest possible solution). Then you need to implement four phases of object lifetime: hard initialization (and memory allocation), soft initialization, soft cleanup and hard cleanup (and memory release). Now in pseudo code:
Object* ObjectPool::AcquireObject()
{
    Object* object = 0;
    lock( _stackLock );
    if( _stackIndex )
        object = _stack[ --_stackIndex ];
    unlock( _stackLock );
    if( !object )
        object = HardInit();
    SoftInit( object );
    return object;
}

void ObjectPool::ReleaseObject(Object* object)
{
    SoftCleanup( object );
    lock( _stackLock );
    if( _stackIndex < _maxSize )
    {
        _stack[ _stackIndex++ ] = object;
        unlock( _stackLock );
    }
    else
    {
        unlock( _stackLock );
        HardCleanup( object );
    }
}
The HardInit/HardCleanup methods perform full object initialization and destruction, and they are executed only if the pool is empty or if the freed object cannot fit in the pool because it is full. SoftInit performs soft initialization of the object; it initializes only those aspects of the object that can have changed since it was released. The SoftCleanup method frees resources used by the object which should be released as fast as possible, or those resources which can become invalid during the time their owner resides in the pool. As you can see, locking is minimal, only two lines of code (or only a few instructions).
These four methods can be implemented in separate (template) classes so you can implement fine tuned operations per object type or usage. Also you may consider using smart pointers to automatically return object to its pool when it is no longer needed.
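To illustrate the smart-pointer suggestion, here is a sketch using C++11's std::unique_ptr with a custom deleter (in older code a boost::shared_ptr deleter plays the same role); it assumes the ObjectPool/Object interface sketched above:
#include <memory>

// Deleter that returns the object to its pool instead of destroying it.
struct PoolReturner {
    ObjectPool* pool;
    void operator()(Object* object) const { pool->ReleaseObject(object); }
};

typedef std::unique_ptr<Object, PoolReturner> PooledObjectPtr;

PooledObjectPtr AcquirePooled(ObjectPool& pool) {
    return PooledObjectPtr(pool.AcquireObject(), PoolReturner{ &pool });
}

// Usage sketch:
//   {
//       PooledObjectPtr object = AcquirePooled(myPool);
//       ...use *object...
//   }   // leaving scope calls PoolReturner, which calls ReleaseObject()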
A:
Have you tried the hoard allocator? It provides better performance than the default allocator on many systems.
A:
Why do you have multiple threads destroying objects they did not create? It's a simple way to handle object lifetime, but the costs can vary widely depending on use.
Anyways, if you haven't started implementing this yet, at the very least you can put the create/destroy functionality behind an interface so that you can test/change/optimize this at a later date when you have more information about what your system actually does.
|
Minimal latency objects pooling technique in multithread application
|
In the application we have around 30 types of objects that are created repeatedly.
Some of them have long life (hours) some have short (milliseconds).
Objects could be created in one thread and destroyed in another.
Does anybody have any clue what could be a good pooling technique in the sense of minimal creation/destruction latency, low lock contention and reasonable memory utilization?
Append 1.
1.1. Object pool/memory allocations for one type usually is not related to another type (see 1.3 for an exception)
1.2. Memory allocation is performed for only one type (class) at time, usually for several objects at time.
1.3. If a type aggregates another type using a pointer (for some reason), these types are allocated together in one contiguous piece of memory.
Append 2.
2.1. Using a collection with access serialization per type is known to be worse than new/delete.
2.2. Application is used on different platforms/compilers and cannot use compiler/platform specific tricks.
Append 3.
It becomes obvious that the fastest (lowest latency) implementation should organize object pooling as a star-like network of factories, where the central factory is global to the other, thread-specific factories. Regular object provision/recycling is more effective to do in a thread-specific factory, while the central factory could be used for object balancing between threads.
3.1. What is the most effective way to organize communications between the central factory and thread specific factories?
|
[
"I assume you have profile and measured your code after doing all that creation and verified that create/destroy is actually causing an issue. Else this is what you should do first.\nIf you still want to do the object pooling, as a first step, you should ensure your objects are stateless coz, that would be the prerequisite for reusing an object. Similarly you should ensure the members of the object and the object itself has no issue with being used from a different threads other than the one which created it. (COM STA objects / window handles etc)\nIf you use windows and COM, one way to use system provided pooling would be to write Free Threaded objects and enable object pooling, which will make the COM+ run time (earlier known as MTS) do this for you. If you use some other platform like Java perhaps you could use application servers that define interfaces that your objects should implement and the COM+ server could do the pooling for you. \nor you could roll your own code. But you should try to find if there is pattern for this and if yes use that instead of what follows below\nIf you need to roll your own code, create a dynamically growable collection which tracks the objects already created. Use a vector preferrably for the collection since you would only be adding to the collection and it would be easy to traverse it searching for a free object. (assuming you do not delete objects in pool). Change the collection type according to your delete policies (vector of pointers/references to objects if you are using C++ so that delete and recreate an object at the same location)\nEach object should be tracked using a flag which can be read in a volatile manner and changed using an interlock function to mark it as being used/ not used. \nIf all objects are used, you need to create a new object and add it to the collection. Before adding, you can acquire a lock (critical section), mark the new object as being used and exit the lock. \nMeasure and proceed - probably if you implemented the above collection as a class you could easily create different collections for different object types so as to reduce lock contention from threads that do different work. \nFinally you could implement an overloaded class factory interface that can create all kinds of pooled objects and knows which collection holds which class\nYou could then optimize on this design from there. \nHope that helps. \n",
"To minimize construct/destruct latency, you need fully constructed objects at hand, so you will eliminate the new/ctor/dtor/delete time. These \"free\" objects can be kept in a list so you just pop/push the element at the end.\nYou may lock the object pools (one for each type) one by one. It is a bit more efficient than a system-wide lock, but does not have the overhead of a by-object locking.\n",
"If you haven't looked at tcmalloc, you might want to take a look. Basing your implementation off of its concepts might be a good start. Key points:\n\nDetermine a set of size classes. (Each allocation will be fulfilled by using an entry from an equal or greater sized allocation.)\nUse one size-class per page. (All instances in a page are the same size.)\nUse per-thread freelists to avoid atomic operations on every alloc/dealloc\nWhen a per-thread freelist is too large, move some of the instances back to the central freelist. Try to move back allocations from the same page.\nWhen a per-thread freelist is empty, take some from the central freelist. Try to take contiguous entries.\nImportant: You probably know this, but make sure your design will minimize false sharing.\n\nAdditional things you can do that tcmalloc can't:\n\nTry to enable locality of reference by using finer-grained allocation pools. For example, if a few thousand objects will be accessed together, then it is best if they are close together in memory. (To minimize cache missed and TLB faults.) If you allocate these instances from their own threadcache, then they should have fairly good locality.\nIf you know in advance which instances will be long-lived and which will not, then allocate them from separate thread caches. If you do not know, then periodically copy the old instances using a threadcache for allocation and update old references to the new instances.\n\n",
"If you have some guess of the preferred size of the pool you can create fixed size pool using stack structure using array (the fastest possible solution). Then you need to implement four phases of object life time hard initialization (and memory allocation), soft initialization, soft cleanup and hard cleanup (and memory release). Now in pseudo code:\nObject* ObjectPool::AcquireObject()\n{\n Object* object = 0;\n lock( _stackLock );\n if( _stackIndex )\n object = _stack[ --_stackIndex ];\n unlock( _stackLock );\n if( !object )\n object = HardInit();\n SoftInit( object );\n}\n\nvoid ObjectPool::ReleaseObject(Object* object)\n{\n SoftCleanup( object );\n lock( _stackLock );\n if( _stackIndex < _maxSize )\n {\n object = _stack[ _stackIndex++ ];\n unlock( _stackLock );\n }\n else\n {\n unlock( _stack );\n HardCleanup( object );\n }\n}\n\nHardInit/HardCleanup method performs full object initialization and destruction and they are executed only if the pool is empty or if the freed object cannot fit the pool because it is full. SoftIniti performs soft initialization of objects, it initializes only those aspect of objects that can be changed since it was released. SoftCleanup method free resources used by the object which should be freed as fast as possible or those resources which can become invalid during the time its owner resides in the pool. As you can see locking is minimal, only two lines of code (or only few instructions).\nThese four methods can be implemented in separate (template) classes so you can implement fine tuned operations per object type or usage. Also you may consider using smart pointers to automatically return object to its pool when it is no longer needed.\n",
"Have you tried the hoard allocator? It provides better performance than the default allocator on many systems.\n",
"Why do you have multiple threads destroying objects they did not create? It's a simple way to handle object lifetime, but the costs can vary widely depending on use.\nAnyways, if you haven't started implementing this yet, at the very least you can put the create/destroy functionality behind an interface so that you can test/change/optimize this at a later date when you have more information about what your system actually does.\n"
] |
[
4,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"c++",
"multithreading"
] |
stackoverflow_0000099907_c++_multithreading.txt
|
Q:
When do transactions become more of a burden than a benefit?
Transactional programming is, in this day and age, a staple in modern development. Concurrency and fault-tolerance are critical to an application's longevity and, rightly so, transactional logic has become easy to implement. As applications grow though, it seems that transactional code tends to become more and more burdensome on the scalability of the application, and when you bridge into distributed transactions and mirrored data sets the issues start to become very complicated. I'm curious what seems to be the point, in data size or application complexity, that transactions frequently start becoming the source of issues (causing timeouts, deadlocks, performance issues in mission critical code, etc) which are more bothersome to fix, troubleshoot or work around than designing a data model that is more fault-tolerant in itself, or using other means to ensure data integrity. Also, what design patterns serve to minimize these impacts or make standard transactional logic obsolete or a non-issue?
--
EDIT: We've got some answers of reasonable quality so far, but I think I'll post an answer myself to bring up some of the things I've heard about to try to inspire some additional creativity; most of the responses I'm getting are pessimistic views of the problem.
Another important note is that not all dead-locks are a result of poorly coded procedures; sometimes there are mission critical operations that depend on similar resources in different orders, or complex joins in different queries that step on each other; this is an issue that can sometimes seem unavoidable, but I've been a part of reworking workflows to facilitate an execution order that is less likely to cause one.
A:
I think no design pattern can solve this issue in itself. Good database design, good stored procedure programming and especially learning how to keep your transactions short will ease most of the problems.
There is no 100% guaranteed method of not having problems though.
In basically every case I've seen in my career though, deadlocks and slowdowns were solved by fixing the stored procedures:
making sure all tables are accessed in the same order prevents deadlocks
fixing indexes and statistics makes everything faster (hence diminishes the chance of deadlock)
sometimes there was no real need of transactions, it just "looked" like it
sometimes transactions could be eliminated by turning multiple-statement stored procedures into single-statement ones.
A:
The use of shared resources is wrong in the long run. Because by reusing an existing environment you are creating more and more possibilities. Just review the busy beavers :) The way Erlang goes is the right way to produce fault-tolerant and easily verifiable systems.
But transactional memory is essential for many applications in widespread use. If you consult a bank with its millions of customers for example you can't just copy the data for the sake of efficiency.
I think monads are a cool concept to handle the difficult concept of changing state.
A:
If you are talking 'cloud computing' here, the answer would be to localize each transaction to the place where it happens in the cloud.
There is no need for the entire cloud to be consistent, as that would kill performance (as you noted). Simply, keep track of what is changed and where and handle multiple small transactions as changes propagate through the system.
The situation where user A updates record R and user B at the other end of the cloud does not see it (yet) is the same as the one where user A hasn't made the change yet in the current strict-transactional environment. This could lead to discrepancies in an update-heavy system, so systems should be architected to use as few updates as possible - moving things to aggregation of data and pulling out the aggregates only once the exact figure is critical (i.e. moving the requirement for consistency from write-time to critical-read-time).
Well, just my POV. It's hard to conceive a system that is application agnostic in this case.
A:
Try to make changes at the database level in the smallest possible number of instructions.
The general rule is to lock a resource for the least possible time. Using T-SQL, PL/SQL, Java on Oracle or any similar approach you can reduce the time that each transaction locks a shared resource. In fact, transactions in the database are optimized with row-level locks, multi-versioning, and other kinds of intelligent techniques. If you can do the transaction in the database you save the network latency, apart from other layers like ODBC/JDBC/OLE DB.
Sometimes the programmer tries to obtain the good things of a database (it is transactional, parallel, distributed, ...) but keeps a cache of the data. Then they need to manually add back some of the database features.
A:
One approach I've heard of is to make a versioned insert only model where no updates ever occur. During selects the version is used to select only the latest rows. One downside I know of with this approach is that the database can get rather large very quickly.
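For illustration, the versioned insert-only idea might look roughly like this in SQL; the table and column names are made up, and concurrency control for the version number is omitted:
-- Every change is a new row; nothing is ever updated in place.
CREATE TABLE CustomerVersion (
    CustomerID int           NOT NULL,
    Version    int           NOT NULL,
    Name       nvarchar(100) NOT NULL,
    PRIMARY KEY (CustomerID, Version)
);

-- An "update" inserts the next version of the row.
INSERT INTO CustomerVersion (CustomerID, Version, Name)
SELECT CustomerID, MAX(Version) + 1, 'New Name'
FROM CustomerVersion
WHERE CustomerID = 42
GROUP BY CustomerID;

-- Reads pick only the latest version of each customer.
SELECT cv.CustomerID, cv.Version, cv.Name
FROM CustomerVersion cv
JOIN (SELECT CustomerID, MAX(Version) AS Version
      FROM CustomerVersion
      GROUP BY CustomerID) latest
  ON cv.CustomerID = latest.CustomerID
 AND cv.Version = latest.Version;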
I also know that some solutions, such as FogBugz, don't use enforced foreign keys, which I believe would also help mitigate some of these problems because the SQL query plan can lock linked tables during selects or updates even if no data is changing in them, and if it's a highly contended table that gets locked it can increase the chance of DeadLock or Timeout.
I don't know much about these approaches though since I've never used them, so I assume there are pros and cons to each that I'm not aware of, as well as some other techniques I've never heard about.
I've also been looking into some of the material from Carlo Pescio's recent post, which I've not had enough time to do it justice unfortunately, but the material seems very interesting.
|
When do transactions become more of a burden than a benefit?
|
Transactional programming is, in this day and age, a staple in modern development. Concurrency and fault-tolerance are critical to an application's longevity and, rightly so, transactional logic has become easy to implement. As applications grow though, it seems that transactional code tends to become more and more burdensome on the scalability of the application, and when you bridge into distributed transactions and mirrored data sets the issues start to become very complicated. I'm curious what seems to be the point, in data size or application complexity, that transactions frequently start becoming the source of issues (causing timeouts, deadlocks, performance issues in mission critical code, etc) which are more bothersome to fix, troubleshoot or work around than designing a data model that is more fault-tolerant in itself, or using other means to ensure data integrity. Also, what design patterns serve to minimize these impacts or make standard transactional logic obsolete or a non-issue?
--
EDIT: We've got some answers of reasonable quality so far, but I think I'll post an answer myself to bring up some of the things I've heard about to try to inspire some additional creativity; most of the responses I'm getting are pessimistic views of the problem.
Another important note is that not all dead-locks are a result of poorly coded procedures; sometimes there are mission critical operations that depend on similar resources in different orders, or complex joins in different queries that step on each other; this is an issue that can sometimes seem unavoidable, but I've been a part of reworking workflows to facilitate an execution order that is less likely to cause one.
|
[
"I think no design pattern can solve this issue in itself. Good database design, good store procedure programming and especially learning how to keep your transactions short will ease most of the problems.\nThere is no 100% guaranteed method of not having problems though.\nIn basically every case I've seen in my career though, deadlocks and slowdowns were solved by fixing the stored procedures:\n\nmaking sure all tables are accessed in order prevents deadlocks\nfixing indexes and statistics makes everything faster (hence diminishes the chance of deadlock)\nsometimes there was no real need of transactions, it just \"looked\" like it\nsometimes transactions could be eliminated by making multiple statement stored procedures in single statement ones.\n\n",
"The use of shared resources is wrong in the long run. Because by reusing an existing environment you are creating more and more possibilities. Just review the busy beavers :) The way Erlang goes is the right way to produce fault-tolerant and easily verifiable systems. \nBut transactional memory is essential for many applications in widespread use. If you consult a bank with its millions of customers for example you can't just copy the data for the sake of efficiency.\nI think monads are a cool concept to handle the difficult concept of changing state.\n",
"If you are talking 'cloud computing' here, the answer would be to localize each transaction to the place where it happens in the cloud. \nThere is no need for the entire cloud to be consistent, as that would kill performance (as you noted). Simply, keep track of what is changed and where and handle multiple small transactions as changes propagate through the system. \nThe situation where user A updates record R and user B at the other end of cloud does not see it (yet) is the same as the one when user A didn't do the change yet in the current strict-transactional environment. This could lead to a discrepancy in an update-heavy system, so systems should be architectured to work with updates as less as possible - moving things to aggregation of data and pulling out the aggregates once the exact figure is critical (i.e. moving requirement for consistency from write-time to critical-read-time).\nWell, just my POV. It's hard to conceive a system that is application agnostic in this case.\n",
"Try to make changes at the database level in the least number of possible instructions.\nThe general rule is to lock a resource the lest possible time. Using T-SQL, PLSQL, Java on Oracle or any similar way you can reduce the time that each transaction locks a shared resource. I fact transactions in the database are optimized with row-level locks, multi-version, and other kinds of intelligent techniques. If you can make the transaction at the database you save the network latency. Apart from other layers like ODBC/JDBC/OLEBD. \nSometimes the programmer tries to obtain the good things of a database ( It is transactional, parallel, distributed, ) but keep a caché of the data. Then they need to add manually some of the database features. \n",
"One approach I've heard of is to make a versioned insert only model where no updates ever occur. During selects the version is used to select only the latest rows. One downside I know of with this approach is that the database can get rather large very quickly.\nI also know that some solutions, such as FogBugz, don't use enforced foreign keys, which I believe would also help mitigate some of these problems because the SQL query plan can lock linked tables during selects or updates even if no data is changing in them, and if it's a highly contended table that gets locked it can increase the chance of DeadLock or Timeout. \nI don't know much about these approaches though since I've never used them, so I assume there are pros and cons to each that I'm not aware of, as well as some other techniques I've never heard about.\nI've also been looking into some of the material from Carlo Pescio's recent post, which I've not had enough time to do it justice unfortunately, but the material seems very interesting.\n"
] |
[
2,
2,
0,
0,
0
] |
[] |
[] |
[
"design_patterns",
"performance",
"sql",
"transactions"
] |
stackoverflow_0000087796_design_patterns_performance_sql_transactions.txt
|
Q:
Reuse of StaticResource in Silverlight 2.0
I am currently testing with Silverlight 2.0 Beta 2, and my goal is to define a resource element once and then reuse it many times in my rendering. This simple example defines a rectangle (myRect) as a resource and then I attempt to reuse it twice -- which fails with the error:
Attribute {StaticResource myRect} value is out of range. [Line: 9 Position: 83]
BTW, this sample works fine in WPF.
<UserControl x:Class="ReuseResourceTest.Page"
xmlns="http://schemas.microsoft.com/client/2007"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Width="200" Height="200">
<Canvas x:Name="LayoutRoot" Background="Yellow">
<Canvas.Resources>
<RectangleGeometry x:Key="myRect" Rect="25,50,25,50" />
</Canvas.Resources>
<Path Stroke="Black" StrokeThickness="10" Data="{StaticResource myRect}" />
<Path Stroke="White" StrokeThickness="4" Data="{StaticResource myRect}" />
</Canvas>
</UserControl>
Any thoughts on what's up here.
Thanks,
-- Ed
A:
I have also encountered the same problem when trying to reuse components defined as static resources. The workaround I have found is not declaring the controls as resources, but defining styles setting all the properties you need, and instantiating a new control with that style every time you need.
EDIT: The out of range exception you are getting happens when you assign a control to a container that already is inside another container. It also happens in many other scenarios (such as applying a style to an object that already has one), but I believe this is your case.
|
Reuse of StaticResource in Silverlight 2.0
|
I am currently testing with Silverlight 2.0 Beta 2, and my goal is to define a resource element once and then reuse it many times in my rendering. This simple example defines a rectangle (myRect) as a resource and then I attempt to reuse it twice -- which fails with the error:
Attribute {StaticResource myRect} value is out of range. [Line: 9 Position: 83]
BTW, this sample works fine in WPF.
<UserControl x:Class="ReuseResourceTest.Page"
xmlns="http://schemas.microsoft.com/client/2007"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Width="200" Height="200">
<Canvas x:Name="LayoutRoot" Background="Yellow">
<Canvas.Resources>
<RectangleGeometry x:Key="myRect" Rect="25,50,25,50" />
</Canvas.Resources>
<Path Stroke="Black" StrokeThickness="10" Data="{StaticResource myRect}" />
<Path Stroke="White" StrokeThickness="4" Data="{StaticResource myRect}" />
</Canvas>
</UserControl>
Any thoughts on what's up here.
Thanks,
-- Ed
|
[
"I have also encountered the same problem when trying to reuse components defined as static resources. The workaround I have found is not declaring the controls as resources, but defining styles setting all the properties you need, and instantiating a new control with that style every time you need.\nEDIT: The out of range exception you are getting happens when you assign a control to a container that already is inside another container. It also happens in many other scenarios (such as applying a style to an object that already has one), but I believe this is your case.\n"
] |
[
2
] |
[] |
[] |
[
"silverlight",
"static_resource"
] |
stackoverflow_0000102029_silverlight_static_resource.txt
|
Q:
How can I turn a single object into something that is Enumerable in ruby
I have a method that can return either a single object or a collection of objects. I want to be able to run object.collect on the result of that method whether or not it is a single object or a collection already. How can I do this?
profiles = ProfileResource.search(params)
output = profiles.collect do | profile |
profile.to_hash
end
If profiles is a single object, I get a NoMethodError exception when I try to execute collect on that object.
A:
Careful with the flatten approach, if search() returned nested arrays then unexpected behaviour might result.
profiles = ProfileResource.search(params)
profiles = [profiles] if !profiles.respond_to?(:collect)
output = profiles.collect do |profile|
profile.to_hash
end
A:
Here's a one Liner:
[*ProfileResource.search(params)].collect { |profile| profile.to_hash }
The trick is the splat (*) that turns both individual elements and enumerables into arguments lists (in this case to the new array operator)
A:
profiles = [ProfileResource.search(params)].flatten
output = profiles.collect do |profile|
profile.to_hash
end
A:
In the search method of the ProfileResource class, always return a collection of objects (usually an Array), even if it contains only one object.
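As a small sketch of that approach (find_matching_profiles is a made-up placeholder for whatever the real lookup does), Kernel#Array can do the normalization inside search itself:
class ProfileResource
  def self.search(params)
    result = find_matching_profiles(params) # may return one object, many, or nil
    Array(result)                           # nil => [], single object => [obj], Array => unchanged
  end
end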
A:
If the collection is an Array you could use this technique
profiles = [*ProfileResource.search(params)]
output = profiles.collect do | profile |
profile.to_hash
end
That would guaranteed your profiles is always an array.
A:
profiles = ProfileResource.search(params)
output = Array(profiles).collect do |profile|
profile.to_hash
end
A:
You could first check to see if the object responds to the "collect" method by using "profiles.respond_to?".
From Programming Ruby
obj.respond_to?( aSymbol, includePriv=false ) -> true or false
Returns true if obj responds to the
given method. Private methods are
included in the search only if the
optional second parameter evaluates to
true.
A:
You can use the Kernel#Array method as well.
profiles = Array(ProfileResource.search(params))
output = profiles.collect do | profile |
profile.to_hash
end
A:
Another way is to realise that Enumerable requires that you supply an each method.
So. you COULD mix in Enumerable to your class and give it a dummy each that works....
class YourClass
include Enumerable
... really important and earth shattering stuff ...
def each
yield(self) if block_given?
end
end
This way, if you get back a single item on its own from the search, the enumerable methods will still work as expected.
This way has the advantage that all the support for it is inside your class, not outside where it has to be duplicated many many times.
Of course, the better way is to change the implementation of search such that it returns an array irrespective of how many items is being returned.
|
How can I turn a single object into something that is Enumerable in ruby
|
I have a method that can return either a single object or a collection of objects. I want to be able to run object.collect on the result of that method whether or not it is a single object or a collection already. How can I do this?
profiles = ProfileResource.search(params)
output = profiles.collect do | profile |
profile.to_hash
end
If profiles is a single object, I get a NoMethodError exception when I try to execute collect on that object.
|
[
"Careful with the flatten approach, if search() returned nested arrays then unexpected behaviour might result.\nprofiles = ProfileResource.search(params)\nprofiles = [profiles] if !profiles.respond_to?(:collect)\noutput = profiles.collect do |profile|\n profile.to_hash\nend\n\n",
"Here's a one Liner:\n[*ProfileResource.search(params)].collect { |profile| profile.to_hash }\n\nThe trick is the splat (*) that turns both individual elements and enumerables into arguments lists (in this case to the new array operator)\n",
"profiles = [ProfileResource.search(params)].flatten\noutput = profiles.collect do |profile|\n profile.to_hash\nend\n\n",
"In the search method of the ProfileResource class, always return a collection of objects (usually an Array), even if it contains only one object.\n",
"If the collection is an Array you could use this technique\nprofiles = [*ProfileResource.search(params)]\noutput = profiles.collect do | profile |\n profile.to_hash\nend\n\nThat would guaranteed your profiles is always an array.\n",
"profiles = ProfileResource.search(params)\noutput = Array(profiles).collect do |profile|\n profile.to_hash\nend\n\n",
"You could first check to see if the object responds to the \"collect\" method by using \"pofiles.respond_to?\". \nFrom Programming Ruby\n\nobj.respond_to?(\n aSymbol, includePriv=false ) -> true\n or false \nReturns true if obj responds to the\n given method. Private methods are\n included in the search only if the\n optional second parameter evaluates to\n true.\n\n",
"You can use the Kernel#Array method as well.\nprofiles = Array(ProfileResource.search(params))\noutput = profiles.collect do | profile |\n profile.to_hash\nend\n\n",
"Another way is to realise that Enumerable requires that you supply an each method.\nSo. you COULD mix in Enumerable to your class and give it a dummy each that works....\n\nclass YourClass\n include Enumerable\n\n ... really important and earth shattering stuff ...\n\n def each\n yield(self) if block_given?\n end\nend\n\nThis way, if you get back a single item on its own from the search, the enumerable methods will still work as expected.\nThis way has the advantage that all the support for it is inside your class, not outside where it has to be duplicated many many times.\nOf course, the better way is to change the implementation of search such that it returns an array irrespective of how many items is being returned.\n"
] |
[
6,
6,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"ruby"
] |
stackoverflow_0000079352_ruby.txt
|
Q:
Attaching an ID/Label to LINQ to SQL generated code?
I'm interested in tracing database calls made by LINQ to SQL back to the .NET code that generated the call. For instance, a DBA might have a concern that a particular cached execution plan is doing poorly. If for example a DBA were to tell a developer to address the following code...
exec sp_executesql N'SELECT [t0].[CustomerID]
FROM [dbo].[Customers] AS [t0]
WHERE [t0].[ContactName] LIKE @p0
ORDER BY [t0].[CompanyName]',
N'@p0 nvarchar(2)',@p0=N'c%'
...it's not immediately obvious which LINQ statement produced the call. Sure you could search through the "Customers" class in the auto-generated data context, but that'd just be a start. With a large application this could quickly become unmanageable.
Is there a way to attach an ID or label to SQL code generated and executed by LINQ to SQL? Thinking out loud, here's an extension function called "TagWith" that illustrates conceptually what I'm interested in doing.
var customers = from c in context.Customers
where c.CompanyName.StartsWith("c")
orderby c.CompanyName
select c.CustomerID;
foreach (var CustomerID in customers.TagWith("CustomerList4"))
{
Console.WriteLine(CustomerID);
}
If the "CustomerList4" ID/label ends up in the automatically-generated SQL, I'd be set. Thanks.
A:
Have you looked at capturing the T-SQL with the DataContext.Log property? If you were able to capure it into an object that also had your tag property you might be able to catalog the SQL your application executes.
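A rough sketch of that idea (the tag, the file path and the NorthwindDataContext type are illustrative assumptions, not part of LINQ to SQL): route the generated T-SQL to a TextWriter via DataContext.Log and write your own tag line next to it, so the DBA can correlate plans with code.
public static void RunTaggedQuery(NorthwindDataContext context)
{
    using (var log = new System.IO.StreamWriter(@"C:\temp\linq-to-sql.log", true))
    {
        context.Log = log;                     // LINQ to SQL writes its T-SQL here
        log.WriteLine("-- TAG: CustomerList4");

        var customers = from c in context.Customers
                        where c.CompanyName.StartsWith("c")
                        orderby c.CompanyName
                        select c.CustomerID;

        foreach (var id in customers)
            System.Console.WriteLine(id);      // enumeration triggers the query and the log output
    }
}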
A:
There is no public way to modify the underlying SQL that LINQ to SQL generates to implement such a tagging facility. You could implement the Log property in such a way it writes out a text log file with some call stack information to show which methods generated which SQL statements for reference.
|
Attaching an ID/Label to LINQ to SQL generated code?
|
I'm interested in tracing database calls made by LINQ to SQL back to the .NET code that generated the call. For instance, a DBA might have a concern that a particular cached execution plan is doing poorly. If for example a DBA were to tell a developer to address the following code...
exec sp_executesql N'SELECT [t0].[CustomerID]
FROM [dbo].[Customers] AS [t0]
WHERE [t0].[ContactName] LIKE @p0
ORDER BY [t0].[CompanyName]',
N'@p0 nvarchar(2)',@p0=N'c%'
...it's not immediately obvious which LINQ statement produced the call. Sure you could search through the "Customers" class in the auto-generated data context, but that'd just be a start. With a large application this could quickly become unmanageable.
Is there a way to attach an ID or label to SQL code generated and executed by LINQ to SQL? Thinking out loud, here's an extension function called "TagWith" that illustrates conceptually what I'm interested in doing.
var customers = from c in context.Customers
where c.CompanyName.StartsWith("c")
orderby c.CompanyName
select c.CustomerID;
foreach (var CustomerID in customers.TagWith("CustomerList4"))
{
Console.WriteLine(CustomerID);
}
If the "CustomerList4" ID/label ends up in the automatically-generated SQL, I'd be set. Thanks.
|
[
"Have you looked at capturing the T-SQL with the DataContext.Log property? If you were able to capure it into an object that also had your tag property you might be able to catalog the SQL your application executes.\n",
"There is no public way to modify the underlying SQL that LINQ to SQL generates to implement such a tagging facility. You could implement the Log property in such a way it writes out a text log file with some call stack information to show which methods generated which SQL statements for reference.\n"
] |
[
1,
1
] |
[] |
[] |
[
"linq_to_sql",
"sql_server"
] |
stackoverflow_0000099827_linq_to_sql_sql_server.txt
|
Q:
Managed C++ Method naming
I'm using managed c++ to implement a method that returns a string. I declare the method in my header file using the following signature:
String^ GetWindowText()
However, when I'm using this method from C#, the signature is:
string GetWindowTextW();
How do I get rid of the extra "W" at the end of the method's name?
A:
To get around the preprocessor hackery of the Windows header files, declare it like this:
#undef GetWindowText
String^ GetWindowText()
Note that, if you actually use the Win32 or MFC GetWindowText() routines in your code, you'll need to either redefine the macro or call them as GetWindowTextW().
A:
GetWindowText is a win32 api call that is aliased via a macro to GetWindowTextW in your C++ project.
Try adding #undef GetWindowText to your C++ project.
A:
Not Managed c++ but C++/CLI for the .net platform. A set of Microsoft extensions to C++ for use with their .Net system.
Bjarne Stroustrup's FAQ http://www.research.att.com/~bs/bs_faq.html#CppCLI
C++/CLI is not C++, don't tag it as such. Txs
|
Managed C++ Method naming
|
I'm using managed c++ to implement a method that returns a string. I declare the method in my header file using the following signature:
String^ GetWindowText()
However, when I'm using this method from C#, the signature is:
string GetWindowTextW();
How do I get rid of the extra "W" at the end of the method's name?
|
[
"To get around the preprocessor hackery of the Windows header files, declare it like this:\n#undef GetWindowText\nString^ GetWindowText()\n\nNote that, if you actually use the Win32 or MFC GetWindowText() routines in your code, you'll need to either redefine the macro or call them as GetWindowTextW().\n",
"GetWindowText is a win32 api call that is aliased via a macro to GetWindowTextW in your C++ project.\nTry adding #undef GetWindowText to you C++ project.\n",
"Not Managed c++ but C++/CLI for the .net platform. A set of Microsoft extensions to C++ for use with their .Net system.\nBjarne Stroustrup's FAQ http://www.research.att.com/~bs/bs_faq.html#CppCLI\nC++/CLI is not C++, don't tag it as such. Txs\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
".net",
"c++_cli",
"windows"
] |
stackoverflow_0000103382_.net_c++_cli_windows.txt
|
Q:
How do I figure out who has a SQL Server 2005 database in Single User Mode?
I have a database in single user mode and I am trying to drop it so I can re-run the creation scripts on it, but I'm being locked out from it.
How do I figure out who has the lock on it?
How do I disable that lock?
A:
run sp_who, find the spid with the database name you require, kill the spid.
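For example (a sketch; 'MyDb' and spid 53 are placeholders for what you actually find in the sp_who output):
EXEC sp_who;        -- look for the row whose dbname column is MyDb
KILL 53;            -- the spid found above
-- then take the database out of single-user mode before re-running your scripts
ALTER DATABASE MyDb SET MULTI_USER WITH ROLLBACK IMMEDIATE;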
A:
From SQL Server Management Studio:
open the object explorer
expand the database server
expand "Management"
double-click on "Activity Monitor"
locate the process using the desired database
right-click on process
click "Kill Process"
|
How do I figure out who has a SQL Server 2005 database in Single User Mode?
|
I have a database in single user mode and I am trying to drop it so I can re-run the creation scripts on it, but I'm being locked out from it.
How do I figure out who has the lock on it?
How do I disable that lock?
|
[
"run sp_who, find the spid with the database name you require, kill the spid.\n",
"From SQL Server Management Studio:\n\nopen the object explorer\nexpand the database server\nexpand \"Management\"\ndouble-click on \"Activity Monitor\"\nlocate the process using the desired database\nright-click on process\nclick \"Kill Process\"\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"sql_server_2005"
] |
stackoverflow_0000103121_sql_server_2005.txt
|
Q:
What's the right way to dynamically choose menu items for a context menu in WinForms?
I'm trying to make a context menu for a control that is "linked" to a main menu item. There are two fixed menu items that are always there and an arbitrary number of additional menu items that might need to be on the menu.
I've tried solving the problem by keeping a class-level reference to the fixed menu items and a list of the dynamic menu items. I am handling both menus' Opening events by clearing the current list of items, then adding the appropriate items to the menu. This works fine for the main menu, but the context menu behaves oddly.
The major problem seems to be that by the time Opening is raised, the menu has already decided which items it is going to display. This form demonstrates:
using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;
namespace WindowsFormsApplication1
{
public class DemoForm : Form
{
private List<ToolStripMenuItem> _items;
public DemoForm()
{
var contextMenu = new ContextMenuStrip();
contextMenu.Opening += contextMenu_Opening;
_items = new List<ToolStripMenuItem>();
_items.Add(new ToolStripMenuItem("item 1"));
_items.Add(new ToolStripMenuItem("item 2"));
this.ContextMenuStrip = contextMenu;
}
void contextMenu_Opening(object sender, CancelEventArgs e)
{
var menu = sender as ContextMenuStrip;
if (menu != null)
{
foreach (var item in _items)
{
menu.Items.Add(item);
}
}
}
}
}
When you right-click the form the first time, nothing is displayed. The second time, the menu is displayed as expected. Is there another event that is raised where I could update the items? Is it a bad practice to dynamically choose menu items?
(Note: This is for an example I started making for someone who wanted such functionality and I was curious about how difficult it is, so I can't provide details about why this might be done. This person wants to "link" a main menu item to the context menu, and since menu items can only be the child of a single menu this seemed a reasonable way to do so. Any alternative suggestions for an approach are welcome.)
A:
You could work out the items during the MouseDown event of the control. Check that it is the right mouse button too.
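A sketch of that approach, reusing the fields from the question's DemoForm (hook it up in the constructor with this.MouseDown += DemoForm_MouseDown;):
void DemoForm_MouseDown(object sender, MouseEventArgs e)
{
    if (e.Button != MouseButtons.Right)
        return;
    // Rebuild the menu before the ContextMenuStrip decides what to show.
    this.ContextMenuStrip.Items.Clear();
    foreach (var item in _items)
    {
        this.ContextMenuStrip.Items.Add(item);
    }
}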
|
What's the right way to dynamically choose menu items for a context menu in WinForms?
|
I'm trying to make a context menu for a control that is "linked" to a main menu item. There are two fixed menu items that are always there and an arbitrary number of additional menu items that might need to be on the menu.
I've tried solving the problem by keeping a class-level reference to the fixed menu items and a list of the dynamic menu items. I am handling both menus' Opening events by clearing the current list of items, then adding the appropriate items to the menu. This works fine for the main menu, but the context menu behaves oddly.
The major problem seems to be that by the time Opening is raised, the menu has already decided which items it is going to display. This form demonstrates:
using System.Collections.Generic;
using System.ComponentModel;
using System.Windows.Forms;
namespace WindowsFormsApplication1
{
public class DemoForm : Form
{
private List<ToolStripMenuItem> _items;
public DemoForm()
{
var contextMenu = new ContextMenuStrip();
contextMenu.Opening += contextMenu_Opening;
_items = new List<ToolStripMenuItem>();
_items.Add(new ToolStripMenuItem("item 1"));
_items.Add(new ToolStripMenuItem("item 2"));
this.ContextMenuStrip = contextMenu;
}
void contextMenu_Opening(object sender, CancelEventArgs e)
{
var menu = sender as ContextMenuStrip;
if (menu != null)
{
foreach (var item in _items)
{
menu.Items.Add(item);
}
}
}
}
}
When you right-click the form the first time, nothing is displayed. The second time, the menu is displayed as expected. Is there another event that is raised where I could update the items? Is it a bad practice to dynamically choose menu items?
(Note: This is for an example I started making for someone who wanted such functionality and I was curious about how difficult it is, so I can't provide details about why this might be done. This person wants to "link" a main menu item to the context menu, and since menu items can only be the child of a single menu this seemed a reasonable way to do so. Any alternative suggestions for an approach are welcome.)
|
[
"You could work out the items during the MouseDown event of the control. Check that it is the right mouse button too.\n"
] |
[
2
] |
[] |
[] |
[
".net",
"menu",
"winforms"
] |
stackoverflow_0000103514_.net_menu_winforms.txt
|
Q:
"get() const" vs. "getAsConst() const"
Someone told me about a C++ style difference in their team. I have my own viewpoint on the subject, but I would be interested by pros and cons coming from everyone.
So, in case you have a class property you want to expose via two getters, one read/write, and the other, readonly (i.e. there is no set method). There are at least two ways of doing it:
class T ;
class MethodA
{
public :
const T & get() const ;
T & get() ;
// etc.
} ;
class MethodB
{
public :
const T & getAsConst() const ;
T & get() ;
// etc.
} ;
What would be the pros and the cons of each method?
I am interested more by C++ technical/semantic reasons, but style reasons are welcome, too.
Note that MethodB has one major technical drawback (hint: in generic code).
A:
C++ should be perfectly capable of coping with method A in almost all situations. I always use it, and I have never had a problem.
Method B is, in my opinion, a violation of OnceAndOnlyOnce. And now you need to go figure out whether you're dealing with a const reference to write code that compiles the first time.
I guess this is a stylistic thing - technically they both work, but MethodA makes the compiler work a bit harder. To me, that's a good thing.
A:
Well, for one thing, getAsConst must be called when the 'this' pointer is const -- not when you want to receive a const object. So, alongside any other issues, it's subtly misnamed. (You can still call it when 'this' is non-const, but that's neither here nor there.)
Ignoring that, getAsConst earns you nothing, and puts an undue burden on the developer using the interface. Instead of just calling "get" and knowing he's getting what he needs, now he has to ascertain whether or not he's currently using a const variable, and if the new object he's grabbing needs to be const. And later, if both objects become non-const due to some refactoring, he's got to switch out his call.
A:
Personally, I prefer the first method, because it makes for a more consistent interface. Also, to me getAsConst() sounds just about as silly as getAsInt().
On a different note, you really should think twice before returning a non-const reference or a non-const pointer to a data member of your class. This is an invitation for people to exploit the inner workings of your class, which ideally should be hidden. In other words it breaks encapsulation. I would use a get() const and a set(), and return a non-const reference only if there is no other way, or when it really makes sense, such as to give read/write access to an element of an array or a matrix.
A:
Given the style precedent set by the standard library (ie begin() and begin() const to name just one example), it should be obvious that method A is the correct choice. I question the person's sanity that chooses method B.
A:
So, the first style is generally preferable.
We do use a variation of the second style quite a bit in the codebase I'm currently working on though, because we want a big distinction between const and non-const usage.
In my specific example, we have getTessellation and getMutableTessellation. It's implemented with a copy-on-write pointer. For performance reasons we want the const version to be use wherever possible, so we make the name shorter, and we make it a different name so people don't accidentally cause a copy when they weren't going to write anyway.
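A minimal sketch of that pattern (the names are illustrative, not the actual codebase, and it is not thread-safe as written): the const getter never copies, while the mutable getter detaches shared data first.
#include <memory>

class Tessellation { /* expensive-to-copy data */ };

class Shape {
public :
    // Cheap: never triggers a copy, so prefer it wherever read access suffices.
    const Tessellation & getTessellation() const { return *data_; }

    // May copy: detaches from any other Shape sharing the same data.
    Tessellation & getMutableTessellation() {
        if (data_.use_count() > 1)
            data_ = std::make_shared<Tessellation>(*data_);   // copy-on-write
        return *data_;
    }

private :
    std::shared_ptr<Tessellation> data_{ std::make_shared<Tessellation>() };
};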
A:
While it appears your question only addresses one method, I'd be happy to give my input on style. Personally, for style reasons, I prefer the former. Most IDEs will pop up the type signature of functions for you.
A:
I would prefer the first. It looks better in code when two things that essentially do the same thing look the same. Also, it is rare for you to have a non-const object but want to call the const method, so that isn't much of a consern (and in the worst case, you'd only need a const_cast<>).
A:
The first allows changes to the variable type (whether it is const or not) without further modification of the code. Of course, this means that there is no notification to the developer that this might have changed from the intended path. So it's really whether you value being able to quickly refactor, or having the extra safety net.
A:
The second one is something related to Hungarian notation which I personally DON'T like so I will stick with the first method.
I don't like Hungarian notation because it adds redundancy which I usually detest in programming. It is just my opinion.
A:
Since you hide the names of the classes, this food for thought on style may or may not apply:
Does it make sense to tell these two objects, MethodA and MethodB, to "get" or "getAsConst"? Would you send "get" or "getAsConst" as messages to either object?
The way I see it, as the sender of the message / invoker of the method, you are the one getting the value; so in response to this "get" message, you are sending some message to MethodA / MethodB, the result of which is the value you need to get.
Example: If the caller of MethodA is, say, a service in SOA, and MethodA is a repository, then inside the service's get_A(), call MethodA.find_A_by_criteria(...).
A:
The major technical drawback of MethodB I saw is that when applying generic code to it, we must double the code to handle both the const and the non-const version. For example:
Let's say T is an order-able object (i.e., we can compare two objects of type T with operator <), and let's say we want to find the max between two MethodA (resp. two MethodB).
For MethodA, all we need to code is:
template <typename T>
T & getMax(T & p_oLeft, T & p_oRight)
{
if(p_oLeft.get() > p_oRight.get())
{
return p_oLeft ;
}
else
{
return p_oRight ;
}
}
This code will work both with const objects and non-const objects of type T:
// Ok
const MethodA oA_C0(), oA_C1() ;
const MethodA & oA_CResult = getMax(oA_C0, oA_C1) ;
// Ok again
MethodA oA_0(), oA_1() ;
MethodA & oA_Result = getMax(oA_0, oA_1) ;
The problem comes when we want to apply this easy code to something following the MethodB convention:
// NOT Ok
const MethodB oB_C0(), oB_C1() ;
const MethodB & oB_CResult = getMax(oB_C0, oB_C1) ; // Won't compile
// Ok
MethodB oB_0(), oB_1() ;
MethodB & oB_Result = getMax(oB_0, oB_1) ;
For the MethodB to work on both const and non-const version, we must both use the already defined getMax, but add to it the following version of getMax:
template <typename T>
const T & getMax(const T & p_oLeft, const T & p_oRight)
{
if(p_oLeft.getAsConst() > p_oRight.getAsConst())
{
return p_oLeft ;
}
else
{
return p_oRight ;
}
}
Conclusion, by not trusting the compiler on const-ness use, we burden ourselves with the creation of two generic functions when one should have been enough.
Of course, with enough paranoia, the second template function should have been called getMaxAsConst... And thus, the problem would propagate itself through all the code...
:-p
|
"get() const" vs. "getAsConst() const"
|
Someone told me about a C++ style difference in their team. I have my own viewpoint on the subject, but I would be interested by pros and cons coming from everyone.
So, in case you have a class property you want to expose via two getters, one read/write, and the other, readonly (i.e. there is no set method). There are at least two ways of doing it:
class T ;
class MethodA
{
public :
const T & get() const ;
T & get() ;
// etc.
} ;
class MethodB
{
public :
const T & getAsConst() const ;
T & get() ;
// etc.
} ;
What would be the pros and the cons of each method?
I am interested more by C++ technical/semantic reasons, but style reasons are welcome, too.
Note that MethodB has one major technical drawback (hint: in generic code).
|
[
"C++ should be perfectly capable to cope with method A in almost all situations. I always use it, and I never had a problem.\nMethod B is, in my opinion, a case of violation of OnceAndOnlyOnce. And, now you need to go figure out whether you're dealing with const reference to write the code that compiles first time.\nI guess this is a stylistic thing - technically they both works, but MethodA makes the compiler to work a bit harder. To me, it's a good thing.\n",
"Well, for one thing, getAsConst must be called when the 'this' pointer is const -- not when you want to receive a const object. So, alongside any other issues, it's subtly misnamed. (You can still call it when 'this' is non-const, but that's neither here nor there.)\nIgnoring that, getAsConst earns you nothing, and puts an undue burden on the developer using the interface. Instead of just calling \"get\" and knowing he's getting what he needs, now he has to ascertain whether or not he's currently using a const variable, and if the new object he's grabbing needs to be const. And later, if both objects become non-const due to some refactoring, he's got to switch out his call.\n",
"Personally, I prefer the first method, because it makes for a more consistent interface. Also, to me getAsConst() sounds just about as silly as getAsInt(). \nOn a different note, you really should think twice before returning a non-const reference or a non-const pointer to a data member of your class. This is an invitation for people to exploit the inner workings of your class, which ideally should be hidden. In other words it breaks encapsulation. I would use a get() const and a set(), and return a non-const reference only if there is no other way, or when it really makes sense, such as to give read/write access to an element of an array or a matrix.\n",
"Given the style precedent set by the standard library (ie begin() and begin() const to name just one example), it should be obvious that method A is the correct choice. I question the person's sanity that chooses method B.\n",
"So, the first style is generally preferable.\nWe do use a variation of the second style quite a bit in the codebase I'm currently working on though, because we want a big distinction between const and non-const usage.\nIn my specific example, we have getTessellation and getMutableTessellation. It's implemented with a copy-on-write pointer. For performance reasons we want the const version to be use wherever possible, so we make the name shorter, and we make it a different name so people don't accidentally cause a copy when they weren't going to write anyway.\n",
"While it appears your question only addresses one method, I'd be happy to give my input on style. Personally, for style reasons, I prefer the former. Most IDEs will pop up the type signature of functions for you.\n",
"I would prefer the first. It looks better in code when two things that essentially do the same thing look the same. Also, it is rare for you to have a non-const object but want to call the const method, so that isn't much of a consern (and in the worst case, you'd only need a const_cast<>).\n",
"The first allows changes to the variable type (whether it is const or not) without further modification of the code. Of course, this means that there is no notification to the developer that this might have changed from the intended path. So it's really whether you value being able to quickly refactor, or having the extra safety net.\n",
"The second one is something related to Hungarian notation which I personally DON'T like so I will stick with the first method.\nI don't like Hungarian notation because it adds redundancy which I usually detest in programming. It is just my opinion.\n",
"Since you hide the names of the classes, this food for thought on style may or may not apply:\nDoes it make sense to tell these two objects, MethodA and MethodB, to \"get\" or \"getAsConst\"? Would you send \"get\" or \"getAsConst\" as messages to either object? \nThe way I see it, as the sender of the message / invoker of the method, you are the one getting the value; so in response to this \"get\" message, you are sending some message to MethodA / MethodB, the result of which is the value you need to get.\nExample: If the caller of MethodA is, say, a service in SOA, and MethodA is a repository, then inside the service's get_A(), call MethodA.find_A_by_criteria(...).\n",
"The major technological drawback of MethodB I saw is that when applying generic code to it, we must double the code to handle both the const and the non-const version. For example:\nLet's say T is an order-able object (ie, we can compare to objects of type T with operator <), and let's say we want to find the max between two MethodA (resp. two MethodB).\nFor MethodA, all we need to code is:\ntemplate <typename T>\nT & getMax(T & p_oLeft, T & p_oRight)\n{\n if(p_oLeft.get() > p_oRight.get())\n {\n return p_oLeft ;\n }\n else\n {\n return p_oRight ;\n }\n}\n\nThis code will work both with const objects and non-const objects of type T:\n// Ok\nconst MethodA oA_C0(), oA_C1() ;\nconst MethodA & oA_CResult = getMax(oA_C0, oA_C1) ;\n\n// Ok again\nMethodA oA_0(), oA_1() ;\nMethodA & oA_Result = getMax(oA_0, oA_1) ;\n\nThe problem comes when we want to apply this easy code to something following the MethodB convention:\n// NOT Ok\nconst MethodB oB_C0(), oB_C1() ;\nconst MethodB & oB_CResult = getMax(oB_C0, oB_C1) ; // Won't compile\n\n// Ok\nMethodA oB_0(), oB_1() ;\nMethodA & oB_Result = getMax(oB_0, oB_1) ;\n\nFor the MethodB to work on both const and non-const version, we must both use the already defined getMax, but add to it the following version of getMax:\ntemplate <typename T>\nconst T & getMax(const T & p_oLeft, const T & p_oRight)\n{\n if(p_oLeft.getAsConst() > p_oRight.getAsConst())\n {\n return p_oLeft ;\n }\n else\n {\n return p_oRight ;\n }\n}\n\nConclusion, by not trusting the compiler on const-ness use, we burden ourselves with the creation of two generic functions when one should have been enough.\nOf course, with enough paranoia, the secondth template function should have been called getMaxAsConst... And thus, the problem would propagate itself through all the code...\n:-p\n"
] |
[
10,
8,
4,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"coding_style",
"constants"
] |
stackoverflow_0000097081_c++_coding_style_constants.txt
|
Q:
Where can I find Microsoft assemblies that are not already in Visual Studio?
I figured someone can answer the question generally but if anyone wants to get specific I am trying to use:
using System.Web.Security.SingleSignOn;
using System.Web.Security.SingleSignOn.Authorization;
I've googled my brains out and this is the closest answer I found:
"We discussed this offline, but it looks like the ADFS assembly is GACed, but
not installed on the file system or registered with VS.NET so that it shows
up in the .NET tab. I'm guessing MS may need to beef up the installer for
this scenario. In the meantime, you probably need to do this yourself."
What on earth, do WHAT myself?
A:
I found an install log showing that it was expected to be in
C:\WINDOWS\ADFS\System.Web.Security.SingleSignon.dll
on Windows Server 2003. You probably need to have active directory installed for it to appear there because I checked one of my 2003 servers without AD and it wasn't there.
Normally I would guess the DLL would be registered in the system-wide Global Assembly Cache (GAC), so you wouldn't have to know the actual path for it. If an assembly is registered in the GAC, then you can add a reference to it by bringing up the "Add Reference" dialog and clicking on the ".NET" Tab.
A:
You can find the specified namespace in this file: system.web.security.singlesignon.claimtransforms.dll
But this file isn't normally available; it is only installed in the GAC (Global Assembly Cache). You may find it under e.g. C:\Windows\assembly... and copy the DLL to another path. Then you can manually reference it within Visual Studio.
A:
For projects that target a specific environment (like the SharePoint object model), the recommended approach is to use a virtual PC with the required assemblies installed in the GAC. The ADFS assemblies ship only with Windows Server. If you find them and install them manually in your working environment (a desktop), some things (like debugging) may not be possible.
A:
If you're trying to add the assembly to the ".NET" tab in the Visual Studio "Add References" dialog box, there's a registry setting you need to make. KB30149 explains it in greater detail. The short version: You need to add an entry to the HKEY_CURRENT_USER\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders registry key.
If you're trying to locate a physical file corresponding to an assembly in the GAC, drop to a command prompt and go to %WINDIR%\Assembly (e.g., C:\WINDOWS\Assembly). Navigate around in there - that's where GAC'd assemblies live.
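As a rough sketch from a command prompt (the AssemblyFolders subkey name below is arbitrary and the ADFS path is the one mentioned earlier; verify both on your machine):
rem List matching assemblies in the GAC from a Visual Studio command prompt
gacutil /l System.Web.Security.SingleSignOn

rem Register a folder so its DLLs show up on the .NET tab of "Add Reference"
reg add "HKCU\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders\ADFS Assemblies" /ve /d "C:\WINDOWS\ADFS"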
|
Where can I find Microsoft assemblies that are not already in Visual Studio?
|
I figured someone can answer the question generally but if anyone wants to get specific I am trying to use:
using System.Web.Security.SingleSignOn;
using System.Web.Security.SingleSignOn.Authorization;
I've googled my brains out and this is the closest answer I found:
"We discussed this offline, but it looks like the ADFS assembly is GACed, but
not installed on the file system or registered with VS.NET so that it shows
up in the .NET tab. I'm guessing MS may need to beef up the installer for
this scenario. In the meantime, you probably need to do this yourself."
What on earth, do WHAT myself?
|
[
"I found an install log showing that it was expected to be in\n\nC:\\WINDOWS\\ADFS\\System.Web.Security.SingleSignon.dll\n\non Windows Server 2003. You probably need to have active directory installed for it to appear there because I checked one of my 2003 servers without AD and it wasn't there.\nNormally I would guess the DLL would be registered in the system-wide Global Assembly Cache (GAC), so you wouldn't have to know the actual path for it. If an assembly is registered in the GAC, then you can add a reference to it by bringing up the \"Add Reference\" dialog and clicking on the \".NET\" Tab.\n",
"You can find the specified namespace in this file: system.web.security.singlesignon.claimtransforms.dll\nBut this file isn't normaly available but only installed in the GAC (Global Assembly Cache). You may find it under e.g. c:\\window\\assembly... and copy the dll to another path. Then you can manual reference it within Visual Studio.\n",
"For projects using specific environment (like SharePoint object model)is recommended using virtual pc with installed in GAC assemblies. ADFS assemblies should have only Win server. If you find them and install manually in work environment (desktop) some possibilities (like debugging) will not impossible.\n",
"If you're trying to add the assembly to the \".NET\" tab in the Visual Studio \"Add References\" dialog box, there's a registry setting you need to make. KB30149 explains it in greater detail. The short version: You need to add an entry to the HKEY_CURRENT_USER\\SOFTWARE\\Microsoft\\.NETFramework\\AssemblyFolders registry key.\nIf you're trying to locate a physical file corresponding to an assembly in the GAC, drop to a command prompt and go to %WINDIR%\\Assembly (e.g., C:\\WINDOWS\\Assembly). Navigate around in there - that's where GAC'd assemblies live.\n"
] |
[
2,
1,
0,
0
] |
[] |
[] |
[
".net",
"adfs",
"assemblies",
"gac",
"visual_studio"
] |
stackoverflow_0000103178_.net_adfs_assemblies_gac_visual_studio.txt
|
Q:
Don't repeat yourself vs Internationalisation
A while back I was reading the W3C article on 'Re-using Strings in Scripted Content', which contains some useful advice on internationalisation, but which strikes me as at odds with the DRY (Don't Repeat Yourself) principle of eliminating repetitive code.
To take their example, we might have some code like this...
print "The printer is ";
if (printer.working) {
print "on.\n";
} else {
print "off.\n";
}
print "The stapler is ";
if (stapler.working) {
print "on.\n";
} else {
print "off.\n";
}
My instinct would be to eliminate the repetition roughly as follows...
report-state(printer, "printer");
report-state(stapler, "stapler");
function report-state(object, name) {
print "The "+name+" is ";
if (object.working) {
print "on\n";
} else {
print "off\n";
}
}
...but doing so would cause a difficulty in the code if we needed to localise it to Spanish because the word for 'on' is apparently different in those two cases.
So, I guess my question is, how have other developers approached balancing the DRY principle with internationalisation of their code?
Part of me wants to argue that internationalisation is one of those extreme programming “you aren't gonna need it” situations. On the flip side however, refactoring with the DRY principle in mind is supposed to balance this by making it easy to implement functionality as it’s required, not harder as it does here.
A:
I'd try to keep complete sentences in the language resource. As you said you might need different words in different contexts. But a bigger problem is that the order of sentences might be different in different languages. So building up strings from words can cause problems.
Just store
The printer is on
The printer is off
The stapler is on
The stapler is off
in the language resource for every language. The repetition here is less of a maintenance headache than trying to figure out where all the single words are going to pop up in your application.
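A minimal sketch of that approach, in Python purely for illustration (the keys and strings here are hypothetical):
MESSAGES = {
    "en": {
        ("printer", True):  "The printer is on",
        ("printer", False): "The printer is off",
        ("stapler", True):  "The stapler is on",
        ("stapler", False): "The stapler is off",
    },
    # every locale gets its own complete sentences, translated as whole units
}

def report_state(locale, device, working):
    # Look up the full sentence; no word-by-word assembly in code.
    return MESSAGES[locale][(device, working)]

print(report_state("en", "printer", True))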
A:
100% agree with Mendelt.
It is not only a maintenance problem, but can also be a linguistic one.
In all Latin languages the gender, number, and case of the subject affect other elements.
Example for Romanian
The printer is on: Imprimanta este pornită // feminine
The printer is off: Imprimanta este oprită
The stapler is on: Perforatorul este pornit // masculine
The stapler is off: Perforatorul este oprit
Also see http://www.mihai-nita.net/article.php?artID=20060430a
A:
I agree with Mendelt Siebenga when he says you should keep entire sentences or phrases in your language resource files. Differences in grammar will always prevent you from doing single word replacement across languages. This will still lead to less repetitive code than your first example because you only need to check the object type and its state, then print the appropriate message from the language resource.
A:
We try not to create message strings by program manipulation because the loc. team can't see them.
The loc. team actually prefer separate but nearly duplicate messages.
However they will accept parameterized messages.
E.g., "The %(appliance)% is %(on_or_off)%."
The parameters can break down but at least it's more obvious to the loc team when it will work and when it won't.
A:
I suppose it depends on the level of language quality that you are aiming to achieve.
By trying to minimise repetition of code that deals with these real language strings, you are just exposing yourself to a whole other layer of logic in the syntaxes and structures of different languages. There would be a massive amount of work involved in producing code which still retains the original structure of the language whilst minimising repetition.
You'd have to decide which was a more suitable approach to a particular problem; code that repeats itself, or code that tries to be a Jack of all Trades and accommodates countless rules of language (no doubt a maintenance nightmare).
Of course, you can strike a middle ground and minimise your code repetition but give up satisfactory grammatical eloquence. Take the example of Ultima Online - when it was localised, a string that previously read "A pile of 329 gold coins" became something like "A pile of gold coins: 329". Not great, but a fairly reasonable solution that lends itself easily to localisation.
A:
I would suggest using a CMS rather than hardcoding in your textual values to cover localisation.
|
Don't repeat yourself vs Internationalisation
|
A while back I was reading the W3C article on 'Re-using Strings in Scripted Content', which contains some useful advice on internationalisation, but which strikes me as at odds with the DRY (Don't Repeat Yourself) principle of eliminating repetitive code.
To take their example, we might have some code like this...
print "The printer is ";
if (printer.working) {
print "on.\n";
} else {
print "off.\n";
}
print "The stapler is ";
if (stapler.working) {
print "on.\n";
} else {
print "off.\n";
}
My instinct would be to eliminate the repetition roughly as follows...
report-state(printer, "printer");
report-state(stapler, "stapler");
function report-state(object, name) {
print "The "+name+" is ";
if (object.working) {
print "on\n";
} else {
print "off\n";
}
}
...but doing so would cause a difficulty in the code if we needed to localise it to Spanish because the word for 'on' is apparently different in those two cases.
So, I guess my question is, how have other developers approached balancing the DRY principle with internationalisation of their code?
Part of me wants to argue that internationalisation is one of those extreme programming “you aren't gonna need it” situations. On the flip side however, refactoring with the DRY principle in mind is supposed to balance this by making it easy to implement functionality as it’s required, not harder as it does here.
|
[
"I'd try to keep complete sentences in the language resource. As you said you might need different words in different contexts. But a bigger problem is that the order of sentences might be different in different languages. So building up strings from words can cause problems.\nJust store\nThe printer is on\nThe printer is off\nThe stapler is on\nThe stapler is off\n\nin the language resource for every language. The repetition here is less of a maintenance headache than trying to figure out where all the single words are going to pop up in your application.\n",
"100% agree with Mendelt.\nIt is not only a maintenance problem, but can also be a linguistic one.\nIn all Latin languages the gender, number, and case of the subject affect other elements.\nExample for Romanian\n The printer is on: Imprimanta este pornită // feminine\n The printer is off: Imprimanta este oprită\n The stapler is on: Perforatorul este pornit // masculine\n The stapler is off: Perforatorul este oprit\n\nAlso see http://www.mihai-nita.net/article.php?artID=20060430a\n",
"I agree with Mendelt Siebenga when he says you should keep entire sentences or phrases in your language resource files. Differences in grammar will always prevent you from doing single word replacement across languages. This will still lead to less repetitive code than your first example because you only need to check the object type and its state, then print the appropriate message from the language resource.\n",
"We try not to create message strings by program manipulation because the loc. team can't see them.\nThe loc. team actually prefer separate but nearly duplicate messages. \nHowever they will accept parameterized messages.\nE.g., \"The %(appliance)% is %(on_or_off)%.\"\nThe parameters can break down but at least it's more obvious to the loc team when it will work and when it won't.\n",
"I suppose it depends on the level of language quality that you are aiming to achieve.\nBy trying to minimise repetition of code that deals with these real language strings, you are just exposing yourself to a whole other layer of logic in the syntaxes and structures of different languages. There would be a massive amount of work involved in producing code which still retains the original structure of the language whilst minimising repetition.\nYou'd have to decide which was a more suitable approach to a particular problem; Code that repeats itself, or code that tries to be a Jack of all Trades and accomodates for countless rules of language (no doubt a maintenance nightmare).\nOf course, you can strike a middle-ground and minimise your code repitition but give up satisfactory grammatical eloquence. Take the example of Ultima Online - when it was localised, a string that previously read \"A pile of 329 gold coins\" became something like \"A pile of gold coins: 329\". Not great, but a fairly reasonable solution that lends itself easily to localisation.\n",
"I would suggest using a CMS rather than hardcoding in your textual values to cover localisation.\n"
] |
[
16,
7,
2,
2,
1,
0
] |
[] |
[] |
[
"internationalization"
] |
stackoverflow_0000056574_internationalization.txt
|
Q:
IIS Wildcard Mapping not working for ASP.NET
I've set up wildcard mapping on IIS 6, by adding "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll", and ensured "Verify that file exists" is not checked :
on the "websites" directory in IIS
on the website
However, after an iisreset, when I go to http://myserver/something.gif, I still get the IIS 404 error, not the ASP.NET one.
Is there something I missed ?
Precisions:
this is not for using ASP.NET MVC
i'd rather not use iis 404 custom error pages, as I have a httpmodule for logging errors (this is a low traffic internal site, so wildcard mapping performance penalty is not a problem ;))
A:
You need to add an HTTP Handler in your web config for gif files:
<system.web>
<httpHandlers>
<add path="*.gif" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="true"/>
</httpHandlers>
</system.web>
That forces .Net to handle the file, then you'll get the .Net error.
Server Error in '/' Application.
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /test.gif
Version Information: Microsoft .NET Framework Version:2.0.50727.1433; ASP.NET Version:2.0.50727.1433
A:
You can try use custom errors to do this.
Go into Custom Errors in your website properties and set the 404 to point to a URL in your site, like /404.aspx if that exists.
With aspnet_isapi, you want to use a HttpModule to handle your wildcards.
like http://urlrewriter.net/
A:
You can't use wildcard mapping without using ASP.NET routing, URL rewriting, or some other URL mapping mechanism.
If you want to do 404, you have to configure it in web.config -> Custom errors.
Then you can redirect to other pages if you want.
New in 3.5 SP1, you set the RedirectMode to "responseRewrite" to avoid a redirect to a custom error page and leave the URL in the browser untouched.
Another way to do it is to catch the error in Global.asax and redirect. Please comment on the answer if you need further instructions.
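For reference, a customErrors block along those lines might look like the following (the page names are placeholders):
<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/Error.aspx">
  <error statusCode="404" redirect="~/404.aspx" />
</customErrors>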
|
IIS Wildcard Mapping not working for ASP.NET
|
I've set up wildcard mapping on IIS 6, by adding "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll", and ensured "Verify that file exists" is not checked :
on the "websites" directory in IIS
on the website
However, after an iisreset, when I go to http://myserver/something.gif, I still get the IIS 404 error, not the ASP.NET one.
Is there something I missed ?
Precisions:
this is not for using ASP.NET MVC
i'd rather not use iis 404 custom error pages, as I have a httpmodule for logging errors (this is a low traffic internal site, so wildcard mapping performance penalty is not a problem ;))
|
[
"You need to add an HTTP Handler in your web config for gif files:\n <system.web>\n <httpHandlers>\n <add path=\"*.gif\" verb=\"GET,HEAD\" type=\"System.Web.StaticFileHandler\" validate=\"true\"/>\n </httpHandlers>\n </system.web>\n\nThat forces .Net to handle the file, then you'll get the .Net error.\nServer Error in '/' Application.\nThe resource cannot be found. \nDescription: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. \nRequested URL: /test.gif\n\nVersion Information: Microsoft .NET Framework Version:2.0.50727.1433; ASP.NET Version:2.0.50727.1433 \n",
"You can try use custom errors to do this.\nGo into Custom Errors in you Website properties and set the 404 to point to a URL in your site. Like /404.aspx is that exists.\nWith aspnet_isapi, you want to use a HttpModule to handle your wildcards.\nlike http://urlrewriter.net/\n",
"You can't use wilcard mapping without using ASP.net Routing or URLrewriting or some url mapping mechanism.\nIf you want to do 404, you have to configure it in web.config -> Custom errors.\nThen you can redirect to other pages if you want.\nNew in 3.5 SP1, you set the RedirectMode to \"responseRewrite\" to avoid a redirect to a custom error page and leave the URL in the browser untouched.\nOther way to do it, will be catching the error in global.aspx, and redirecting. Please comment on the answer if you need further instructions.\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"asp.net",
"iis"
] |
stackoverflow_0000101326_asp.net_iis.txt
|
Q:
Locking File using Apache Server and TortoiseSVN
I am setting up an Apache server with TortoiseSVN for a local source code repository. Currently, for trial purposes, I am setting up only two users.
Is it possible for the administrator to set things up so that a file gets compulsorily locked once it is checked out (copied to a working directory) by someone?
Abhijit Dhopate
A:
The main reason you might want to do this on subversion is for binary files (i.e. images, etc.) that are difficult or impossible to 'merge'. In those cases, each user can request a lock on a file. There is also a svn property (needs-lock) that can be applied to files that makes them read-only on checkout, and read-write when you lock, so that you remember to request the lock before editing.
See the chapter on locking in the svn book.
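For example, from the command line (TortoiseSVN exposes the same operations in its context menu; the file path here is just a placeholder):
# Mark the file so working copies get it read-only until locked
svn propset svn:needs-lock yes images/logo.png
svn commit -m "Require a lock for logo.png"

# Take the lock before editing; committing the change releases it
svn lock images/logo.png -m "Editing the logo"
svn commit -m "Updated logo" images/logo.png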
A:
Wouldn't that defeat one of the purposes of a concurrent versioning system like SubVersion? Generally, you'll check out a block of files, but the server doesn't know whether anyone is editing those files. Why not allow another user access to those files and deal with the results if a conflict emerges?
|
Locking File using Apache Server and TortoiseSVN
|
I am setting up an Apache server with TortoiseSVN for a local source code repository. Currently, for trial purposes, I am setting up only two users.
Is it possible for the administrator to set things up so that a file gets compulsorily locked once it is checked out (copied to a working directory) by someone?
Abhijit Dhopate
|
[
"The main reason you might want to do this on subversion is for binary files (i.e. images, etc.) that are difficult or impossible to 'merge'. In those cases, each user can request a lock on a file. There is also a svn property (needs-lock) that can be applied to files that makes them read-only on checkout, and read-write when you lock, so that you remember to request the lock before editing.\nSee the chapter on locking in the svn book.\n",
"Wouldn't that defeat one of the purposes of a concurrent versioning system like SubVersion? Generally, you'll check out a block of files, but the server doesn't know whether anyone is editing those files. Why not allow another user access to those files and deal with the results if a conflict emerges?\n"
] |
[
2,
0
] |
[] |
[] |
[
"apache",
"locking",
"svn",
"tortoisesvn"
] |
stackoverflow_0000103548_apache_locking_svn_tortoisesvn.txt
|
Q:
Why don't modules always honor 'require' in ruby?
(sorry I should have been clearer with the code the first time I posted this. Hope this makes sense)
File "size_specification.rb"
class SizeSpecification
def fits?
end
end
File "some_module.rb"
require 'size_specification'
module SomeModule
def self.sizes
YAML.load_file(File.dirname(__FILE__) + '/size_specification_data.yml')
end
end
File "size_specification_data.yml
---
- !ruby/object:SizeSpecification
height: 250
width: 300
Then when I call
SomeModule.sizes.first.fits?
I get an exception because the "sizes" are Objects, not SizeSpecifications, so they don't have a "fits" function.
A:
Are your settings and ruby installation ok? I created those 3 files and wrote what follows in "test.rb"
require 'yaml'
require "some_module"
SomeModule.sizes.first.fits?
Then I ran it.
$ ruby --version
ruby 1.8.6 (2008-06-20 patchlevel 230) [i486-linux]
$ ruby -w test.rb
$
No errors!
A:
On second reading I'm a little confused: you seem to want to mix the class into the module, which is probably not so advisable. Also, is the YAML supposed to load an array of SizeSpecifications?
It appears to be that you're not mixing the Module into your class. If I run the test in irb then the require throws a LoadError. So I assume you've put two files together, if not dump it.
Normally you'd write the functionality in the module, then mix that into the class. so you may modify your code like this:
class SizeSpecification
include SomeModule
def fits?
end
end
Which will allow you to then say:
SizeSpecification::SomeModule.sizes
I think you should also be able to say:
SizeSpecification.sizes
However that requires you to take the self off the prefix of the sizes method definition.
Does that help?
A:
The question code got me a little confused.
In general with Ruby, if that happens it's a good sign that I am trying to do things the wrong way.
It might be better to ask a question related to your actual intended outcome, rather than the specifics of a particular 'attack' on your problem. Then we can say 'nonono, don't do that, do THIS' or 'ahhhhh, now I understand what you wanna do'
|
Why don't modules always honor 'require' in ruby?
|
(sorry I should have been clearer with the code the first time I posted this. Hope this makes sense)
File "size_specification.rb"
class SizeSpecification
def fits?
end
end
File "some_module.rb"
require 'size_specification'
module SomeModule
def self.sizes
YAML.load_file(File.dirname(__FILE__) + '/size_specification_data.yml')
end
end
File "size_specification_data.yml
---
- !ruby/object:SizeSpecification
height: 250
width: 300
Then when I call
SomeModule.sizes.first.fits?
I get an exception because the "sizes" are Objects, not SizeSpecifications, so they don't have a "fits" function.
|
[
"Are your settings and ruby installation ok? I created those 3 files and wrote what follows in \"test.rb\"\nrequire 'yaml'\nrequire \"some_module\"\n\nSomeModule.sizes.first.fits?\n\nThen I ran it.\n$ ruby --version\nruby 1.8.6 (2008-06-20 patchlevel 230) [i486-linux]\n$ ruby -w test.rb \n$\n\nNo errors!\n",
"On second reading I'm a little confused, you seem to want to mix the class into module, which is porbably not so advisable. Also is the YAML supposed to load an array of the SizeSpecifications?\nIt appears to be that you're not mixing the Module into your class. If I run the test in irb then the require throws a LoadError. So I assume you've put two files together, if not dump it.\nNormally you'd write the functionality in the module, then mix that into the class. so you may modify your code like this:\nclass SizeSpecification\n include SomeModule\n def fits? \n end\nend\n\nWhich will allow you to then say:\nSizeSpecification::SomeModule.sizes\n\nI think you should also be able to say:\nSizeSpecification.sizes\n\nHowever that requires you to take the self off the prefix of the sizes method definition.\nDoes that help?\n",
"The question code got me a little confused.\nIn general with Ruby, if that happens it's a good sign that I am trying to do things the wrong way.\nIt might be better to ask a question related to your actual intended outcome, rather than the specifics of a particular 'attack' on your problem. They we can say 'nonono, don't do that, do THIS' or 'ahhhhh, now I understand what you wanna do'\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"ruby"
] |
stackoverflow_0000079466_ruby.txt
|
Q:
Algorithm Issue: letter combinations
I'm trying to write a piece of code that will do the following:
Take the numbers 0 to 9 and assign one or more letters to this number. For example:
0 = N,
1 = L,
2 = T,
3 = D,
4 = R,
5 = V or F,
6 = B or P,
7 = Z,
8 = H or CH or J,
9 = G
When I have a code like 0123, it's an easy job to encode it. It will obviously make up the code NLTD. When a number like 5,6 or 8 is introduced, things get different. A number like 051 would result in more than one possibility:
NVL and NFL
It should be obvious that this gets even "worse" with longer numbers that include several digits like 5,6 or 8.
Being pretty bad at mathematics, I have not yet been able to come up with a decent solution that will allow me to feed the program a bunch of numbers and have it spit out all the possible letter combinations. So I'd love some help with it, 'cause I can't seem to figure it out. Dug up some information about permutations and combinations, but no luck.
Thanks for any suggestions/clues. The language I need to write the code in is PHP, but any general hints would be highly appreciated.
Update:
Some more background: (and thanks a lot for the quick responses!)
The idea behind my question is to build a script that will help people to easily convert numbers they want to remember to words that are far more easily remembered. This is sometimes referred to as "pseudo-numerology".
I want the script to give me all the possible combinations that are then held against a database of stripped words. These stripped words just come from a dictionary and have all the letters I mentioned in my question stripped out of them. That way, the number to be encoded can usually easily be related to a one or more database records. And when that happens, you end up with a list of words that you can use to remember the number you wanted to remember.
A:
It can be done easily recursively.
The idea is that to handle the whole code of size n, you must handle first the n - 1 digits.
Once you have all answers for n-1 digits, the answers for the whole are deduced by appending to them the correct(s) char(s) for the last one.
A:
There's actually a much better solution than enumerating all the possible translations of a number and looking them up: Simply do the reverse computation on every word in your dictionary, and store the string of digits in another field. So if your mapping is:
0 = N,
1 = L,
2 = T,
3 = D,
4 = R,
5 = V or F,
6 = B or P,
7 = Z,
8 = H or CH or J,
9 = G
your reverse mapping is:
N = 0,
L = 1,
T = 2,
D = 3,
R = 4,
V = 5,
F = 5,
B = 6,
P = 6,
Z = 7,
H = 8,
J = 8,
G = 9
Note there's no mapping for 'ch', because the 'c' will be dropped, and the 'h' will be converted to 8 anyway.
Then, all you have to do is iterate through each letter in the dictionary word, output the appropriate digit if there's a match, and do nothing if there isn't.
Store all the generated digit strings as another field in the database. When you want to look something up, just perform a simple query for the number entered, instead of having to do tens (or hundreds, or thousands) of lookups of potential words.
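A rough PHP sketch of that reverse pass (the function name and sample word are made up):
$reverseMap = array(
    'N' => '0', 'L' => '1', 'T' => '2', 'D' => '3', 'R' => '4',
    'V' => '5', 'F' => '5', 'B' => '6', 'P' => '6', 'Z' => '7',
    'H' => '8', 'J' => '8', 'G' => '9',
);

function wordToDigits($word, $reverseMap) {
    $digits = '';
    foreach (str_split(strtoupper($word)) as $letter) {
        if (isset($reverseMap[$letter])) {
            $digits .= $reverseMap[$letter];
        }
        // Letters with no mapping (vowels, C, etc.) are simply skipped.
    }
    return $digits;
}

echo wordToDigits('Novel', $reverseMap);  // prints "051"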
A:
The general structure you want to hold your number -> letter assignments is an array or arrays, similar to:
// 0 = N, 1 = L, 2 = T, 3 = D, 4 = R, 5 = V or F, 6 = B or P, 7 = Z,
// 8 = H or CH or J, 9 = G
$numberMap = array (
    0 => array("N"),
    1 => array("L"),
    2 => array("T"),
    3 => array("D"),
    4 => array("R"),
    5 => array("V", "F"),
    6 => array("B", "P"),
    7 => array("Z"),
    8 => array("H", "CH", "J"),
    9 => array("G"),
);
Then, a bit of recursive logic gives us a function similar to:
function GetEncoding($number) {
    global $numberMap;
    $ret = array();
    for ($i = 0; $i < strlen($number); $i++) {
        // We're just translating here, nothing special.
        // $var + 0 is a cheap way of forcing a variable to be numeric
        $ret[] = $numberMap[$number[$i]+0];
    }
    return $ret;
}
function PrintEncoding($enc, $string = "") {
// If we're at the end of the line, then print!
if (count($enc) === 0) {
print $string."\n";
return;
}
// Otherwise, soldier on through the possible values.
// Grab the next 'letter' and cycle through the possibilities for it.
foreach ($enc[0] as $letter) {
// And call this function again with it!
PrintEncoding(array_slice($enc, 1), $string.$letter);
}
}
Three cheers for recursion! This would be used via:
PrintEncoding(GetEncoding("052384"));
And if you really want it as an array, play with output buffering and explode using "\n" as your split string.
A:
This kind of problem are usually resolved with recursion. In ruby, one (quick and dirty) solution would be
@values = Hash.new([])
@values["0"] = ["N"]
@values["1"] = ["L"]
@values["2"] = ["T"]
@values["3"] = ["D"]
@values["4"] = ["R"]
@values["5"] = ["V","F"]
@values["6"] = ["B","P"]
@values["7"] = ["Z"]
@values["8"] = ["H","CH","J"]
@values["9"] = ["G"]
def find_valid_combinations(buffer,number)
first_char = number.shift
@values[first_char].each do |key|
if(number.length == 0) then
puts buffer + key
else
find_valid_combinations(buffer + key,number.dup)
end
end
end
find_valid_combinations("",ARGV[0].split(""))
And if you run this from the command line you will get:
$ ruby r.rb 051
NVL
NFL
This is related to brute-force search and backtracking
A:
Here is a recursive solution in Python.
#!/usr/bin/env/python
import sys
ENCODING = {'0':['N'],
'1':['L'],
'2':['T'],
'3':['D'],
'4':['R'],
'5':['V', 'F'],
'6':['B', 'P'],
'7':['Z'],
'8':['H', 'CH', 'J'],
'9':['G']
}
def decode(str):
if len(str) == 0:
return ''
elif len(str) == 1:
return ENCODING[str]
else:
result = []
for prefix in ENCODING[str[0]]:
result.extend([prefix + suffix for suffix in decode(str[1:])])
return result
if __name__ == '__main__':
print decode(sys.argv[1])
Example output:
$ ./demo 1
['L']
$ ./demo 051
['NVL', 'NFL']
$ ./demo 0518
['NVLH', 'NVLCH', 'NVLJ', 'NFLH', 'NFLCH', 'NFLJ']
A:
Could you do the following:
Create a results array.
Create an item in the array with value ""
Loop through the numbers, say 051 analyzing each one individually.
Each time a 1 to 1 match between a number is found add the correct value to all items in the results array.
So "" becomes N.
Each time a 1 to many match is found, add new rows to the results array with one option, and update the existing results with the other option.
So N becomes NV and a new item is created NF
Then the last number is a 1 to 1 match so the items in the results array become
NVL and NFL
To produce the results loop through the results array, printing them, or whatever.
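That incremental idea translates into PHP roughly as follows (a sketch; the names are made up):
$numberMap = array(
    '0' => array('N'), '1' => array('L'), '2' => array('T'), '3' => array('D'),
    '4' => array('R'), '5' => array('V', 'F'), '6' => array('B', 'P'),
    '7' => array('Z'), '8' => array('H', 'CH', 'J'), '9' => array('G'),
);

function encodings($digits, $numberMap) {
    $results = array('');                     // start with one empty item
    foreach (str_split($digits) as $digit) {
        $next = array();
        foreach ($numberMap[$digit] as $letter) {
            foreach ($results as $prefix) {
                $next[] = $prefix . $letter;  // branch once per letter choice
            }
        }
        $results = $next;
    }
    return $results;
}

print_r(encodings('051', $numberMap));        // NVL and NFL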
A:
Let pn be a list of all possible letter combinations of a given number string s up to the nth digit.
Then, the following algorithm will generate pn+1:
digit = s[n+1];
foreach(letter l that digit maps to)
{
foreach(entry e in p(n))
{
newEntry = append l to e;
add newEntry to p(n+1);
}
}
The first iteration is somewhat of a special case, since p-1 is undefined. You can simply initialize p0 as the list of all possible characters for the first character.
So, your 051 example:
Iteration 0:
p(0) = {N}
Iteration 1:
digit = 5
foreach({V, F})
{
foreach(p(0) = {N})
{
newEntry = N + V or N + F
p(1) = {NV, NF}
}
}
Iteration 2:
digit = 1
foreach({L})
{
foreach(p(1) = {NV, NF})
{
newEntry = NV + L or NF + L
p(2) = {NVL, NFL}
}
}
A:
The form you want is probably something like:
function combinations( $str ){
    global $codes;   // digit => array-of-letters map
    $l = strlen( $str );
    $results = array( );
    if ($l == 0) { return $results; }
    if ($l == 1)
    {
        foreach( $codes[ $str[0] ] as $code )
        {
            $results[] = $code;
        }
        return $results;
    }
    $cur = $str[0];
    $combs = combinations( substr( $str, 1 ) );
    foreach ($codes[ $cur ] as $code)
    {
        foreach ($combs as $comb)
        {
            $results[] = $code.$comb;
        }
    }
    return $results;
}
This is ugly, pidgin-php so please verify it first. The basic idea is to generate every combination of the string from [1..n] and then prepend to the front of all those combinations each possible code for str[0]. Bear in mind that in the worst case this will have performance exponential in the length of your string, because that much ambiguity is actually present in your coding scheme.
A:
The trick is not only to generate all possible letter combinations that match a given number, but to select the letter sequence that is easiest to remember. A suggestion would be to run the soundex algorithm on each of the sequences and try to match against an English language dictionary such as WordNet to find the most 'real-word-sounding' sequences.
|
Algorithm Issue: letter combinations
|
I'm trying to write a piece of code that will do the following:
Take the numbers 0 to 9 and assign one or more letters to this number. For example:
0 = N,
1 = L,
2 = T,
3 = D,
4 = R,
5 = V or F,
6 = B or P,
7 = Z,
8 = H or CH or J,
9 = G
When I have a code like 0123, it's an easy job to encode it. It will obviously make up the code NLTD. When a number like 5,6 or 8 is introduced, things get different. A number like 051 would result in more than one possibility:
NVL and NFL
It should be obvious that this gets even "worse" with longer numbers that include several digits like 5,6 or 8.
Being pretty bad at mathematics, I have not yet been able to come up with a decent solution that will allow me to feed the program a bunch of numbers and have it spit out all the possible letter combinations. So I'd love some help with it, 'cause I can't seem to figure it out. Dug up some information about permutations and combinations, but no luck.
Thanks for any suggestions/clues. The language I need to write the code in is PHP, but any general hints would be highly appreciated.
Update:
Some more background: (and thanks a lot for the quick responses!)
The idea behind my question is to build a script that will help people to easily convert numbers they want to remember to words that are far more easily remembered. This is sometimes referred to as "pseudo-numerology".
I want the script to give me all the possible combinations that are then held against a database of stripped words. These stripped words just come from a dictionary and have all the letters I mentioned in my question stripped out of them. That way, the number to be encoded can usually easily be related to a one or more database records. And when that happens, you end up with a list of words that you can use to remember the number you wanted to remember.
|
[
"It can be done easily recursively.\nThe idea is that to handle the whole code of size n, you must handle first the n - 1 digits.\nOnce you have all answers for n-1 digits, the answers for the whole are deduced by appending to them the correct(s) char(s) for the last one.\n",
"There's actually a much better solution than enumerating all the possible translations of a number and looking them up: Simply do the reverse computation on every word in your dictionary, and store the string of digits in another field. So if your mapping is:\n0 = N,\n1 = L,\n2 = T,\n3 = D,\n4 = R,\n5 = V or F,\n6 = B or P,\n7 = Z,\n8 = H or CH or J,\n9 = G\n\nyour reverse mapping is:\nN = 0,\nL = 1,\nT = 2,\nD = 3,\nR = 4,\nV = 5,\nF = 5,\nB = 6,\nP = 6,\nZ = 7,\nH = 8,\nJ = 8,\nG = 9\n\nNote there's no mapping for 'ch', because the 'c' will be dropped, and the 'h' will be converted to 8 anyway.\nThen, all you have to do is iterate through each letter in the dictionary word, output the appropriate digit if there's a match, and do nothing if there isn't.\nStore all the generated digit strings as another field in the database. When you want to look something up, just perform a simple query for the number entered, instead of having to do tens (or hundreds, or thousands) of lookups of potential words.\n",
"The general structure you want to hold your number -> letter assignments is an array or arrays, similar to:\n// 0 = N, 1 = L, 2 = T, 3 = D, 4 = R, 5 = V or F, 6 = B or P, 7 = Z, \n// 8 = H or CH or J, 9 = G\n$numberMap = new Array (\n 0 => new Array(\"N\"),\n 1 => new Array(\"L\"),\n 2 => new Array(\"T\"),\n 3 => new Array(\"D\"),\n 4 => new Array(\"R\"),\n 5 => new Array(\"V\", \"F\"),\n 6 => new Array(\"B\", \"P\"),\n 7 => new Array(\"Z\"),\n 8 => new Array(\"H\", \"CH\", \"J\"),\n 9 => new Array(\"G\"),\n);\n\nThen, a bit of recursive logic gives us a function similar to:\nfunction GetEncoding($number) {\n $ret = new Array();\n for ($i = 0; $i < strlen($number); $i++) {\n // We're just translating here, nothing special.\n // $var + 0 is a cheap way of forcing a variable to be numeric\n $ret[] = $numberMap[$number[$i]+0];\n }\n}\n\nfunction PrintEncoding($enc, $string = \"\") {\n // If we're at the end of the line, then print!\n if (count($enc) === 0) {\n print $string.\"\\n\";\n return;\n }\n\n // Otherwise, soldier on through the possible values.\n // Grab the next 'letter' and cycle through the possibilities for it.\n foreach ($enc[0] as $letter) {\n // And call this function again with it!\n PrintEncoding(array_slice($enc, 1), $string.$letter);\n }\n}\n\nThree cheers for recursion! This would be used via:\nPrintEncoding(GetEncoding(\"052384\"));\n\nAnd if you really want it as an array, play with output buffering and explode using \"\\n\" as your split string.\n",
"This kind of problem are usually resolved with recursion. In ruby, one (quick and dirty) solution would be\n@values = Hash.new([])\n\n\n@values[\"0\"] = [\"N\"] \n@values[\"1\"] = [\"L\"] \n@values[\"2\"] = [\"T\"] \n@values[\"3\"] = [\"D\"] \n@values[\"4\"] = [\"R\"] \n@values[\"5\"] = [\"V\",\"F\"] \n@values[\"6\"] = [\"B\",\"P\"] \n@values[\"7\"] = [\"Z\"] \n@values[\"8\"] = [\"H\",\"CH\",\"J\"] \n@values[\"9\"] = [\"G\"]\n\ndef find_valid_combinations(buffer,number)\n first_char = number.shift\n @values[first_char].each do |key|\n if(number.length == 0) then\n puts buffer + key\n else\n find_valid_combinations(buffer + key,number.dup)\n end\n end\nend\n\nfind_valid_combinations(\"\",ARGV[0].split(\"\"))\n\nAnd if you run this from the command line you will get:\n$ ruby r.rb 051\nNVL\nNFL\n\nThis is related to brute-force search and backtracking\n",
"Here is a recursive solution in Python.\n#!/usr/bin/env/python\n\nimport sys\n\nENCODING = {'0':['N'],\n '1':['L'],\n '2':['T'],\n '3':['D'],\n '4':['R'],\n '5':['V', 'F'],\n '6':['B', 'P'],\n '7':['Z'],\n '8':['H', 'CH', 'J'],\n '9':['G']\n }\n\ndef decode(str):\n if len(str) == 0:\n return ''\n elif len(str) == 1:\n return ENCODING[str]\n else:\n result = []\n for prefix in ENCODING[str[0]]:\n result.extend([prefix + suffix for suffix in decode(str[1:])])\n return result\n\nif __name__ == '__main__':\n print decode(sys.argv[1])\n\nExample output:\n$ ./demo 1\n['L']\n$ ./demo 051\n['NVL', 'NFL']\n$ ./demo 0518\n['NVLH', 'NVLCH', 'NVLJ', 'NFLH', 'NFLCH', 'NFLJ']\n\n",
"Could you do the following:\nCreate a results array.\nCreate an item in the array with value \"\"\nLoop through the numbers, say 051 analyzing each one individually.\nEach time a 1 to 1 match between a number is found add the correct value to all items in the results array.\nSo \"\" becomes N.\nEach time a 1 to many match is found, add new rows to the results array with one option, and update the existing results with the other option.\nSo N becomes NV and a new item is created NF\nThen the last number is a 1 to 1 match so the items in the results array become\nNVL and NFL\nTo produce the results loop through the results array, printing them, or whatever.\n",
"Let pn be a list of all possible letter combinations of a given number string s up to the nth digit.\nThen, the following algorithm will generate pn+1:\ndigit = s[n+1];\nforeach(letter l that digit maps to)\n{\n foreach(entry e in p(n))\n {\n newEntry = append l to e;\n add newEntry to p(n+1);\n }\n}\n\nThe first iteration is somewhat of a special case, since p-1 is undefined. You can simply initialize p0 as the list of all possible characters for the first character.\nSo, your 051 example:\nIteration 0:\np(0) = {N}\n\nIteration 1:\ndigit = 5\nforeach({V, F})\n{\n foreach(p(0) = {N})\n {\n newEntry = N + V or N + F\n p(1) = {NV, NF}\n }\n}\n\nIteration 2:\ndigit = 1\nforeach({L})\n{\n foreach(p(1) = {NV, NF})\n {\n newEntry = NV + L or NF + L\n p(2) = {NVL, NFL}\n }\n}\n\n",
"The form you want is probably something like:\nfunction combinations( $str ){\n$l = len( $str );\n$results = array( );\nif ($l == 0) { return $results; }\nif ($l == 1)\n{ \n foreach( $codes[ $str[0] ] as $code )\n {\n $results[] = $code;\n }\n return $results;\n}\n$cur = $str[0];\n$combs = combinations( substr( $str, 1, $l ) );\nforeach ($codes[ $cur ] as $code)\n{\n foreach ($combs as $comb)\n {\n $results[] = $code.$comb;\n }\n}\nreturn $results;}\n\nThis is ugly, pidgin-php so please verify it first. The basic idea is to generate every combination of the string from [1..n] and then prepend to the front of all those combinations each possible code for str[0]. Bear in mind that in the worst case this will have performance exponential in the length of your string, because that much ambiguity is actually present in your coding scheme. \n",
"The trick is not only to generate all possible letter combinations that match a given number, but to select the letter sequence that is most easy to remember. A suggestion would be to run the soundex algorithm on each of the sequence and try to match against an English language dictionary such as Wordnet to find the most 'real-word-sounding' sequences.\n"
] |
[
8,
3,
2,
2,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"algorithm",
"combinations",
"unique"
] |
stackoverflow_0000102468_algorithm_combinations_unique.txt
|
Q:
Global vs Universal Active Directory Group access for a web app
I have a SQL Server 2000, C# & ASP.net web app. We want to control access to it by using Active Directory groups. I can get authentication to work if the group I put in is a 'Global' but not if the group is 'Universal'.
How can I make this work with 'Universal' groups as well?
Here's my authorization block:
<authorization>
<allow roles="domain\Group Name Here"/>
<allow roles="domain\Group Name Here2"/>
<allow roles="domain\Group Name Here3"/>
<deny users="*"/>
</authorization>
A:
Depending on your Active Directory topology, you might have to wait for the Universal Group membership to replicate around to all the Domain Controllers. Active Directory recommends the following though:
Create a Global group for each domain, e.g., "Domain A Authorized Users", "Domain B Authorized Users"
Put the users you want from Domain A in the "Domain A Authorized Users" group, etc
Create a Universal group in the root domain "All Authorized Users"
Put the Global groups in the Universal group
Secure the resource using the Universal group: <allow roles="root domain\All Authorized Users"/>
Wait for replication
One advantage of this scheme is that when you add a new user to one of the Global groups, you won't have to wait for GC replication.
A:
Turns out I needed to use the "pre-Windows 2000" group name, not the regular one.
|
Global vs Universal Active Directory Group access for a web app
|
I have a SQL Server 2000, C# & ASP.net web app. We want to control access to it by using Active Directory groups. I can get authentication to work if the group I put in is a 'Global' but not if the group is 'Universal'.
How can I make this work with 'Universal' groups as well?
Here's my authorization block:
<authorization>
<allow roles="domain\Group Name Here"/>
<allow roles="domain\Group Name Here2"/>
<allow roles="domain\Group Name Here3"/>
<deny users="*"/>
</authorization>
|
[
"Depending on your Active Directory topology, you might have to wait for the Universal Group membership to replicate around to all the Domain Controllers. Active Directory recommends the following though:\n\nCreate a Global group for each domain, e.g., \"Domain A Authorized Users\", \"Domain B Authorized Users\"\nPut the users you want from Domain A in the \"Domain A Authorized Users\" group, etc\nCreate a Universal group in the root domain \"All Authorized Users\"\nPut the Global groups in the Universal group\nSecure the resource using the Universal group: <allow roles=\"root domain\\All Authorized Users/>\nWait for replication\n\nOne advantage of this scheme is that when you add a new user to one of the Global groups, you won't have to wait for GC replication.\n",
"Turns out I needed to use the \"Pre Win2000\" id not the regular one.\n"
] |
[
1,
0
] |
[] |
[] |
[
"active_directory",
"asp.net",
"authentication",
"web_applications"
] |
stackoverflow_0000092413_active_directory_asp.net_authentication_web_applications.txt
|
Q:
Localising date format descriptors
What is the best way to localise a date format descriptor?
As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The .NET framework provides some very good localisation support, so it's trivial to parse dates according to the user's culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy, which are interchangeable in most cultures).
What is the best way to do this in a way that makes sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters)?
A:
Just use ISO-8601. It's an international standard.
Date and time (current at page generation) expressed according to ISO 8601:
Date: 2014-07-05
Combined date and time in UTC: 2014-07-05T04:00:25+00:00
2014-07-05T04:00:25Z
Week: 2014-W27
Date with week number: 2014-W27-6
Ordinal date: 2014-186
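As a quick illustration, a hypothetical console sketch of such a format hint:
using System;
using System.Globalization;

class IsoHintDemo
{
    static void Main()
    {
        // ISO 8601 is culture-neutral, so the same hint works everywhere.
        string isoToday = DateTime.Today.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture);
        Console.WriteLine("Enter the date as yyyy-MM-dd (e.g. " + isoToday + ")");
    }
}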
A:
I have to agree with the OP 'wrong' dates really jar with my DD/MM/YYYY upbringing and I find ISO 8601 dates and times extremely easy to work with. For once the standard got it right and engtech has the obvious answer that doesn't require localisation.
I was going to report the birthday input form on stack overflow as a bug because of how much of a sore thumb it is to the majority of the world.
A:
Here is my current method. Any suggestions?
Regex singleMToDoubleRegex = new Regex("(?<!m)m(?!m)");
Regex singleDToDoubleRegex = new Regex("(?<!d)d(?!d)");
CultureInfo currentCulture = CultureInfo.CurrentUICulture;
// If the culture is netural there is no date pattern to use, so use the default.
if (currentCulture.IsNeutralCulture)
{
currentCulture = CultureInfo.InvariantCulture;
}
// Massage the format into a more general user friendly form.
string shortDatePattern = currentCulture.DateTimeFormat.ShortDatePattern.ToLower();
shortDatePattern = singleMToDoubleRegex.Replace(shortDatePattern, "mm");
shortDatePattern = singleDToDoubleRegex.Replace(shortDatePattern, "dd");
A:
The trouble with international standards is that pretty much no one uses them. I try where I can, but I am forced to use dd/mm/yyyy almost everywhere in real life, which means I am so used to it that it's always a conscious process to use ISO-8601. For the majority of people who don't even try to use ISO-8601 it's even worse. If you can internationalize where you can, I think it's a great advantage.
A:
How about giving the format (mm/dd/yyyy or dd/mm/yyyy) followed by a printout of today's date in the user's culture? MSDN has an article on formatting a DateTime for the person's culture using the CultureInfo object, which might be helpful in doing this. A combination of the format (which most people are familiar with) and the current date represented in that format should be enough of a clue to the person on how they should enter the date. (Also include a calendar control for those who still can't figure it out.)
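A minimal C# sketch of that suggestion (the hint wording and class name are made up for illustration): it combines the culture's ShortDatePattern with today's date rendered in the same culture.
using System;
using System.Globalization;

class DateHintExample
{
    static void Main()
    {
        // Culture of the current user/thread; substitute any CultureInfo you need.
        CultureInfo culture = CultureInfo.CurrentCulture;
        string pattern = culture.DateTimeFormat.ShortDatePattern;

        // e.g. "Enter a date (dd/MM/yyyy), for example 17/09/2008" under en-GB.
        string hint = string.Format(culture,
            "Enter a date ({0}), for example {1}",
            pattern,
            DateTime.Today.ToString("d", culture));

        Console.WriteLine(hint);
    }
}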
A:
A short form is convenient and helps avoid spelling mistakes. Localize as applicable, but be sure to display the expected format (do not leave the user blind). Provide a date-picker control as an optional aide to filling in the field.
As an extra, on-the-fly parsing and display of the date in long form might help too.
A:
Best option: I would instead recommend using a standard date picker.
Alternative: every time the content of the edit control changes, parse it and display (in a separate control?) the long format of the date (i.e. input "03/04/09" displays "Your input: March 4, 2009").
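A rough C# sketch of that alternative, assuming it is called from the edit control's change event; the method name and messages are hypothetical.
using System;
using System.Globalization;

static class DateEchoExample
{
    // Returns the text to show next to the edit control after each change.
    public static string Describe(string userInput)
    {
        CultureInfo culture = CultureInfo.CurrentCulture;
        DateTime parsed;
        if (DateTime.TryParse(userInput, culture, DateTimeStyles.None, out parsed))
        {
            // "03/04/09" becomes e.g. "Your input: 03 April 2009" under en-GB.
            return "Your input: " + parsed.ToString("D", culture);
        }
        return "Not a recognisable date yet";
    }
}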
|
Localising date format descriptors
|
What is the best way to localise a date format descriptor?
As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The .NET framework provides some very good localisation support, so it's trivial to parse dates according to the user's culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy, which are interchangeable in most cultures).
What is the best way to do this in a way that makes sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters)?
|
[
"Just use ISO-8601. It's an international standard.\nDate and time (current at page generation) expressed according to ISO 8601:\nDate: 2014-07-05\nCombined date and time in UTC: 2014-07-05T04:00:25+00:00\n 2014-07-05T04:00:25Z\nWeek: 2014-W27\nDate with week number: 2014-W27-6\nOrdinal date: 2014-186\n\n",
"I have to agree with the OP 'wrong' dates really jar with my DD/MM/YYYY upbringing and I find ISO 8601 dates and times extremely easy to work with. For once the standard got it right and engtech has the obvious answer that doesn't require localisation.\nI was going to report the birthday input form on stack overflow as a bug because of how much of a sore thumb it is to the majority of the world.\n",
"Here is my current method. Any suggestions?\nRegex singleMToDoubleRegex = new Regex(\"(?<!m)m(?!m)\");\nRegex singleDToDoubleRegex = new Regex(\"(?<!d)d(?!d)\");\nCultureInfo currentCulture = CultureInfo.CurrentUICulture;\n\n// If the culture is netural there is no date pattern to use, so use the default.\nif (currentCulture.IsNeutralCulture)\n{\n currentCulture = CultureInfo.InvariantCulture;\n}\n\n// Massage the format into a more general user friendly form.\nstring shortDatePattern = CultureInfo.CurrentUICulture.DateTimeFormat.ShortDatePattern.ToLower();\nshortDatePattern = singleMToDoubleRegex.Replace(shortDatePattern, \"mm\");\nshortDatePattern = singleDToDoubleRegex.Replace(shortDatePattern, \"dd\");\n\n",
"The trouble with international standards is that pretty much noone uses them. I try where I can, but I am forced to use dd/mm/yyyy almost everywhere in real life, which means I am so used to it it's always a conscious process to use ISO-8601. For the majority of people who don't even try to use ISO-8601 it's even worse. If you can internationalize where you can, I think it's a great advantage.\n",
"How about giving the format (mm/dd/yyyy or dd/mm/yyyy) followed by a printout of today's date in the user's culture. MSDN has an article on formatting a DateTime for the person's culture, using the CultureInfo object that might be helpful in doing this. A combination of the format (which most people are familiar with) combined with the current date represented in that format should be enough of a clue to the person on how they should enter the date. (Also include a calendar control for those who still cant figure it out).\n",
"A short form is convenient and helps avoid spelling mistakes. Localize as applicable, but be sure to display the expected format (do not leave the user blind). Provide a date-picker control as an optional aide to filling in the field.\nAs an extra, on-the-fly parsing and display of the date in long form might help too.\n",
"Best option: I would instead recommend to use a standard date picker.\nAlternative: every time the content of the edit control changes, parse it and display (in a separate control?) the long format of the date (ie: input \"03/04/09\" display \"Your input: March 4, 2009\")\n"
] |
[
7,
4,
2,
2,
1,
1,
0
] |
[] |
[] |
[
".net",
"date",
"globalization",
"internationalization"
] |
stackoverflow_0000000761_.net_date_globalization_internationalization.txt
|
Q:
Apache and J2EE sharing security realms/logins, single sign-on
Here is the situation I'd like to create:
www.blah.com/priv - protected by Apache HTTP Basic Auth, realm "foo"
www.blah.com/application - protected by Tomcat/Servlet HTTP Basic Auth, realm "foo"
User accesses /priv, Apache requests login info, they provide it and are given access
Same user then requests /application. Since they have authenticated to the "foo" realm in the previous step, I would like them to be let in directly.
If another user accesses /application without first going to /priv, Tomcat requires authentication (and then they could also later access /priv without having to re-authenticate)
Basically, I want apache and tomcat to share authentication realms and, ideally, user databases.
How could this be best achieved?
A:
Have you already tried to do this and failed? I ask because HTTP Basic authentication takes place purely by adding an HTTP header to a request; that is to say, once you're authenticated against a given realm on a given server, your browser adds an additional header to your request (e.g., "Authorization: Basic amxldmludnskZXZsaW4="), and the server acknowledges that you're authenticated because of that header. So given your example, and given some ad-hoc testing I just did, I suspect that the setup you describe will just work without any additional effort on your part.
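For reference, a minimal sketch of the two configurations the question implies; the file paths, role name and user store are placeholders, and in practice both servers would need to check the same user database for the shared credentials to make sense.
Apache (httpd.conf):
<Location /priv>
    AuthType Basic
    AuthName "foo"
    AuthUserFile /etc/httpd/users
    Require valid-user
</Location>
Tomcat (the application's web.xml):
<security-constraint>
    <web-resource-collection>
        <web-resource-name>application</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>user</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>foo</realm-name>
</login-config>
<security-role>
    <role-name>user</role-name>
</security-role>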
|
Apache and J2EE sharing security realms/logins, single sign-on
|
Here is the situation I'd like to create:
www.blah.com/priv - protected by Apache HTTP Basic Auth, realm "foo"
www.blah.com/application - protected by Tomcat/Servlet HTTP Basic Auth, realm "foo"
User accesses /priv, Apache requests login info, they provide it and are given access
Same user then requests /application. Since they have authenticated to the "foo" realm in the previous step, I would like them to be let in directly.
If another user accesses /application without first going to /priv, Tomcat requires authentication (and then they could also later access /priv without having to re-authenticate)
Basically, I want apache and tomcat to share authentication realms and, ideally, user databases.
How could this be best achieved?
|
[
"Have you already tried to do this and failed? I ask because HTTP Basic authentication takes place purely by adding an HTTP header to a request; that is to say, once you're authenticated against a given realm on a given server, your browser adds an additional header to your request (e.g., \"Authorization: Basic amxldmludnskZXZsaW4=\"), and the server acknowledges that you're authenticated because of that header. So given your example, and given some ad-hoc testing I just did, I suspect that the setup you describe will just work without any additional effort on your part.\n"
] |
[
2
] |
[] |
[] |
[
"apache",
"authentication",
"jakarta_ee",
"java",
"tomcat"
] |
stackoverflow_0000102778_apache_authentication_jakarta_ee_java_tomcat.txt
|
Q:
Width issue with Ext.Panel bbar in IE 6
I've just run into a display glitch in IE6 with the ExtJS framework. - Hopefully someone can point me in the right direction.
In the following example, the bbar for the panel is displayed 2ems narrower than the panel it is attached to (it's left aligned) in IE6, whereas in Firefox it is displayed as the same width as the panel.
Can anyone suggest how to fix this?
I seem to be able to work around it either by specifying the width of the panel in ems or the padding in pixels, but I assume it would be expected to work as I have it below.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<link rel="stylesheet" type="text/css" href="ext/resources/css/ext-all.css"/>
<script type="text/javascript" src="ext/ext-base.js"></script>
<script type="text/javascript" src="ext/ext-all-debug.js"></script>
<script type="text/javascript">
Ext.onReady(function(){
var main = new Ext.Panel({
renderTo: 'content',
bodyStyle: 'padding: 1em;',
width: 500,
html: "Alignment issue in IE - The bbar's width is 2ems less than the main panel in IE6.",
bbar: [
"->",
{id: "continue", text: 'Continue'}
]
});
});
</script>
</head>
<body>
<div id="content"></div>
</body>
</html>
A:
Maybe you should try to force the width of the bbar:
main.getBottomToolbar().setWidth(500)
right after Panel creation?
But I think the problem is that the bbar is rendered into the inner div of the panel, so different browsers interpret the outer padding differently.
Also you can try to set padding of the bbar to -1em.
A:
The problem comes from the custom bodyStyle padding. It makes the panel content larger, but not the toolbar.
One possible solution is to further nest an Ext panel, like:
var main = new Ext.Panel({
renderTo: 'content',
width: 500,
items: {
bodyStyle: 'padding: 1em;',
border: false,
html: "Now alignment is fine."
},
bbar: [
"->",
{id: "continue", text: 'Continue'}
]
});
The border: false is needed to avoid double bordering.
|
Width issue with Ext.Panel bbar in IE 6
|
I've just run into a display glitch in IE6 with the ExtJS framework. - Hopefully someone can point me in the right direction.
In the following example, the bbar for the panel is displayed 2ems narrower than the panel it is attached to (it's left aligned) in IE6, whereas in Firefox it is displayed as the same width as the panel.
Can anyone suggest how to fix this?
I seem to be able to work around it either by specifying the width of the panel in ems or the padding in pixels, but I assume it would be expected to work as I have it below.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<link rel="stylesheet" type="text/css" href="ext/resources/css/ext-all.css"/>
<script type="text/javascript" src="ext/ext-base.js"></script>
<script type="text/javascript" src="ext/ext-all-debug.js"></script>
<script type="text/javascript">
Ext.onReady(function(){
var main = new Ext.Panel({
renderTo: 'content',
bodyStyle: 'padding: 1em;',
width: 500,
html: "Alignment issue in IE - The bbar's width is 2ems less than the main panel in IE6.",
bbar: [
"->",
{id: "continue", text: 'Continue'}
]
});
});
</script>
</head>
<body>
<div id="content"></div>
</body>
</html>
|
[
"Maybe you should try to force the width of the bbar:\nmain.getBottomToolbar().setWidth(500)\n\nright after Panel creation?\nBut I think the problem is that bbar is rendered into inner div of the panel, so different browsers interpret outer padding differently.\nAlso you can try to set padding of the bbar to -1em.\n",
"The problem comes from the custom bodyStyle padding. It makes the panel content larger, but not the toolbar.\nOne possible solution is to further nest an Ext panel, like:\n var main = new Ext.Panel({\n renderTo: 'content',\n width: 500,\n items: {\n bodyStyle: 'padding: 1em;',\n border: false,\n html: \"Now alignment is fine.\"\n },\n bbar: [\n \"->\",\n {id: \"continue\", text: 'Continue'}\n ]\n });\n\nThe border: false is needed to avoid double bordering.\n"
] |
[
1,
1
] |
[] |
[] |
[
"extjs",
"internet_explorer",
"javascript"
] |
stackoverflow_0000090181_extjs_internet_explorer_javascript.txt
|
Q:
What is a good CI build-process
What constitutes a good CI build-process?
We use CI, but is deployment to production even a realistic CI goal when you have dependencies on several services that should be deployed too, and other apps may depend on these as well?
Is a good CI build process good enough when it's automated up to QA and manual from there?
A:
Well "it depends" :)
We use our CI system to:
build & unit test
deploy to a single box, run integration tests and code analysis
deploy to lab environment
run acceptance tests in prod-like system
drop builds that pass to code drop for prod deployment
This is for a greenfield project of about a dozen services and databases deployed to 20+ servers, that also had dependencies on half a dozen other 'external' services.
Using a CI tool to deploy your product to a production environment as a realistic goal? again... "it depends"
Why would you want to do this?
if you have the process you can roll changes (and roll back) faster and more often
less chance for human error
you can test the same deployment strategy in a test environment before going to production and catch issues earlier
Some technical things you have to address before you can answer this:
what are the uptime requirements for your system -- are you allowed to have downtime, or does it need to be up 24/7?
do you have change control processes in place that require human intervention/approval?
is your deployment robust enough for any component to roll back to a known-good state if a deployment fails?
is your system designed to handle different versions of services or clients in case one or several component deployments fails (and you have the above rollback to last known good)?
does the process have the smarts to handle a partial deployment where a component cannot handle mixed versions of its dependencies/clients?
how are you handling database deployment/upgrades?
do you have monitoring in place so you know when something goes wrong?
Here are a couple of recent related links about automation and building the tools you need.
When it comes down to it, the more complex your system, the more difficult it is to automate everything. That does not mean it is not a worthy goal; it just takes a lot more effort and willpower to get it done -- everything from knowing the difficulties you're going to face, to the problems you have to account for (failure will happen), to the political challenges of building infrastructure (vs. more product features).
Now here's the big secret... the technical challenges are challenging but not impossible... the political challenges may be insurmountable. Everything about this costs money, whether it's dev time or buying 3rd party solutions. So really, can you build the $1K, $10K, $100K, or $1M solution?
Whatever solution you go for make sure the automation is robust first, complete second... i.e. make sure you have as robust a solution as you can for getting deployment to a test environment rather than a fragile solution that deploys to production.
A:
CI is not intended as a deployment mechanism. It is good to have your CI execute any automated deployment to a QA/Test server, to ensure those aspects of your build work, but I would not use a CI system like Cruise Control or Bamboo as the means of deployment.
CI is for building the codebase periodically to automate execution of automated tests, verification of the codebase via static analysis and other checks of that nature.
A:
Be sure you understand the idea behind a CI build. CI stands for Continuous Integration and CI builds are really intended to be throw-away builds that are performed when a developer checks code in to the source control system (or at some specified interval) to ensure that the newest changes do not break the code base (hence the idea of continuously integrating the changes to the code base).
To that end, the technology used for the actual build server process is largely irrelevant compared to what actually happens during the build. As @pdavis mentioned, the CI build should compile the code base, execute some code analysis (FxCop, StyleCop, Lint, etc.), execute unit tests and code coverage, and execute any other custom analysis you want performed that should impact the concept of a "successful" or "failed" build.
Having a CI build automatically deploy to an environment really doesn't fall under the control of a build server. That being said, you can always create a separate project that runs on the build server that handles the deployment when it detects certain conditions (such as a build completing successfully), but that should always be done as a completely independent thing.
A:
I am starting on a new project at work that I am really looking forward to. We are still in the initial design stage and have just recently completed the Logical System Architecture. We have ordered new servers for the testing and staging environments and are setting up a Continuous Integration (CI) build system based on Cruise Control (http://cruisecontrol.sourceforge.net/) and MSBuild (http://msdn2.microsoft.com/en-us/library/wea2sca5.aspx), which is basically an improved port of ANT. It appears that Visual Studio 2005 project and solution files are all now in MSBuild format. Cruise Control will be automatically pulling the source from Visual Source Safe (ok, it isn't Subversion but we can deal), compiling it, and then running it through fxCop (http://www.gotdotnet.com/Team/FxCop/), nUnit (http://www.nunit.org/), nCover (http://ncover.org/site/), and last but not least Simian (http://www.redhillconsulting.com.au/products/simian/). Cruise Control has a pretty good website interface for displaying all of the compiled results from the various tools and can even display code changes from one build to the next. It also keeps track of all builds in a build history. I'm looking forward to the test driven development and think that this type of approach combined with nUnit/nCover should give us a pretty good idea, before we roll out changes, that we haven't broken anything. There are also plans to incorporate some type of automated user interface testing once we are far enough along in the project. Depending on the tool, this should be just a matter of installing the tool on the build server and calling it from Cruise Control. Sweet.
A:
A good CI process will have full or nearly-full unit test coverage. Unit tests test classes and methods, vs. integration tests, which test multiple parts of the system. When you set up your CI builds, have them automate the unit tests. That way, the CI builds can run multiple times per day. We have ours set to run every 2 hours.
You can have longer running builds that run once per day. These can use other services and run integration tests.
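For anyone unfamiliar with the distinction, a unit test in this sense is just a small, fast check with no external dependencies that the CI build can run on every commit. A minimal NUnit-style sketch (the Calculator class is invented purely for illustration):
using NUnit.Framework;

// Hypothetical class under test.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        // Fast, isolated, and safe to run on every CI build.
        Assert.AreEqual(5, new Calculator().Add(2, 3));
    }
}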
A:
I was watching a ThoughtWorks presentation (creators of Cruise Control) and they actually addressed this issue. Their answer is that NO deployment is too complex to test. Why? Because otherwise, your customers become your testers, which is exactly where you don't want to be.
If you have a complex deployment structure, set up a virtualization server. Have it pretend to be all the systems you need to talk to. They can always start in a known good state, because you can reset to a clean image.
To answer your initial question, a good process is one which enables communication between the repository and the developers. If the repository is in a bad state (non-compiling code, failed tests, etc.), the developers know about it as soon as possible, so that they can correct it.
A:
The later a bug is discovered, the costlier it is to fix. So bugs should be discovered as early as possible. This is the motivation behind CI.
A good CI should ensure catching as many bugs as possible. The whole application comprises code (often in multiple languages), database schema, deployment files, etc. Errors in any of these can cause bugs - so the CI should try to exercise as many of them as possible.
CI does not replace a proper QA discipline. Also, CI need not be very comprehensive on day one of the project. One can start with a simple CI process that does basic compilation & unit testing initially. As you discover more classes of bugs in QA, you should adapt the CI process to try to catch future occurrences of those bugs. It can also involve static code-analysis checks, so that you can implement consistent coding and design ideals across the codebase.
|
What is a good CI build-process
|
What constitutes a good CI build-process?
We use CI, but is deployment to production even a realistic CI goal when you have dependencies on several services that should be deployed too, and other apps may depend on these as well?
Is a good CI build process good enough when it's automated up to QA and manual from there?
|
[
"Well \"it depends\" :)\nWe use our CI system to:\n\nbuild & unit test\ndeploy to single box, run intergration tests and code analisys\ndeploy to lab environment\nrun acceptance tests in prod-like system\ndrop builds that pass to code drop for prod deployment\n\nThis is for a greenfield project of about a dozen services and databases deployed to 20+ servers, that also had dependencies on half a dozen other 'external' services.\nUsing a CI tool to deploy your product to a production environment as a realistic goal? again... \"it depends\"\nWhy would you want to do this?\n\nif you have the process you can roll changes (and roll back) faster and more often\nless chance for human error\nyou can test the same deployment strategy in a test environment before going to production and catch issues earlier \n\nSome technical things you have to address before you can answer this:\n\nwhat is the uptime requirements for your system -- Are you allowed to have downtime or does it need to be up 24/7?\ndo you have change control processes in place that require human intervention/approval?\nis your deployment robust enough for any component to roll back to a known-good state if a deployment fails?\nis your system designed to handle different versions of services or clients in case one or several component deployments fails (and you have the above rollback to last known good)?\ndoes the process have the smarts to handle a partial deployment where a component cannot handle mixed versions of its dependencies/clients?\nhow are you handing database deployment/upgrades? \ndo you have monitoring in place so you know when something goes wrong?\n\nHere are a couple of recent related links about automation and building the tools you need.\nWhen it comes down to it the more complex your system the more difficult it is do automate everything, but that does not mean it is not a worthy goal, it just takes a lot more effort and willpower to get it done -- everything from knowing the difficulties you're going to face, the problems you have to account for (failure will happen), the political challenges of building infrastructure (vs. more product features).\nNow heres the big secret... the technical challenges are challenging but not impossible... the political challenges may be insurmountable. Everything about this costs money whether its dev time or buying 3rd party solutions. So really, can you build the $1K, $10K, $100K, or $1M solution? \nWhatever solution you go for make sure the automation is robust first, complete second... i.e. make sure you have as robust a solution as you can for getting deployment to a test environment rather than a fragile solution that deploys to production. \n",
"CI is not intended as a deployment mechanism. It is good to have your CI execute any automated deployment to a QA/Test server, to ensure those aspects of your build work, but I would not use a CI system like Cruise Control or Bamboo as the means of deployment.\nCI is for building the codebase periodically to automate execution of automated tests, verification of the codebase via static analysis and other checks of that nature.\n",
"Be sure you understand the idea behind a CI build. CI stands for Continuous Integration and CI builds are really intended to be throw-away builds that are performed when a developer checks code in to the source control system (or at some specified interval) to ensure that the newest changes do not break the code base (hence the idea of continuously integrating the changes to the code base).\nTo that end, the technology used for the actual build server process is largely irrelevant compared to what actually happens during the build. As @pdavis mentioned, the CI build should compile the code base, execute some code analysis (FxCop, StyleCop, Lint, etc.), execute unit tests and code coverage, and execute any other custom analysis you want performed that should impact the concept of a \"successful\" or \"failed\" build.\nHaving a CI build automatically deploy to an environment really doesn't fall under the control of a build server. That being said, you can always create a separate project that runs on the build server that handles the deployment when it detects certain conditions (such as a build completes successfuly), but that should always be done as a completely independent thing.\n",
"I am starting on a new project at work that I am really looking forward to. We are still in the initial design stage and have just recently completed the Logical System Architecture. We have ordered new servers for the testing and staging environments and are setting up a Continuous Integration (CI) build system based on Cruise Control (http://cruisecontrol.sourceforge.net/) and MSBuild (http://msdn2.microsoft.com/en-us/library/wea2sca5.aspx) which is basically an improved port of ANT. It appears that Visual Studio 2005 project and solution files are all now in MSBuild format. Cruise Control will be automatically pulling the source from Visual Source Safe (ok, it isn't Subversion but we can deal), compiling it, and then running it through fxCop (http://www.gotdotnet.com/Team/FxCop/), nUnit (http://www.nunit.org/), nCover (http://ncover.org/site/), and last but not lease Simian (http://www.redhillconsulting.com.au/products/simian/). Cruise Control has a pretty good website interface for displaying all of the compiled results from the various tools and can even display code changes from one build to the next. It also keeps track of all builds in a build history. I'm looking forward to the test driven development and think that this type of approach combined with nUnit/nCover should give us a pretty good idea before we roll out changes that we haven't broken anything. There are also plans to incorporate some type of automated user interface testing once we are far enough along in the project. Depending on the tool, this should be just a matter of installing the tool on the build server and calling it from Cruise Control. Sweet.\n",
"A good CI process will have full or nearly-full unit test coverage. Unit tests test classes and methods, vs. integration tests, which test multiple parts of the system. When you set up your CI builds, have them automate the unit tests. That way, the CI builds can run multiple times per day. We have ours set to run every 2 hours.\nYou can have longer running builds that run once per day. These can use other services and run integration tests.\n",
"I was watching a ThoughtWorks presentation (creators of Cruise Control) and they actually addressed this issue. Their answer is that NO deployment is too complex to test. Why? Because otherwise, your customers become your testers, which is exactly where you don't want to be.\nIf you have a complex deployment structure, set up a visualization server. Have it pretend to be all the systems you need to talk to. They can always start in a known good state, because you can reset to a clean image.\nTo answer your initial question, a good process is one which enables communication between the repository and the developers. If the repository is in a bad state (non-compiling code, failed tests, etc.), the developers know about it as soon as possible, so that they can correct it.\n",
"The later a bug is discovered, the costlier it is to fix. So bugs should be discovered as early as possible. This is the motivation behind CI.\nA good CI should ensure catching as many bugs as possible. The whole application comprises of code (often in multiple languages), Database schema, deployment files etc. Errors in any of these can cause bugs - so the CI should try to exercise as many of them as possible.\nCI does not replace a proper QA discipline. Also, CI need not be very comprehensive on day one of the project. One can start with a simple CI process that does basic compilation & unit testing initially. As you discover more classes of bugs in QA, you should adapt the CI process to try to catch future occurrences of those bugs. It can also involve static code-analysis checks, so that you can implement consistent coding and design ideals across the codebase.\n"
] |
[
14,
4,
3,
2,
2,
1,
1
] |
[] |
[] |
[
"build_automation",
"build_process",
"continuous_integration"
] |
stackoverflow_0000102902_build_automation_build_process_continuous_integration.txt
|
Q:
Embedding one dll inside another as an embedded resource and then calling it from my code
I've got a situation where I have a DLL I'm creating that uses another third party DLL, but I would prefer to be able to build the third party DLL into my DLL instead of having to keep them both together if possible.
This is with C# and .NET 3.5.
The way I would like to do this is by storing the third party DLL as an embedded resource which I then place in the appropriate place during execution of the first DLL.
The way I originally planned to do this is by writing code to put the third party DLL in the location specified by System.Reflection.Assembly.GetExecutingAssembly().Location.ToString()
minus the last /nameOfMyAssembly.dll. I can successfully save the third party .DLL in this location (which ends up being
C:\Documents and Settings\myUserName\Local Settings\Application
Data\assembly\dl3\KXPPAX6Y.ZCY\A1MZ1499.1TR\e0115d44\91bb86eb_fe18c901
), but when I get to the part of my code requiring this DLL, it can't find it.
Does anybody have any idea as to what I need to be doing differently?
A:
Once you've embedded the third-party assembly as a resource, add code to subscribe to the AppDomain.AssemblyResolve event of the current domain during application start-up. This event fires whenever the Fusion sub-system of the CLR fails to locate an assembly according to the probing (policies) in effect. In the event handler for AppDomain.AssemblyResolve, load the resource using Assembly.GetManifestResourceStream and feed its content as a byte array into the corresponding Assembly.Load overload. Below is how one such implementation could look like in C#:
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
var resName = args.Name + ".dll";
var thisAssembly = Assembly.GetExecutingAssembly();
using (var input = thisAssembly.GetManifestResourceStream(resName))
{
return input != null
? Assembly.Load(StreamToBytes(input))
: null;
}
};
where StreamToBytes could be defined as:
static byte[] StreamToBytes(Stream input)
{
var capacity = input.CanSeek ? (int) input.Length : 0;
using (var output = new MemoryStream(capacity))
{
int readLength;
var buffer = new byte[4096];
do
{
readLength = input.Read(buffer, 0, buffer.Length);
output.Write(buffer, 0, readLength);
}
while (readLength != 0);
return output.ToArray();
}
}
Finally, as a few have already mentioned, ILMerge may be another option to consider, albeit somewhat more involved.
A:
In the end I did it almost exactly the way raboof suggested (and similar to what dgvid suggested), except with some minor changes and some omissions fixed. I chose this method because it was closest to what I was looking for in the first place and didn't require using any third party executables and such. It works great!
This is what my code ended up looking like:
EDIT: I decided to move this function to another assembly so I could reuse it in multiple files (I just pass in Assembly.GetExecutingAssembly()).
This is the updated version which allows you to pass in the assembly with the embedded dlls.
embeddedResourcePrefix is the string path to the embedded resource, it will usually be the name of the assembly followed by any folder structure containing the resource (e.g. "MyComapny.MyProduct.MyAssembly.Resources" if the dll is in a folder called Resources in the project). It also assumes that the dll has a .dll.resource extension.
public static void EnableDynamicLoadingForDlls(Assembly assemblyToLoadFrom, string embeddedResourcePrefix) {
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { // had to add =>
try {
string resName = embeddedResourcePrefix + "." + args.Name.Split(',')[0] + ".dll.resource";
using (Stream input = assemblyToLoadFrom.GetManifestResourceStream(resName)) {
return input != null
? Assembly.Load(StreamToBytes(input))
: null;
}
} catch (Exception ex) {
_log.Error("Error dynamically loading dll: " + args.Name, ex);
return null;
}
}; // Had to add colon
}
private static byte[] StreamToBytes(Stream input) {
int capacity = input.CanSeek ? (int)input.Length : 0;
using (MemoryStream output = new MemoryStream(capacity)) {
int readLength;
byte[] buffer = new byte[4096];
do {
readLength = input.Read(buffer, 0, buffer.Length); // had to change to buffer.Length
output.Write(buffer, 0, readLength);
}
while (readLength != 0);
return output.ToArray();
}
}
A:
There's a tool called IlMerge that can accomplish this: http://research.microsoft.com/~mbarnett/ILMerge.aspx
Then you can just make a build event similar to the following.
Set Path="C:\Program Files\Microsoft\ILMerge"
ilmerge /out:$(ProjectDir)\Deploy\LevelEditor.exe $(ProjectDir)\bin\Release\release.exe $(ProjectDir)\bin\Release\InteractLib.dll $(ProjectDir)\bin\Release\SpriteLib.dll $(ProjectDir)\bin\Release\LevelLibrary.dll
A:
I've had success doing what you are describing, but because the third-party DLL is also a .NET assembly, I never write it out to disk, I just load it from memory.
I get the embedded resource assembly as a byte array like so:
Assembly resAssembly = Assembly.LoadFile(assemblyPathName);
byte[] assemblyData;
using (Stream stream = resAssembly.GetManifestResourceStream(resourceName))
{
assemblyData = ReadBytesFromStream(stream);
stream.Close();
}
Then I load the data with Assembly.Load().
Finally, I add a handler to AppDomain.CurrentDomain.AssemblyResolve to return my loaded assembly when the type loader looks for it.
See the .NET Fusion Workshop for additional details.
A:
You can achieve this remarkably easily using NETZ, a .NET executables compressor & packer.
A:
Instead of writing the assembly to disk you can try to do Assembly.Load(byte[] rawAssembly) where you create rawAssembly from the embedded resource.
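A compact sketch of that suggestion; the resource name is a placeholder, and since manifest resource streams report their length, a BinaryReader is enough to pull out the raw bytes (this assumes the named resource actually exists in the assembly).
using System.IO;
using System.Reflection;

static class EmbeddedAssemblyLoader
{
    // resourceName would be something like "MyAssembly.ThirdParty.dll" (hypothetical).
    public static Assembly LoadEmbedded(string resourceName)
    {
        Assembly executing = Assembly.GetExecutingAssembly();
        using (Stream stream = executing.GetManifestResourceStream(resourceName))
        using (BinaryReader reader = new BinaryReader(stream))
        {
            byte[] rawAssembly = reader.ReadBytes((int)stream.Length);
            return Assembly.Load(rawAssembly);
        }
    }
}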
|
Embedding one dll inside another as an embedded resource and then calling it from my code
|
I've got a situation where I have a DLL I'm creating that uses another third party DLL, but I would prefer to be able to build the third party DLL into my DLL instead of having to keep them both together if possible.
This is with C# and .NET 3.5.
The way I would like to do this is by storing the third party DLL as an embedded resource which I then place in the appropriate place during execution of the first DLL.
The way I originally planned to do this is by writing code to put the third party DLL in the location specified by System.Reflection.Assembly.GetExecutingAssembly().Location.ToString()
minus the last /nameOfMyAssembly.dll. I can successfully save the third party .DLL in this location (which ends up being
C:\Documents and Settings\myUserName\Local Settings\Application
Data\assembly\dl3\KXPPAX6Y.ZCY\A1MZ1499.1TR\e0115d44\91bb86eb_fe18c901
), but when I get to the part of my code requiring this DLL, it can't find it.
Does anybody have any idea as to what I need to be doing differently?
|
[
"Once you've embedded the third-party assembly as a resource, add code to subscribe to the AppDomain.AssemblyResolve event of the current domain during application start-up. This event fires whenever the Fusion sub-system of the CLR fails to locate an assembly according to the probing (policies) in effect. In the event handler for AppDomain.AssemblyResolve, load the resource using Assembly.GetManifestResourceStream and feed its content as a byte array into the corresponding Assembly.Load overload. Below is how one such implementation could look like in C#:\nAppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>\n{\n var resName = args.Name + \".dll\"; \n var thisAssembly = Assembly.GetExecutingAssembly(); \n using (var input = thisAssembly.GetManifestResourceStream(resName))\n {\n return input != null \n ? Assembly.Load(StreamToBytes(input))\n : null;\n }\n};\n\nwhere StreamToBytes could be defined as:\nstatic byte[] StreamToBytes(Stream input) \n{\n var capacity = input.CanSeek ? (int) input.Length : 0;\n using (var output = new MemoryStream(capacity))\n {\n int readLength;\n var buffer = new byte[4096];\n\n do\n {\n readLength = input.Read(buffer, 0, buffer.Length);\n output.Write(buffer, 0, readLength);\n }\n while (readLength != 0);\n\n return output.ToArray();\n }\n}\n\nFinally, as a few have already mentioned, ILMerge may be another option to consider, albeit somewhat more involved. \n",
"In the end I did it almost exactly the way raboof suggested (and similar to what dgvid suggested), except with some minor changes and some omissions fixed. I chose this method because it was closest to what I was looking for in the first place and didn't require using any third party executables and such. It works great!\nThis is what my code ended up looking like:\nEDIT: I decided to move this function to another assembly so I could reuse it in multiple files (I just pass in Assembly.GetExecutingAssembly()). \nThis is the updated version which allows you to pass in the assembly with the embedded dlls. \nembeddedResourcePrefix is the string path to the embedded resource, it will usually be the name of the assembly followed by any folder structure containing the resource (e.g. \"MyComapny.MyProduct.MyAssembly.Resources\" if the dll is in a folder called Resources in the project). It also assumes that the dll has a .dll.resource extension.\n public static void EnableDynamicLoadingForDlls(Assembly assemblyToLoadFrom, string embeddedResourcePrefix) {\n AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { // had to add =>\n try {\n string resName = embeddedResourcePrefix + \".\" + args.Name.Split(',')[0] + \".dll.resource\";\n using (Stream input = assemblyToLoadFrom.GetManifestResourceStream(resName)) {\n return input != null\n ? Assembly.Load(StreamToBytes(input))\n : null;\n }\n } catch (Exception ex) {\n _log.Error(\"Error dynamically loading dll: \" + args.Name, ex);\n return null;\n }\n }; // Had to add colon\n }\n\n private static byte[] StreamToBytes(Stream input) {\n int capacity = input.CanSeek ? (int)input.Length : 0;\n using (MemoryStream output = new MemoryStream(capacity)) {\n int readLength;\n byte[] buffer = new byte[4096];\n\n do {\n readLength = input.Read(buffer, 0, buffer.Length); // had to change to buffer.Length\n output.Write(buffer, 0, readLength);\n }\n while (readLength != 0);\n\n return output.ToArray();\n }\n }\n\n",
"There's a tool called IlMerge that can accomplish this: http://research.microsoft.com/~mbarnett/ILMerge.aspx\nThen you can just make a build event similar to the following.\nSet Path=\"C:\\Program Files\\Microsoft\\ILMerge\"\nilmerge /out:$(ProjectDir)\\Deploy\\LevelEditor.exe $(ProjectDir)\\bin\\Release\\release.exe $(ProjectDir)\\bin\\Release\\InteractLib.dll $(ProjectDir)\\bin\\Release\\SpriteLib.dll $(ProjectDir)\\bin\\Release\\LevelLibrary.dll\n",
"I've had success doing what you are describing, but because the third-party DLL is also a .NET assembly, I never write it out to disk, I just load it from memory.\nI get the embedded resource assembly as a byte array like so:\n Assembly resAssembly = Assembly.LoadFile(assemblyPathName);\n\n byte[] assemblyData;\n using (Stream stream = resAssembly.GetManifestResourceStream(resourceName))\n {\n assemblyData = ReadBytesFromStream(stream);\n stream.Close();\n }\n\nThen I load the data with Assembly.Load().\nFinally, I add a handler to AppDomain.CurrentDomain.AssemblyResolve to return my loaded assembly when the type loader looks it.\nSee the .NET Fusion Workshop for additional details.\n",
"You can achieve this remarkably easily using Netz, a .net NET Executables Compressor & Packer.\n",
"Instead of writing the assembly to disk you can try to do Assembly.Load(byte[] rawAssembly) where you create rawAssembly from the embedded resource.\n"
] |
[
44,
21,
13,
9,
8,
2
] |
[] |
[] |
[
".net_3.5",
"c#",
"dll"
] |
stackoverflow_0000096732_.net_3.5_c#_dll.txt
|
Q:
Can the NetBeans code formatter be made to format javadoc comments?
The NetBeans 6.1 editor doesn't seem to like to wrap comments, and the code formatter seems to ignore them. For JavaDoc comments, this behaviour seems inappropriate, as you can end up spending a lot of wasted time manually reflowing paragraphs.
I was wondering if there's some magic setting to get the builtin code formatter, or the editor to wrap/reflow javadoc comments?
A:
This issue has been raised to the Netbeans development team and will likely be added in a "future" release of Netbeans. If you want this feature (or any other feature) to be added to the IDE, go to the issue tracking website and vote for this feature.
http://www.netbeans.org/issues/show_bug.cgi?id=11553
Most open-source products use the votes on their issue tracking systems to determine where to allocate resources for the next release.
A:
I'm fairly sure that you can't do this. Comments are not code, and javadoc comments are not exactly plain text either, as they're intended to be output as HTML.
Maybe write your own plugin for this?
|
Can the NetBeans code formatter be made to format javadoc comments?
|
The NetBeans 6.1 editor doesn't seem to like to wrap comments, and the code formatter seems to ignore them. For JavaDoc comments, this behaviour seems inappropriate, as you can end up spending a lot of wasted time manually reflowing paragraphs.
I was wondering if there's some magic setting to get the builtin code formatter, or the editor to wrap/reflow javadoc comments?
|
[
"This issue has been raised to the Netbeans development team and will likely be added in a \"future\" release of Netbeans. If you want this feature (or any other feature) to be added to the IDE, go to the issue tracking website and vote for this feature.\nhttp://www.netbeans.org/issues/show_bug.cgi?id=11553\nMost open-source products use the votes on their issue tracking systems to determine where to allocate resources for the next release.\n",
"I'm fairly sure that you can't do this. Comments are not code and javadoc comments are not exactly plain text either as they're intended to be HTML outputted.\nMaybe write your own plugin for this?\n"
] |
[
1,
0
] |
[] |
[] |
[
"formatting",
"java",
"javadoc",
"netbeans"
] |
stackoverflow_0000098856_formatting_java_javadoc_netbeans.txt
|
Q:
Should rails models be concerned with other models for the sake of skinny controllers?
I read everywhere that business logic belongs in the models and not in the controller, but where is the limit?
I am toying with a personal accounting application.
Account
Entry
Operation
When creating an operation, it is only valid if the corresponding entries are created and linked to accounts so that the operation is balanced, for example buying a 6-pack:
o=Operation.new({:description=>"b33r", :user=>current_user, :date=>"2008/09/15"})
o.entries.build({:account_id=>1, :amount=>15})
o.valid? #=>false
o.entries.build({:account_id=>2, :amount=>-15})
o.valid? #=>true
Now the form shown to the user in the case of basic operations is simplified to hide away the entry details; the accounts are selected among 5 defaults by the kind of operation requested by the user (initialise account -> equity to account, spend assets->expenses, earn revenues->assets, borrow liabilities->assets, pay debt assets->liabilities ...). I want the entries created from default values.
I also want to be able to create more complex operations (more than 2 entries). For this second use case I will have a different form where the additional complexity is exposed. This second use case prevents me from including a debit and credit field on the Operation and getting rid of the Entry link.
Which is the best form? Using the above code in a SimpleOperationController as I do for the moment, or defining a new method on the Operation class so I can call Operation.new_simple_operation(params[:operation])?
Isn't it breaking the separation of concerns to actually create and manipulate Entry objects from the Operation class?
I am not looking for advice on my twisted accounting principles :)
edit -- It seems I didn't express myself too clearly.
I am not so concerned about the validation. I am more concerned about where the creation logic code should go :
assuming the operation on the controller is called spend, when using spend, the params hash would contain : amount, date, description. Debit and credit accounts would be derived from the action which is called, but then I have to create all the objects. Would it be better to have
#error and transaction handling is left out for the sake of clarity
def spend
amount=params[:operation].delete(:amount)#remove non existent Operation attribute
op=Operation.new(params[:operation])
#select accounts in some way
...
#build entries
op.entries.build(...)
op.entries.build(...)
op.save
end
or to create a method on Operation that would make the above look like
def spend
op=Operation.new_simple_operation(params)
op.save
end
this definitely gives a much thinner controller and a fatter model, but then the model will create and store instances of other models, which is where my problem is.
A:
but then the model will create and store instances of other models which is where my problem is.
What is wrong with this?
If your 'business logic' states that an Operation must have a valid set of Entries, then surely there is nothing wrong for the Operation class to know about, and deal with your Entry objects.
You'll only get problems if you take this too far, and have your models manipulating things they don't need to know about, like an EntryHtmlFormBuilder or whatever :-)
A:
Virtual Attributes (more info here and here) will help with this greatly. Passing the whole params back to the model keeps things simple in the controller. This will allow you to dynamically build your form and easily build the entries objects.
class Operation
has_many :entries
def entry_attributes=(entry_attributes)
entry_attributes.each do |entry|
entries.build(entry)
end
end
end
class OperationController < ApplicationController
def create
@operation = Operation.new(params[:operation])
if @operation.save
flash[:notice] = "Successfully saved operation."
redirect_to operations_path
else
render :action => 'new'
end
end
end
The save will fail if everything isn't valid. Which brings us to validation. Because each Entry stands alone and you need to check all entries at "creation" you should probably override validate in Operation:
class Operation
# methods from above
protected
def validate
total = 0
entries.each { |e| total += e.amount }
errors.add("entries", "unbalanced transfers") unless total == 0
end
end
Now you will get an error message telling the user that the amounts are off and they should fix the problem. You can get really fancy here and add a lot of value by being specific about the problem, like tell them how much they are off.
A:
It's easier to think in terms of each entity validating itself, and entities which depend on one another delegating their state to the state of their associated entries. In your case, for instance:
class Operation < ActiveRecord::Base
has_many :entries
validates_associated :entries
end
validates_associated will check whether each associated entity is valid (in this case, all entries should if the operation is to be valid).
It is very tempting to try to validate entire hierarchies of models as a whole, but as you said, the place where that would be most easily done is the controller, which should act more as a router of requests and responses than in dealing with business logic.
A:
The way I look at it is that the controller should reflect the end-user view and translate requests into model operations and reponses while also doing formatting. In your case there are 2 kinds of operations that represent simple operations with a default account/entry, and more complex operations that have user selected entries and accounts. The forms should reflect the user view (2 forms with different fields), and there should be 2 actions in the controller to match. The controller however should have no logic relating to how the data is manipulated, only how to receive and respond. I would have class methods on the Operation class that take in the proper data from the forms and creates one or more object as needed, or place those class methods on a support class that is not an AR model, but has business logic that crosses model boundaries. The advantage of the separate utility class is that it keeps each model focused on one purpose, the down side is that the utility classes have no defined place to live. I put them in lib/ but Rails does not specify a place for model helpers as such.
A:
If you are concerned about embedding this logic into any particular model, why not put them into an observer class, that will keep the logic for your creation of the associated items separate from the classes being observed.
|
Should rails models be concerned with other models for the sake of skinny controllers?
|
I read everywhere that business logic belongs in the models and not in the controller, but where is the limit?
I am toying with a personal accounting application.
Account
Entry
Operation
When creating an operation, it is only valid if the corresponding entries are created and linked to accounts so that the operation is balanced, for example buying a 6-pack:
o=Operation.new({:description=>"b33r", :user=>current_user, :date=>"2008/09/15"})
o.entries.build({:account_id=>1, :amount=>15})
o.valid? #=>false
o.entries.build({:account_id=>2, :amount=>-15})
o.valid? #=>true
Now the form shown to the user in the case of basic operations is simplified to hide away the entry details; the accounts are selected among 5 defaults by the kind of operation requested by the user (initialise account -> equity to account, spend assets->expenses, earn revenues->assets, borrow liabilities->assets, pay debt assets->liabilities ...). I want the entries created from default values.
I also want to be able to create more complex operations (more than 2 entries). For this second use case I will have a different form where the additional complexity is exposed. This second use case prevents me from including a debit and credit field on the Operation and getting rid of the Entry link.
Which is the best form? Using the above code in a SimpleOperationController as I do for the moment, or defining a new method on the Operation class so I can call Operation.new_simple_operation(params[:operation])?
Isn't it breaking the separation of concerns to actually create and manipulate Entry objects from the Operation class?
I am not looking for advice on my twisted accounting principles :)
edit -- It seems I didn't express myself too clearly.
I am not so concerned about the validation. I am more concerned about where the creation logic code should go :
assuming the operation on the controller is called spend, when using spend, the params hash would contain : amount, date, description. Debit and credit accounts would be derived from the action which is called, but then I have to create all the objects. Would it be better to have
#error and transaction handling is left out for the sake of clarity
def spend
amount=params[:operation].delete(:amount)#remove non existent Operation attribute
op=Operation.new(params[:operation])
#select accounts in some way
...
#build entries
op.entries.build(...)
op.entries.build(...)
op.save
end
or to create a method on Operation that would make the above look like
def spend
op=Operation.new_simple_operation(params)
op.save
end
this definitely gives a much thinner controller and a fatter model, but then the model will create and store instances of other models, which is where my problem is.
|
[
"\nbut then the model will create and store instances of other models which is where my problem is.\n\nWhat is wrong with this? \nIf your 'business logic' states that an Operation must have a valid set of Entries, then surely there is nothing wrong for the Operation class to know about, and deal with your Entry objects.\nYou'll only get problems if you take this too far, and have your models manipulating things they don't need to know about, like an EntryHtmlFormBuilder or whatever :-)\n",
"Virtual Attributes (more info here and here) will help with this greatly. Passing the whole params back to the model keeps things simple in the controller. This will allow you to dynamically build your form and easily build the entries objects.\nclass Operation\n has_many :entries\n\n def entry_attributes=(entry_attributes)\n entry_attributes.each do |entry|\n entries.build(entry)\n end\n end\n\nend\n\nclass OperationController < ApplicationController\n def create\n @operation = Operation.new(params[:opertaion])\n if @operation.save\n flash[:notice] = \"Successfully saved operation.\"\n redirect_to operations_path\n else\n render :action => 'new'\n end\n end\nend\n\nThe save will fail if everything isn't valid. Which brings us to validation. Because each Entry stands alone and you need to check all entries at \"creation\" you should probably override validate in Operation:\nclass Operation\n # methods from above\n protected\n def validate\n total = 0\n entries.each { |e| t += e.amount }\n errors.add(\"entries\", \"unbalanced transfers\") unless total == 0\n end\nend\n\nNow you will get an error message telling the user that the amounts are off and they should fix the problem. You can get really fancy here and add a lot of value by being specific about the problem, like tell them how much they are off.\n",
"It's easier to think in terms of each entity validating itself, and entities which depend on one another delegating their state to the state of their associated entries. In your case, for instance:\nclass Operation < ActiveRecord::Base\n has_many :entries\n validates_associated :entries\nend\n\nvalidates_associated will check whether each associated entity is valid (in this case, all entries should if the operation is to be valid).\nIt is very tempting to try to validate entire hierarchies of models as a whole, but as you said, the place where that would be most easily done is the controller, which should act more as a router of requests and responses than in dealing with business logic.\n",
"The way I look at it is that the controller should reflect the end-user view and translate requests into model operations and reponses while also doing formatting. In your case there are 2 kinds of operations that represent simple operations with a default account/entry, and more complex operations that have user selected entries and accounts. The forms should reflect the user view (2 forms with different fields), and there should be 2 actions in the controller to match. The controller however should have no logic relating to how the data is manipulated, only how to receive and respond. I would have class methods on the Operation class that take in the proper data from the forms and creates one or more object as needed, or place those class methods on a support class that is not an AR model, but has business logic that crosses model boundaries. The advantage of the separate utility class is that it keeps each model focused on one purpose, the down side is that the utility classes have no defined place to live. I put them in lib/ but Rails does not specify a place for model helpers as such.\n",
"If you are concerned about embedding this logic into any particular model, why not put them into an observer class, that will keep the logic for your creation of the associated items separate from the classes being observed.\n"
] |
[
6,
2,
0,
0,
0
] |
[] |
[] |
[
"ruby",
"ruby_on_rails"
] |
stackoverflow_0000064214_ruby_ruby_on_rails.txt
|
Q:
Compiling on multiple hosts
Say that you're developing code which needs to compile and run on multiple hosts (say Linux and Windows), how would you go about doing that in the most efficient manner given that:
You have full access to hardware for each host you're compiling for (in my case a Linux host and a Windows host standing on my desk)
Building over a network drive is too expensive
No commits to a central repository should be required -- assume that there is a CI engine which tries to build as soon as anything is checked in
"Efficient" means keeping the compile-edit-run cycle as short and simple as possible.
A:
The best thing I can recommend is an awesome cross platform project called 'BuildBot'.
BuildBot can automatically cause a build to occur on every platform you support, every time you check a new revision into your source control system. Have it build on OSX, Linux (ubuntu), Linux (debian), Linux (Redhat), Vista, Windows XP, etc, and have emails sent or whatever you prefer when a build fails.
As part of the build process, you can publish binaries if the tests pass. Useful for 'nightly' or 'bleeding edge' builds.
Here are some URLs:
Buildbot.net home page
Python.org's buildbot
A:
We find that Hudson is a great CI server that can perform builds from source control as needed. As it is written in Java it can run on your target platform of choice and as the interface is web based you can control it from anywhere. There are plugins to do most things you want to do and best of all it is free!
A:
Most of the build servers mentioned in the other answers check out your changes from a version control system. Given your "No commits to a central repository should be required" requirement, I'd suggest that you try Jetbrains TeamCity CI server.
It has plugins for Visual Studio and Eclipse and allows you to request a "private build", sending your changes straight to the build server. For each project you can define a number of build configurations with different requirements (OS is one of the possible reqs). If the builds succeed, the plugin will prompt you to commit your changes.
The free version supports 3 agents and you can buy more if needed.
It looks like Pulse also has the same feature, but I have no first hand experience with it.
A:
Pick one machine as your development box.
Set up the other one to automatically update from your source control on a regular basis (hourly/daily/whatever). Any build/test failures should send you some sort of warning message (email, IM, whatever). Your non-dev box is still building locally since it has its own copy of the tree.
Before doing a real release, you still want to do human testing of course. But this keeps life sane the rest of the time.
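As a rough illustration of that setup, here is a minimal polling script of the kind you could run from cron on the non-dev box. It is only a sketch: the working-copy path, the notification address and the svn/make commands are assumptions, so swap in whatever your project actually uses.
#!/usr/bin/env python
# Minimal "update, build, warn on failure" loop for the secondary box.
# Assumes an SVN working copy and a make-based build; adjust to taste.
import subprocess
import smtplib
from email.mime.text import MIMEText

WORKING_COPY = "/home/builder/project"   # placeholder path
NOTIFY = "dev-team@example.com"          # placeholder address

def run(cmd):
    # Run a command in the working copy, return (ok, combined output).
    proc = subprocess.Popen(cmd, cwd=WORKING_COPY,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    return proc.returncode == 0, out.decode("utf-8", "replace")

def notify(subject, body):
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = NOTIFY
    msg["To"] = NOTIFY
    smtplib.SMTP("localhost").sendmail(NOTIFY, [NOTIFY], msg.as_string())

if __name__ == "__main__":
    for step, cmd in [("update", ["svn", "update"]),
                      ("build", ["make"]),
                      ("test", ["make", "test"])]:
        ok, output = run(cmd)
        if not ok:
            notify("Automated %s failed" % step, output)
            break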
A:
Building a simple setup for such a task is easy.
I suggest using Cygwin on the Windows platform. This way you can write completely portable software/scripts for both Linux and Windows. It's not clear from your post what stage the project is at, but assuming you are only starting, I suggest using make to build your software. You can use cron to schedule the frequency of your check-out/build cycle. You can even send an email with the build log if it's broken.
There are a number of ready-made daily-build tools, both commercial and open source; you can google for them, or maybe somebody will add suggestions here.
We are using a home-grown tool for that task, so I can't suggest anything ready-made.
OK, I missed the point that you don't want to use a source control system (which is strange, but you are the boss :) ); in this case just replace the check-out with rsync and everything else stays similar.
A:
Use http://ccache.samba.org to speed up compiles where only a few files have changed in a larger project,
and when large changes have been made, leverage http://en.opensuse.org/Icecream at the same time for shared distributed compiling.
That should probably quicken your compile-edit-run-cycle significantly.
A:
Since you are using CI I assume you have already set up a build process properly. What we are doing is that we are using windows boxes as dev machines and CI is running on Solaris. This assures that the code compiles well on multiple platforms. The code is in Java and we don't use any native libraries so it is quite guaranteed that the code will work.
We are using Bamboo at work - it is great but not free:-)
For my private projects I have been using Continuum, but Hudson looks neat (I'll give it a try) - thanks Peter.
A:
One option would be Cascade, which allows you to test your changes on all your platforms before, rather than after, commit, by "checkpointing" them on the server.
A:
One word: Cruise (not Cruise Control) is very nice.
You can get two agents for free and one agent per platform. It takes literally minutes to set up on Mac and PC, and isn't too bad on Linux from what I hear.
|
Compiling on multiple hosts
|
Say that you're developing code which needs to compile and run on multiple hosts (say Linux and Windows), how would you go about doing that in the most efficient manner given that:
You have full access to hardware for each host you're compiling for (in my case a Linux host and a Windows host standing on my desk)
Building over a network drive is too expensive
No commits to a central repository should be required -- assume that there is a CI engine which tries to build as soon as anything is checked in
"Efficient" means keeping the compile-edit-run cycle as short and simple as possible.
|
[
"The best thing I can recommend is an awesome cross platform project called 'BuildBot'.\nBuildBot can automatically cause a build to occur on every platform you support, every time you check a new revision into your source control system. Have it build on OSX, Linux (ubuntu), Linux (debian), Linux (Redhat), Vista, Windows XP, etc, and have emails sent or whatever you prefer when a build fails.\nAs part of the build process, you can publish binaries if the tests pass. Useful for 'nightly' or 'bleeding edge' builds.\nHere's some urls:\n\nBuildbot.net\nhome page \nPython.org's\nbuildbot\n\n",
"We find that Hudson is a great CI server that can perform builds from source control as needed. As it is written in Java it can run on your target platform of choice and as the interface is web based you can control it from anywhere. There are plugins to do most things you want to do and best of all it is free!\n",
"Most of the build servers mentioned in the other answers check out your changes from a version control system. Given your \"No commits to a central repository should be required\" requirement, I'd suggest that you try Jetbrains TeamCity CI server. \nIt has plugins fro Visual Studio and Eclipse and allows you to request a \"private build\", sending your changes straight to the build server. For each project you can define a number of build configurations with different requirements (OS is one of the possible reqs). If the builds succeed, the plugin will prompt you to commit your changes.\nThe free version supports 3 agents and you can buy more if needed.\nIt looks like Pulse also has the same feature, but I have no first hand experience with it.\n",
"Pick one machine as your development box.\nSetup the other one to automatically update from your source control on a regular basis (hourly/daily/whatever). Any build/test failures should send you some sort of warning message. (email,im,whatever). Your non-dev box is still be building locally since it has its own copy of the tree. \nBefore doing a real release, you still want to do human testing of course. But this keeps life sane the rest of the time.\n",
"Building simple setup for such task is very simple.\nI will suggest Cygwin to be used on Windows platform. This way you can write completely portable software/scrips for both Linux and Windows platforms. It's not clear from you post on which stage of project your are, but assuming that you only starting i will suggest using make to build your software. You can use cron to schedule the frequency for your check out/build circle. You can even send an email with build log if its broken.\nThere is number of ready daily build test both commercial and open source you can google for it or may be somebody will add here suggestions.\nWe are using home grown tool for that task so i can not suggest anything ready made.\nOk, i missed the point that you don't want to use source control system (which is strange, but you are the boss :) ) in this case just replace the check out with rsync everything else stays similar. \n",
"Use http://ccache.samba.org to speed up compiles where only a few files have changed in a larger project, \nand when large changes have been made, leverage http://en.opensuse.org/Icecream at the same time for shared distributed compiling.\nThat should probably quicken your compile-edit-run-cycle significantly.\n",
"Since you are using CI I assume you have already set up a build process properly. What we are doing is that we are using windows boxes as dev machines and CI is running on Solaris. This assures that the code compiles well on multiple platforms. The code is in Java and we don't use any native libraries so it is quite guaranteed that the code will work.\nWe are using Bamboo at work - it is great but not free:-) \nFor my private projects I have been using Continuum, but the Husdon looks neat (I'll give it a try) - thanks Peter.\n",
"One option would be Cascade, which allows you to test your changes on all your platforms before, rather than after, commit, by \"checkpointing\" them on the server.\n",
"One word: Cruise (not Cruise Control) is very nice.\nYou can get two agents for free and one one agent per platform. It takes literally minutes to setup on mac and pc, and isn't too bad on linux from what i hear.\n"
] |
[
3,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"build_automation",
"build_process"
] |
stackoverflow_0000090658_build_automation_build_process.txt
|
Q:
Best Javascript drop-down menu?
I am looking for a drop-down JavaScript menu.
It should be the simplest and most elegant accessible menu that works in IE6 and Firefox 2 also.
It would be fine if it worked on an unordered list (ul) so the user can use the page without JavaScript support.
Which one do you recommend and where can I find the code to such a menu?
A:
I think the jquery superfish menu is fantastic and easy to use:
http://users.tpg.com.au/j_birch/plugins/superfish/
Javascript is not required, and it is based on simple valid ul unordered lists.
A:
A List Apart - Dropdowns
I'd use a css-only solution like the above so the user still gets dropdown menus even with javascript disabled.
A:
Here's my answer using jQuery:
jQuery.fn.ddnav = function() {
this.wrap("");
this.each(function() {
var sel = document.createElement('select');
jQuery(this).find("li.label, li a").each(function() {
jQuery("<option>").val(this.href ? this.href : '').html(jQuery(this).html()).appendTo(sel);
});
jQuery(this).hide().after(sel);
});
this.parent().find("select").after("<input type=\"button\" value=\"Go\">");
var callback = function(button) {
var url = jQuery(button.target).parent("div").find("select").val();
if(url.length)
window.open(url, "_self")
};
this.parent().find("input[type='button']").click(callback);
this.parent().find("select").change(callback);
return this;
};
And then in your onready handler:
$("ul.dropdown_nav").ddnav();
But I would point out that these are terrible for usability. Better to use a list and show people all of the options at once, and it's better to not navigate away after a selection and/or require a different button to be pushed to get to where they want.
I think you're best off never using the above (and I wrote the code!)
A:
For the purist:
http://www.grc.com/menudemo.htm
Absolutely no JavaScript, pure-css only - and works with virtually all browsers.
A little tweaking can make them look as good as the fancy menus (jQuery, etc.)
But we have also used jQuery, YUI! and others. YUI! has great accessibility options built in, if that's a requirement for JavaScript-powered menus.
--
Andrew
A:
I use this one:
http://www.tanfa.co.uk/css/examples/menu/vs7.asp
Comes in both vertical and horizontal flavours.
A:
I like stickman's accordion, which depending on how you want it to behave can be a nice effect.
A:
I've been an (unabashed) fan of the Yahoo! User Interface Library. They have a nice menubar system that's easy to implement. Great cross-browser support.
You can probably get something similar from the other popular Javascript frameworks, such as jQuery, as well.
|
Best Javascript drop-down menu?
|
I am looking for a drop-down JavaScript menu.
It should be the simplest and most elegant accessible menu that works in IE6 and Firefox 2 also.
It would be fine if it worked on an unordered list (ul) so the user can use the page without JavaScript support.
Which one do you recommend and where can I find the code to such a menu?
|
[
"I think the jquery superfish menu is fantastic and easy to use:\nhttp://users.tpg.com.au/j_birch/plugins/superfish/\nJavascript is not required, and it is based on simple valid ul unorder lists. \n",
"A List Apart - Dropdowns\nI'd use a css-only solution like the above so the user still gets dropdown menus even with javascript disabled.\n",
"Here's my answer using jQuery:\n\njQuery.fn.ddnav = function() {\n this.wrap(\"\");\n this.each(function() {\n var sel = document.createElement('select');\n jQuery(this).find(\"li.label, li a\").each(function() {\n jQuery(\"<option>\").val(this.href ? this.href : '').html(jQuery(this).html()).appendTo(sel);\n });\n jQuery(this).hide().after(sel);\n });\n this.parent().find(\"select\").after(\"<input type=\\\"button\\\" value=\\\"Go\\\">\");\n var callback = function(button) {\n var url = jQuery(button.target).parent(\"div\").find(\"select\").val();\n if(url.length)\n window.open(url, \"_self\")\n };\n this.parent().find(\"input[type='button']\").click(callback);\n this.parent().find(\"select\").change(callback);\n return this;\n};\n\nAnd then in your onready handler:\n\n $(\"ul.dropdown_nav\").ddnav();\n\nBut I would point out that these are terrible for usability. Better to use a list and show people all of the options at once, and it's better to not navigate away after a selection and/or require a different button to be pushed to get to where they want.\nI think you're best off never using the above (and I wrote the code!)\n",
"For the purist:\n http://www.grc.com/menudemo.htm\nAbsolutely no JavaScript, pure-css only - and works with virtually all browsers.\nA little tweaking can make them look as good as the fancy menus (jQuery, etc.)\nBut we have also used jQuery, YUI! and others. YUI! has great accessibility options built in, if that's a requirement for JavaScript-powered menus.\n--\nAndrew\n",
"I use this one:\nhttp://www.tanfa.co.uk/css/examples/menu/vs7.asp\nComes in both vertical and horizontal flavours.\n",
"I like stickman's accordion, which depending on how you want it to behave can be a nice effect.\n",
"I've been an (unabashed) fan of the Yahoo! User Interface Library. They have a nice menubar system that's easy to implement. Great cross-browser support.\nYou can probably get something similar from the other popular Javascript frameworks, such as jQuery, as well.\n"
] |
[
12,
3,
2,
2,
2,
1,
1
] |
[] |
[] |
[
"dhtml",
"javascript",
"menu"
] |
stackoverflow_0000101536_dhtml_javascript_menu.txt
|
Q:
Git - is it pull or rebase when working on branches with other people
So if I'm using branches that are remote (tracked) branches, and I want to get the latest, I'm still unclear if I should be doing git pull or git rebase. I thought I had read that doing git rebase when working on a branch with other users can screw them up when they pull or rebase. Is that true? Should we all be using git pull?
A:
Git pull is a combination of 2 commands
git fetch (syncs your local repo with the newest stuff on the remote)
git merge (merges the changes from the distant branch, if any, into your local tracking branch)
git rebase is only a rough equivalent to git merge. It doesn't fetch anything remotely. In fact it doesn't do a proper merge either, it replays the commits of the branch you're standing on after the new commits from a second branch.
Its purpose is mainly to let you have a cleaner history. It doesn't take many merges by many people before the past history in gitk gets terribly spaghetti-like.
The best graphical explanation can be seen in the first 2 graphics here. But let me explain here with an example.
I have 2 branches: master and mybranch. When standing on mybranch I can run
git rebase master
and I'll get anything new in master inserted before my most recent commits in mybranch. This is perfect, because if I now merge or rebase the stuff from mybranch in master, my new commits are added linearly right after the most recent commits.
The problem you refer to happens if I rebase in the "wrong" direction. If I just got the most recent master (with new changes) and from master I rebase like this (before syncing my branch):
git rebase mybranch
Now what I just did is that I inserted my new changes somewhere in master's past. The main line of commits has changed. And due to the way git works with commit ids, all the commits (from master) that were just replayed over my new changes have new ids.
Well, it's a bit hard to explain just in words... Hope this makes a bit of sense :-)
Anyway, my own workflow is this:
'git pull' new changes from remote
switch to mybranch
'git rebase master' to bring master's new changes in my commit history
switch back to master
'git merge mybranch', which only fast-forwards when everything in master is also in mybranch (thus avoiding the commit reordering problem on a public branch)
'git push'
One last word. I strongly recommend using rebase when the differences are trivial (e.g. people working on different files or at least different lines). It has the gotcha I tried to explain just up there, but it makes for a much cleaner history.
As soon as there may be significant conflicts (e.g. a coworker has renamed something in a bunch of files), I strongly recommend merge. In this case, you'll be asked to resolve the conflict and then commit the resolution. On the plus side, a merge is much easier to resolve when there are conflicts. The down side is that your history may become hard to follow if a lot of people do merges all the time :-)
Good luck!
A:
Git rebase is a re-write of history. You should never do this on branches that are "public" (i.e., branches that you share with others). If someone clones your branch and then you rebase that branch -- then they can no longer pull/merge changes from your branch -- they'll have to throw their old one away and re-pull.
This article on packaging software with git is a very worthwhile read. It's more about managing software distributions but it's quite technical and talks about how branches can be used/managed/shared. They talk about when to rebase and when to pull and what the various consequences of each are.
In short, they both have their place but you need to really grok the difference.
A:
git pull does a merge if you've got commits that aren't in the remote branch. git rebase rewrites any existing commits you have to be relative to the tip of the remote branch. They're similar in that they can both cause conflicts, but I think using git rebase if you can allows for smoother collaboration. During the rebase operation you can refine your commits so they look like they were newly applied to the latest revision of the remote branch. A merge is perhaps more appropriate for longer development cycles on a branch that have more history.
Like most other things in git, there is a lot of overlapping functionality to accommodate different styles of working.
A:
Check out the excellent Gitcasts on Branching and merging as well as rebasing.
A:
If you want to pull source without affecting remote branches and without any changes in your local copy, it's best to use git pull.
I believe if you have a working branch that you have made changes to, use git rebase to change the base of that branch to be latest remote master, you will keep all of your branch changes, however the branch will now be branching from the master location, rather than where it was previously branched from.
|
Git - is it pull or rebase when working on branches with other people
|
So if I'm using branches that are remote (tracked) branches, and I want to get the latest, I'm still unclear if I should be doing git pull or git rebase. I thought I had read that doing git rebase when working on a branch with other users can screw them up when they pull or rebase. Is that true? Should we all be using git pull?
|
[
"Git pull is a combination of 2 commands\n\ngit fetch (syncs your local repo with the newest stuff on the remote)\ngit merge (merges the changes from the distant branch, if any, into your local tracking branch)\n\ngit rebase is only a rough equivalent to git merge. It doesn't fetch anything remotely. In fact it doesn't do a proper merge either, it replays the commits of the branch you're standing on after the new commits from a second branch. \nIts purpose is mainly to let you have a cleaner history. It doesn't take many merges by many people before the past history in gitk gets terribly spaghetti-like.\nThe best graphical explanation can be seen in the first 2 graphics here. But let me explain here with an example.\nI have 2 branches: master and mybranch. When standing on mybranch I can run\ngit rebase master\n\nand I'll get anything new in master inserted before my most recent commits in mybranch. This is perfect, because if I now merge or rebase the stuff from mybranch in master, my new commits are added linearly right after the most recent commits. \nThe problem you refer to happens if I rebase in the \"wrong\" direction. If I just got the most recent master (with new changes) and from master I rebase like this (before syncing my branch):\ngit rebase mybranch\n\nNow what I just did is that I inserted my new changes somewhere in master's past. The main line of commits has changed. And due to the way git works with commit ids, all the commits (from master) that were just replayed over my new changes have new ids.\nWell, it's a bit hard to explain just in words... Hope this makes a bit of sense :-)\nAnyway, my own workflow is this:\n\n'git pull' new changes from remote\nswitch to mybranch\n'git rebase master' to bring master's new changes in my commit history\nswitch back to master\n'git merge mybranch', which only fast-forwards when everything in master is also in mybranch (thus avoiding the commit reordering problem on a public branch)\n'git push'\n\nOne last word. I strongly recommend using rebase when the differences are trivial (e.g. people working on different files or at least different lines). It has the gotcha I tried to explain just up there, but it makes for a much cleaner history.\nAs soon as there may be significant conflicts (e.g. a coworker has renamed something in a bunch of files), I strongly recommend merge. In this case, you'll be asked to resolve the conflict and then commit the resolution. On the plus side, a merge is much easier to resolve when there are conflicts. The down side is that your history may become hard to follow if a lot of people do merges all the time :-)\nGood luck!\n",
"Git rebase is a re-write of history. You should never do this on branches that are \"public\" (i.e., branches that you share with others). If someone clones your branch and then you rebase that branch -- then they can no longer pull/merge changes from your branch -- they'll have to throw their old one away and re-pull.\nThis article on packaging software with git is a very worthwhile read. It's more about managing software distributions but it's quite technical and talks about how branches can be used/managed/shared. They talk about when to rebase and when to pull and what the various consequences of each are.\nIn short, they both have their place but you need to really grok the difference.\n",
"git pull does a merge if you've got commits that aren't in the remote branch. git rebase rewrites any existing commits you have to be relative to the tip of the remote branch. They're similar in that they can both cause conflicts, but I think using git rebase if you can allows for smoother collaboration. During the rebase operation you can refine your commits so they look like they were newly applied to the latest revision of the remote branch. A merge is perhaps more appropriate for longer development cycles on a branch that have more history.\nLike most other things in git, there is a lot of overlapping functionality to accommodate different styles of working.\n",
"Check out the excellent Gitcasts on Branching and merging as well as rebasing.\n",
"If you want to pull source without affecting remote branches and without any changes in your local copy, it's best to use git pull.\nI believe if you have a working branch that you have made changes to, use git rebase to change the base of that branch to be latest remote master, you will keep all of your branch changes, however the branch will now be branching from the master location, rather than where it was previously branched from.\n"
] |
[
52,
10,
6,
2,
0
] |
[] |
[] |
[
"git",
"version_control"
] |
stackoverflow_0000096615_git_version_control.txt
|
Q:
C++ strings without an STL
I've not used C++ very much in the past, and have recently been doing a lot of C#, and I'm really struggling to get back into the basics of C++ again. This is particularly tricky as work mandates that none of the most handy C++ constructs can be used, so all strings must be char *'s, and there is no provision for STL lists.
What I'm currently trying to do is to create a list of strings, something which would take me no time at all using STL or in C#. Basically I want to have a function such as:
char **registeredNames = new char*[numberOfNames];
Then,
RegisterName(const * char const name, const int length)
{
//loop to see if name already registered snipped
if(notFound)
{
registeredNames[lastIndex++] = name;
}
}
or, if it was C#...
if(!registeredNames.Contains(name))
{
registeredNames.Add(name);
}
and I realize that it doesn't work. I know the const nature of the passed variables (a const pointer and a const string) makes it rather difficult, but my basic problem is that I've always avoided this situation in the past by using STL lists etc. so I've never had to work around it!
A:
There are legitimate reasons that STL might be avoided. When working in fixed environments where memory or speed is a premium, it's sometimes difficult to tell what is going on under the hood with STL. Yes, you can write your own memory allocators, and yes, speed generally isn't a problem, but there are differences between STL implementations across platforms, and those differences might be subtle and potentially buggy. Memory is perhaps my biggest concern when thinking about using it.
Memory is precious, and how we use it needs to be tightly controlled. Unless you've been down this road, this concept might not make sense, but it's true. We do allow for STL usage in tools (outside of game code), but it's prohibited inside of the actual game. One other related problem is code size. I am slightly unsure of how much STL can contribute to executable size, but we've seen marked increases in code size when using STL. Even if your executable is "only" 2M bigger, that's 2M less RAM for something else for your game.
STL is nice for sure. But it can be abused by programmers who don't know what they are doing. It's not intentional, but it can provide nasty surprises when you don't want to see them (again, memory bloat and performance issues)
I'm sure that you are close with your solution.
for ( i = 0; i < lastIndex; i++ ) {
    if ( !strcmp(registeredNames[i], name) ) {
break; // name was found
}
}
if ( i == lastIndex ) {
// name was not found in the registeredNames list
registeredNames[lastIndex++] = strdup(name);
}
You might not want to use strdup. That's simply an example of how to store the name given your example. You might want to make sure that you either don't want to allocate space for the new name yourself, or use some other memory construct that might already be available in your app.
And please, don't write a string class. I have held up string classes as perhaps the worst example of how not to re-engineer a basic C construct in C++. Yes, the string class can hide lots of nifty details from you, but its memory usage patterns are terrible, and those don't fit well into a console (i.e. ps3 or 360, etc) environment. About 8 years ago we did the same thing. 200000+ memory allocations before we hit the main menu. Memory was terribly fragmented and we couldn't get the rest of the game to fit in the fixed environment. We wound up ripping it out.
Class design is great for some things, but this isn't one of them. This is an opinion, but it's based on real world experience.
A:
You'll probably need to use strcmp to see if the string is already stored:
for (int index=0; index<lastIndex; index++)
{
if (strcmp(registeredNames[index], name) == 0)
{
return; // Already registered
}
}
Then if you really need to store a copy of the string, then you'll need to allocate a buffer and copy the characters over.
char* nameCopy = (char*)malloc(length+1);
strcpy(nameCopy, name);
registeredNames[lastIndex++] = nameCopy;
You didn't mention whether your input is NULL terminated - if not, then extra care is needed, and strcmp/strcpy won't be suitable.
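Putting those two pieces together, a minimal self-contained sketch could look like the following. The fixed capacity, the NUL-terminated input and the boolean return value are my own assumptions, not part of the question, so adapt them to your real constraints:
#include <cstring>
#include <cstdlib>

const int MAX_NAMES = 1024;              // assumed capacity
static char *registeredNames[MAX_NAMES];
static int lastIndex = 0;

// Registers a copy of 'name' if it is not already present.
// Returns false when the table is full or the allocation fails.
bool RegisterName(const char *name)
{
    for (int i = 0; i < lastIndex; ++i)
    {
        if (std::strcmp(registeredNames[i], name) == 0)
            return true;                 // already registered
    }
    if (lastIndex >= MAX_NAMES)
        return false;                    // no room left

    std::size_t len = std::strlen(name);
    char *copy = static_cast<char*>(std::malloc(len + 1));
    if (copy == 0)
        return false;
    std::strcpy(copy, name);
    registeredNames[lastIndex++] = copy;
    return true;
}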
A:
If portability is an issue, you may want to check out STLport.
A:
Why can't you use the STL?
Anyway, I would suggest that you implement a simple string class and list templates of your own. That way you can use the same techniques as you normally would and keep the pointer and memory management confined to those classes. If you mimic the STL, it would be even better.
A:
If you really can't use stl (and I regret believing that was true when I was in the games industry) then can you not create your own string class? The most basic of string class would allocate memory on construction and assignment, and handle the delete in the destructor. Later you could add further functionality as you need it. Totally portable, and very easy to write and unit test.
A:
Edit: I guess I misunderstood your question. There is no constness problem in this code I'm aware of.
I'm doing this from my head but it should be about right:
static int lastIndex = 0;
static const char **registeredNames = new const char*[numberOfNames];
void RegisterName(const char * const name)
{
    bool found = false;
    //loop to see if name already registered snipped
    for (int i = 0; i < lastIndex; i++)
    {
        if (strcmp(name, registeredNames[i]) == 0)
{
found = true;
break;
}
}
if (!found)
{
registeredNames[lastIndex++] = name;
}
}
A:
Working with char* requires you to work with C functions. In your case, what you really need is to copy the strings around. To help you, you have the strndup function. Then you'll have to write something like:
void RegisterName(const char* name)
{
// loop to see if name already registered snipped
if(notFound)
{
        registeredNames[lastIndex++] = strndup(name, MAX_STRING_LENGTH);
}
}
This code assumes your array is big enough.
Of course, the very best would be to properly implement your own string and array and list, ... or to convince your boss the STL is not evil anymore!
A:
Using:
const char **registeredNames = new const char * [numberOfNames];
will allow you to assign a const * char const to an element of the array.
Just out of curiosity, why does "work mandates that none of the most handy C++ constructs can be used"?
A:
I can understand why you can't use STL - most do bloat your code terribly. However there are implementations for games programmers by games programmers - RDESTL is one such library.
A:
If you are not worried about conventions and just want to get the job done use realloc. I do this sort of thing for lists all of the time, it goes something like this:
T** list = 0;
unsigned int length = 0;
T* AddItem(T Item)
{
    list = (T**)realloc(list, sizeof(T*)*(length+1));
    if(!list) return 0;
    list[length] = new T(Item);
    ++length;
    return list[length - 1];
}

void CleanupList()
{
    for(unsigned int i = 0; i < length; ++i)
    {
        delete list[i];
    }
    free(list);
}
There is more you can do, e.g. only realloc each time the list size doubles, functions for removing items from list by index or by checking equality, make a template class for handling lists etc... (I have one I wrote ages ago and always use myself... but sadly I am at work and can't just copy-paste it here). To be perfectly honest though, this will probably not outperform the STL equivalent, although it may equal its performance if you do a ton of work or have an especially poor implementation of STL.
Annoyingly C++ is without an operator renew/resize to replace realloc, which would be very useful.
Oh, and apologies if my code is error ridden, I just pulled it out from memory.
A:
All the approaches suggested are valid; my point is that if the way C# does it is appealing, replicate it: create your own classes/interfaces to present the same abstraction, i.e. a simple linked list class with methods Contains and Add. Using the sample code provided by other answers, this should be relatively simple.
One of the great things about C++ is generally you can make it look and act the way you want, if another language has a great implementation of something you can usually reproduce it.
A:
const correctness is still const correctness regardless of whether you use the STL or not. I believe what you are looking for is to make registeredNames a const char ** so that the assignment to registeredNames[i] (which is a const char *) works.
Moreover, is this really what you want to be doing? It seems like making a copy of the string is probably more appropriate.
Moreover still, you shouldn't be thinking about storing this in a list given the operation you are doing on it, a set would be better.
A:
I have used this String class for years.
http://www.robertnz.net/string.htm
It provides practically all the features of the
STL string but is implemented as a true class not a template
and does not use STL.
A:
This is a clear case of you get to roll your own. And do the same for a vector class.
Do it with test-first programming.
Keep it simple.
Avoid reference counting the string buffer if you are in MT environment.
|
C++ strings without an STL
|
I've not used C++ very much in the past, and have recently been doing a lot of C#, and I'm really struggling to get back into the basics of C++ again. This is particularly tricky as work mandates that none of the most handy C++ constructs can be used, so all strings must be char *'s, and there is no provision for STL lists.
What I'm currently trying to do is to create a list of strings, something which would take me no time at all using STL or in C#. Basically I want to have a function such as:
char **registeredNames = new char*[numberOfNames];
Then,
RegisterName(const * char const name, const int length)
{
//loop to see if name already registered snipped
if(notFound)
{
registeredNames[lastIndex++] = name;
}
}
or, if it was C#...
if(!registeredNames.Contains(name))
{
registeredNames.Add(name);
}
and I realize that it doesn't work. I know the const nature of the passed variables (a const pointer and a const string) makes it rather difficult, but my basic problem is that I've always avoided this situation in the past by using STL lists etc. so I've never had to work around it!
|
[
"There are legitimate reasons that STL might be avoided. When working in fixed environments where memory or speed is a premium, it's sometimes difficult to tell what is going on under the hood with STL. Yes, you can write your own memory allocators, and yes, speed generally isn't a problem, but there are differences between STL implementations across platforms, and those differences mighe be subtle and potentially buggy. Memory is perhaps my biggest concern when thinking about using it.\nMemory is precious, and how we use it needs to be tightly controlled. Unless you've been down this road, this concept might not make sense, but it's true. We do allow for STL usage in tools (outside of game code), but it's prohibited inside of the actual game. One other related problem is code size. I am slightly unsure of how much STL can contribute to executable size, but we've seen marked increases in code size when using STL. Even if your executable is \"only\" 2M bigger, that's 2M less RAM for something else for your game.\nSTL is nice for sure. But it can be abused by programmers who don't know what they are doing. It's not intentional, but it can provide nasty surprises when you don't want to see them (again, memory bloat and performance issues)\nI'm sure that you are close with your solution.\nfor ( i = 0; i < lastIndex; i++ ) {\n if ( !strcmp(®isteredNames[i], name ) {\n break; // name was found\n }\n}\nif ( i == lastIndex ) {\n // name was not found in the registeredNames list\n registeredNames[lastIndex++] = strdup(name);\n}\n\nYou might not want to use strdup. That's simply an example of how to to store the name given your example. You might want to make sure that you either don't want to allocate space for the new name yourself, or use some other memory construct that might already be available in your app.\nAnd please, don't write a string class. I have held up string classes as perhaps the worst example of how not to re-engineer a basic C construct in C++. Yes, the string class can hide lots of nifty details from you, but it's memory usage patterns are terrible, and those don't fit well into a console (i.e. ps3 or 360, etc) environment. About 8 years ago we did the same time. 200000+ memory allocations before we hit the main menu. Memory was terribly fragmented and we couldn't get the rest of the game to fit in the fixed environment. We wound up ripping it out.\nClass design is great for some things, but this isn't one of them. This is an opinion, but it's based on real world experience.\n",
"You'll probably need to use strcmp to see if the string is already stored:\nfor (int index=0; index<=lastIndex; index++)\n{\n if (strcmp(registeredNames[index], name) == 0)\n {\n return; // Already registered\n }\n}\n\nThen if you really need to store a copy of the string, then you'll need to allocate a buffer and copy the characters over.\nchar* nameCopy = malloc(length+1);\nstrcpy(nameCopy, name);\nregisteredNames[lastIndex++] = nameCopy;\n\nYou didn't mention whether your input is NULL terminated - if not, then extra care is needed, and strcmp/strcpy won't be suitable.\n",
"If portability is an issue, you may want to check out STLport.\n",
"Why can't you use the STL?\nAnyway, I would suggest that you implement a simple string class and list templates of your own. That way you can use the same techniques as you normally would and keep the pointer and memory management confined to those classes. If you mimic the STL, it would be even better.\n",
"If you really can't use stl (and I regret believing that was true when I was in the games industry) then can you not create your own string class? The most basic of string class would allocate memory on construction and assignment, and handle the delete in the destructor. Later you could add further functionality as you need it. Totally portable, and very easy to write and unit test.\n",
"Edit: I guess I misunderstood your question. There is no constness problem in this code I'm aware of.\nI'm doing this from my head but it should be about right:\nstatic int lastIndex = 0;\nstatic char **registeredNames = new char*[numberOfNames];\n\nvoid RegisterName(const * char const name)\n{\n bool found = false;\n //loop to see if name already registered snipped\n for (int i = 0; i < lastIndex; i++)\n {\n if (strcmp(name, registeredNames[i] == 0))\n {\n found = true;\n break;\n }\n }\n\n if (!found)\n {\n registeredNames[lastIndex++] = name;\n }\n}\n\n",
"Working with char* requires you to work with C functions. In your case, what you really need is to copy the strings around. To help you, you have the strndup function. Then you'll have to write something like:\nvoid RegisterName(const char* name)\n{\n // loop to see if name already registered snipped\n if(notFound)\n {\n registerNames[lastIndex++] = stdndup(name, MAX_STRING_LENGTH);\n }\n}\n\nThis code suppose your array is big enough.\nOf course, the very best would be to properly implement your own string and array and list, ... or to convince your boss the STL is not evil anymore !\n",
"Using:\nconst char **registeredNames = new const char * [numberOfNames];\n\nwill allow you to assign a const * char const to an element of the array.\nJust out of curiosity, why does \"work mandates that none of the most handy C++ constructs can be used\"?\n",
"I can understand why you can't use STL - most do bloat your code terribly. However there are implementations for games programmers by games programmers - RDESTL is one such library.\n",
"If you are not worried about conventions and just want to get the job done use realloc. I do this sort of thing for lists all of the time, it goes something like this:\nT** list = 0;\nunsigned int length = 0;\n\nT* AddItem(T Item)\n{\n list = realloc(list, sizeof(T)*(length+1));\n if(!list) return 0;\n list[length] = new T(Item);\n ++length;\n return list[length];\n}\n\nvoid CleanupList()\n{\n for(unsigned int i = 0; i < length; ++i)\n {\n delete item[i];\n }\n free(list)\n}\n\nThere is more you can do, e.g. only realloc each time the list size doubles, functions for removing items from list by index or by checking equality, make a template class for handling lists etc... (I have one I wrote ages ago and always use myself... but sadly I am at work and can't just copy-paste it here). To be perfectly honest though, this will probably not outperform the STL equivalent, although it may equal its performance if you do a ton of work or have an especially poor implementation of STL.\nAnnoyingly C++ is without an operator renew/resize to replace realloc, which would be very useful.\nOh, and apologies if my code is error ridden, I just pulled it out from memory.\n",
"All the approaches suggested are valid, my point is if the way C# does it is appealing replicate it, create your own classes/interfaces to present the same abstraction, i.e. a simple linked list class with methods Contains and Add, using the sample code provided by other answers this should be relatively simple.\nOne of the great things about C++ is generally you can make it look and act the way you want, if another language has a great implementation of something you can usually reproduce it.\n",
"const correctness is still const correctness regardless of whether you use the STL or not. I believe what you are looking for is to make registeredNames a const char ** so that the assignment to registeredNames[i] (which is a const char *) works.\nMoreover, is this really what you want to be doing? It seems like making a copy of the string is probably more appropriate.\nMoreover still, you shouldn't be thinking about storing this in a list given the operation you are doing on it, a set would be better.\n",
"I have used this String class for years.\nhttp://www.robertnz.net/string.htm\nIt provides practically all the features of the\nSTL string but is implemented as a true class not a template\nand does not use STL.\n",
"This is a clear case of you get to roll your own. And do the same for a vector class.\n\nDo it with test-first programming.\nKeep it simple.\n\nAvoid reference counting the string buffer if you are in MT environment.\n"
] |
[
7,
6,
5,
3,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"c++",
"list",
"string"
] |
stackoverflow_0000091715_c++_list_string.txt
|
Q:
ruby method names
For a project I am working on in ruby I am overriding the method_missing method so that I can set variables using a method call like this, similar to setting variables in an ActiveRecord object:
Object.variable_name= 'new value'
However, after implementing this I found out that many of the variable names have periods (.) in them. I have found this workaround:
Object.send('variable.name=', 'new value')
However, I am wondering is there a way to escape the period so that I can use
Object.variable.name= 'new value'
A:
Don't do it!
Trying to create identifiers that are not valid in your language is not a good idea. If you really want to set variables like that, use attribute macros:
attr_writer :bar
attr_reader :baz
attr_accessor :foo
Okay, now that you have been warned, here's how to do it. Just return another instance of the same class every time you get a regular accessor, and collect the needed information as you go.
class SillySetter
def initialize path=nil
@path = path
end
def method_missing name,value=nil
new_path = @path ? "#{@path}.#{name}" : name
if name.to_s[-1] == ?=
puts "setting #{new_path} #{value}"
else
return self.class.new(path=new_path)
end
end
end
s = SillySetter.new
s.foo = 5 # -> setting foo= 5
s.foo.bar.baz = 4 # -> setting foo.bar.baz= 4
I didn't want to encourage ruby silliness, but I just couldn't help myself!
A:
The only reason I can think of to do this, is if you really REALLY hate the person who is going to be maintaining this code after you.
And I don't mean 'he ran over my dog' hatred.
I mean real steaming pulsing vein in temple hatred.
So, in short, don't. :-)
A:
If there's no hope of changing the canonical names, you could alias the getters and setters manually:
def variable_name
send 'variable.name'
end
def variable_name=(value)
send 'variable.name=', value
end
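If there are many such names, you could also generate those wrappers instead of writing each pair by hand. A rough sketch, assuming your method_missing (or send-based workaround) already handles the dotted names; the class name and list of names below are made up:
class Record
  DOTTED_NAMES = ['variable.name', 'other.value']   # hypothetical list

  DOTTED_NAMES.each do |dotted|
    plain = dotted.tr('.', '_')
    define_method(plain) { send(dotted) }
    define_method("#{plain}=") { |value| send("#{dotted}=", value) }
  end
end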
|
ruby method names
|
For a project I am working on in ruby I am overriding the method_missing method so that I can set variables using a method call like this, similar to setting variables in an ActiveRecord object:
Object.variable_name= 'new value'
However, after implementing this I found out that many of the variable names have periods (.) in them. I have found this workaround:
Object.send('variable.name=', 'new value')
However, I am wondering is there a way to escape the period so that I can use
Object.variable.name= 'new value'
|
[
"Don't do it!\nTrying to create identifiers that are not valid in your language is not a good idea. If you really want to set variables like that, use attribute macros:\nattr_writer :bar\nattr_reader :baz\nattr_accessor :foo\n\nOkay, now that you have been warned, here's how to do it. Just return another instance of the same class every time you get a regular accessor, and collect the needed information as you go.\nclass SillySetter\n def initialize path=nil\n @path = path\n end\n\n def method_missing name,value=nil\n new_path = @path ? \"#{@path}.#{name}\" : name\n if name.to_s[-1] == ?=\n puts \"setting #{new_path} #{value}\"\n else\n return self.class.new(path=new_path)\n end\n end\nend\n\ns = SillySetter.new\ns.foo = 5 # -> setting foo= 5\ns.foo.bar.baz = 4 # -> setting foo.bar.baz= 4\n\nI didn't want to encourage ruby sillyness, but I just couldn't help myself!\n",
"The only reason I can think of to do this, is if you really REALLY hate the person who is going to be maintaining this code after you.\nAnd I don't mean 'he ran over my dog' hatred. \nI mean real steaming pulsing vein in temple hatred.\nSo, in short, don't. :-)\n",
"If there's no hope of changing the canonical names, you could alias the getters and setters manually:\ndef variable_name\n send 'variable.name'\nend\n\ndef variable_name=(value)\n send 'variable.name=', value\nend\n\n"
] |
[
9,
1,
0
] |
[] |
[] |
[
"ruby"
] |
stackoverflow_0000049252_ruby.txt
|
Q:
Oracle Natural Joins and Count(1)
Does anyone know why in Oracle 11g when you do a Count(1) with more than one natural join it does a cartesian join and throws the count way off?
Such as
SELECT Count(1) FROM record NATURAL join address NATURAL join person WHERE status=1
AND code = 1 AND state = 'TN'
This pulls back like 3 million rows when
SELECT * FROM record NATURAL join address NATURAL join person WHERE status=1
AND code = 1 AND state = 'TN'
pulls back like 36000 rows, which is the correct amount.
Am I just missing something?
Here are the tables I'm using to get this result.
CREATE TABLE addresses (
address_id NUMBER(10,0) NOT NULL,
address_1 VARCHAR2(60) NULL,
address_2 VARCHAR2(60) NULL,
city VARCHAR2(35) NULL,
state CHAR(2) NULL,
zip VARCHAR2(5) NULL,
zip_4 VARCHAR2(4) NULL,
county VARCHAR2(35) NULL,
phone VARCHAR2(11) NULL,
fax VARCHAR2(11) NULL,
origin_network NUMBER(3,0) NOT NULL,
owner_network NUMBER(3,0) NOT NULL,
corrected_address_id NUMBER(10,0) NULL,
"HASH" VARCHAR2(200) NULL
);
CREATE TABLE rates (
rate_id NUMBER(10,0) NOT NULL,
eob VARCHAR2(30) NOT NULL,
network_code NUMBER(3,0) NOT NULL,
product_code VARCHAR2(2) NOT NULL,
rate_type NUMBER(1,0) NOT NULL
);
CREATE TABLE records (
pk_unique_id NUMBER(10,0) NOT NULL,
rate_id NUMBER(10,0) NOT NULL,
address_id NUMBER(10,0) NOT NULL,
effective_date DATE NOT NULL,
term_date DATE NULL,
last_update DATE NULL,
status CHAR(1) NOT NULL,
network_unique_id VARCHAR2(20) NULL,
rate_id_2 NUMBER(10,0) NULL,
contracted_by VARCHAR2(50) NULL,
contract_version VARCHAR2(5) NULL,
bill_address_id NUMBER(10,0) NULL
);
I should mention this wasn't a problem in Oracle 9i, but when we switched to 11g it became a problem.
A:
My advice would be to NOT use NATURAL JOIN. Explicitly define your join conditions to avoid confusion and "hidden bugs". Here is the official NATURAL JOIN Oracle documentation and more discussion about this subject.
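For example, the query from the question rewritten with explicit join conditions might look like this. The join columns and table prefixes are guesses based on the posted DDL (there is no DDL for person at all), so substitute the real keys:
SELECT COUNT(*)
  FROM record r
  JOIN address a ON a.address_id = r.address_id
  JOIN person p ON p.person_id = r.person_id
 WHERE r.status = 1
   AND r.code = 1
   AND a.state = 'TN';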
A:
If it happens exactly as you say then it must be an optimiser bug, you should report it to Oracle.
A:
you should try a count(*)
There is a difference between the two.
count(1) signifies count rows where 1 is not null
count(*) signifies count the rows
A:
Just noticed you used 2 natural joins...
From the documentation you can only use a natural join on 2 tables
Natural_Join
|
Oracle Natural Joins and Count(1)
|
Does anyone know why in Oracle 11g when you do a Count(1) with more than one natural join it does a cartesian join and throws the count way off?
Such as
SELECT Count(1) FROM record NATURAL join address NATURAL join person WHERE status=1
AND code = 1 AND state = 'TN'
This pulls back like 3 million rows when
SELECT * FROM record NATURAL join address NATURAL join person WHERE status=1
AND code = 1 AND state = 'TN'
pulls back like 36000 rows, which is the correct amount.
Am I just missing something?
Here are the tables I'm using to get this result.
CREATE TABLE addresses (
address_id NUMBER(10,0) NOT NULL,
address_1 VARCHAR2(60) NULL,
address_2 VARCHAR2(60) NULL,
city VARCHAR2(35) NULL,
state CHAR(2) NULL,
zip VARCHAR2(5) NULL,
zip_4 VARCHAR2(4) NULL,
county VARCHAR2(35) NULL,
phone VARCHAR2(11) NULL,
fax VARCHAR2(11) NULL,
origin_network NUMBER(3,0) NOT NULL,
owner_network NUMBER(3,0) NOT NULL,
corrected_address_id NUMBER(10,0) NULL,
"HASH" VARCHAR2(200) NULL
);
CREATE TABLE rates (
rate_id NUMBER(10,0) NOT NULL,
eob VARCHAR2(30) NOT NULL,
network_code NUMBER(3,0) NOT NULL,
product_code VARCHAR2(2) NOT NULL,
rate_type NUMBER(1,0) NOT NULL
);
CREATE TABLE records (
pk_unique_id NUMBER(10,0) NOT NULL,
rate_id NUMBER(10,0) NOT NULL,
address_id NUMBER(10,0) NOT NULL,
effective_date DATE NOT NULL,
term_date DATE NULL,
last_update DATE NULL,
status CHAR(1) NOT NULL,
network_unique_id VARCHAR2(20) NULL,
rate_id_2 NUMBER(10,0) NULL,
contracted_by VARCHAR2(50) NULL,
contract_version VARCHAR2(5) NULL,
bill_address_id NUMBER(10,0) NULL
);
I should mention this wasn't a problem in Oracle 9i, but when we switched to 11g it became a problem.
|
[
"My advice would be to NOT use NATURAL JOIN. Explicitly define your join conditions to avoid confusion and \"hidden bugs\". Here is the official NATURAL JOIN Oracle documentation and more discussion about this subject. \n",
"If it happens exactly as you say then it must be an optimiser bug, you should report it to Oracle.\n",
"you should try a count(*)\nThere is a difference between the two.\ncount(1) signifies count rows where 1 is not null\ncount(*) signifies count the rows\n",
"Just noticed you used 2 natural joins...\nFrom the documentation you can only use a natural join on 2 tables\nNatural_Join\n"
] |
[
9,
2,
1,
1
] |
[] |
[] |
[
"natural_join",
"oracle"
] |
stackoverflow_0000103389_natural_join_oracle.txt
|
Q:
How do I run a batch file on startup for a Win x64 machine?
I know you can use autoexnt to run a batch file on startup for Windows XP, but that only seems to work for 32-bit machines. I'm running Windows XP x64 on a box, and I need to have a script run on startup (without anyone logging in). Any ideas?
Thanks for the help.
A:
Can also use local computer policy to configure startup and shutdown scripts.
http://vlaurie.com/computers2/Articles/group_policy_editor.htm
Has a good walkthrough of how to do it.
A:
In your registry, accessible through "regedit" you can navigate to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Add a REG_SZ type entry; it doesn't matter what the value name is really, but as the value data give the fully qualified path name to your program or batch file.
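For instance, from a command prompt that could be done with reg.exe; the value name and the script path below are only placeholders:
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v MyStartupScript /t REG_SZ /d "C:\scripts\startup.bat"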
A:
On startup meaning Login, or on startup meaning (before anyone logs in)?
On login, you could just put a BAT in your Startup folder.
|
How do I run a batch file on startup for a Win x64 machine?
|
I know you can use autoexnt to run a batch file on startup for Windows XP, but that only seems to work for 32-bit machines. I'm running Windows XP x64 on a box, and I need to have a script run on startup (without anyone logging in). Any ideas?
Thanks for the help.
|
[
"Can also use local computer policy to configure startup and shutdown scripts.\nhttp://vlaurie.com/computers2/Articles/group_policy_editor.htm\nHas a good walkthrough of how to do it.\n",
"In your registry, accessible through \"regedit\" you can navigate to the following key:\nHKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run\nAdd a Reg_sz type entry, doesn't matter what the key name is really, but as the value give the fully qualified path name to your program or batch file. \n",
"On startup meaning Login, or on startup meaning (before anyone logs in)?\nOn login, you could just put a BAT in your Startup folder.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"64_bit",
"startupscript"
] |
stackoverflow_0000103842_64_bit_startupscript.txt
|
Q:
What is the preferred practice for event arguments provided by custom events?
In regards to custom events in .NET, what is the preferred design pattern for passing event arguments? Should you have a separate EventArgs derived class for each event that can be raised, or is it acceptable to have a single class for the events if they are all raised by the same class?
A:
I typically create a base EventArgs class that has common data for each event. If a particular event has more data associated with it, I create a subclass for that event; otherwise I just use the base class.
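As a small illustration of that pattern (all the names here are invented, not from any particular framework):
using System;

public class JobEventArgs : EventArgs
{
    public JobEventArgs(string jobName) { JobName = jobName; }
    public string JobName { get; private set; }
}

// Subclass only for the event that carries extra data.
public class JobFailedEventArgs : JobEventArgs
{
    public JobFailedEventArgs(string jobName, Exception error) : base(jobName) { Error = error; }
    public Exception Error { get; private set; }
}

public class JobRunner
{
    public event EventHandler<JobEventArgs> JobStarted;      // base args are enough here
    public event EventHandler<JobFailedEventArgs> JobFailed; // richer args for failures

    protected virtual void OnJobFailed(JobFailedEventArgs e)
    {
        EventHandler<JobFailedEventArgs> handler = JobFailed;
        if (handler != null) handler(this, e);
    }
}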
A:
You don't need to have a separate EventArgs derived class for each event. It's perfectly acceptable and even desirable to use existing EventArgs-derived classes rather than reinventing the wheel.
These could be existing framework classes (e.g. System.ComponentModel.CancelEventArgs if all you want to do is give the event handler the possibility to cancel an action).
Or you can create your own EventArgs-derived classes if you have data specific to your application to pass to event handlers. There is no reason why two events from the same class or different classes shouldn't use the same EventArgs-derived class if they are sending the same data.
A:
It depends on what the events are, but for the most part, for the sake of whoever is going to consuming your events, create a single custom class deriving from EventArgs.
A:
I would, like OAB, create a custom 'base' args class that extends EventArgs by adding data specific to the component or application I use it in. E.g. in an accounting export application, my base ExportEventArgs would add an AccountNo property.
|
What is the preferred practice for event arguments provided by custom events?
|
In regards to custom events in .NET, what is the preferred design pattern for passing event arguments? Should you have a separate EventArgs derived class for each event that can be raised, or is it acceptable to have a single class for the events if they are all raised by the same class?
|
[
"I typically create a base EventArgs class that has common data for each event. If a particular event has more data associated with it, I create a subclass for that event; otherwise I just use the base class.\n",
"You don't need to have a separate EventArgs derived class for each event. It's perfectly acceptable and even desirable to use existing EventArgs-derived classes rather than reinventing the wheel.\nThese could be existing framework classes (e.g. System.Component.CancelEventArgs if all you want to do is give the event handler the possibility to cancel an action.\nOr you can create your own EventArgs-derived classes if you have data specific to your application to pass to event handlers. There is no reason why two events from the same class or different classes shouldn't use the same EventArgs-derived class if they are sending the same data.\n",
"It depends on what the events are, but for the most part, for the sake of whoever is going to consuming your events, create a single custom class deriving from EventArgs.\n",
"I would, like OAB, create a custom 'base' args class that extends EventArgs by adding data specific to the component or application I use it in. E.g. in an accounting export application, my base ExportEventArgs would add an AccountNo property. \n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
".net",
"design_patterns"
] |
stackoverflow_0000102052_.net_design_patterns.txt
|
Q:
How does debug level (0-99) in the Tomcat server.xml affect speed?
The server.xml which controls the startup of Apache Tomcat's servlet container contains a debug attribute for nearly every major component. The debug attribute is more or less verbose depending upon the number you give it, zero being least and 99 being most verbose. How does the debug level affect Tomcat's speed when servicing large numbers of users? I assume zero is fast and 99 is relatively slower, but is this true? If there are no errors being thrown, does it matter?
A:
Extensive logging takes a significant amount of time. This is why it is so important to put
if (log.isDebugEnabled())
log.debug(bla_bla_bla);
so I would say that setting your production server to be verbose would seriously affect performance. I assume it's a production server you're talking about since you say it must service a large number of users.
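For a concrete illustration of why the guard matters, here is a minimal, self-contained sketch using java.util.logging; the class name, logger name and the "expensive dump" method are made-up placeholders, and the same idea applies to the log4j/commons-logging style shown above.
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLoggingExample {

    // Hypothetical logger name; a real application would normally use the class name.
    private static final Logger LOG = Logger.getLogger("example.request");

    public static void handleRequest(String requestId) {
        // The guard prevents the expensive message from being built at all
        // when FINE (debug-level) logging is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Handling " + requestId + ": " + buildExpensiveDebugDump(requestId));
        }
        // ... actual request handling would go here ...
    }

    // Stands in for any costly string construction (dumping headers, session state, etc.).
    private static String buildExpensiveDebugDump(String requestId) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(requestId).append(':').append(i).append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // FINE is disabled, so the dump is never built
        handleRequest("req-42");
    }
}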
A:
Logging is not only responsible for giving you errors, but also for tracking of what's going on. In some cases, code cannot run inside a debugger, then logging is your only option.
This is why logging output can be extremely verbose. And I really mean that. I remember setting Catalina's loglevel to TRACE once and ended up with a several megabyte logfile. That was before the server received any hits at all. It was a huge performance hog. Countable in several seconds.
If you don't need logging for Tomcat itself, don't activate it on any of its components. You will typically only want to tinker with Tomcat's loglevel if you suspect a bug in either your setup or Tomcat itself.
For your own applications, measure the logging cost using a profiler or just some stress testing. Whatever your results, I would recommend against running an application with a high loglevel setting in a production environment. My current project dumps about a megabyte per request at TRACE setting, only about three to four lines on INFO and nothing on WARNING (iff everything goes well :-). I recommend not more than the most necessary logging. Your app should really just report startup, shutdown and failure, and - at most - one line per request.
|
How does debug level (0-99) in the Tomcat server.xml affect speed?
|
The server.xml which controls the startup of Apache Tomcat's servlet container contains a debug attribute for nearly every major component. The debug attribute is more or less verbose depending upon the number you give it, zero being least and 99 being most verbose. How does the debug level affect Tomcat's speed when servicing large numbers of users? I assume zero is fast and 99 is relatively slower, but is this true? If there are no errors being thrown, does it matter?
|
[
"Extensive logging takes a significant amount of time. This is why it is so important to put\nif (log.isDebugEnabled())\n log.debug(bla_bla_bla);\n\nso I would say that seting your production server to being verbose would seriously affect performance. I assume it's a production server you're talking about since you say it must service a large number of users.\n",
"Logging is not only responsible for giving you errors, but also for tracking of what's going on. In some cases, code cannot run inside a debugger, then logging is your only option.\nThis is why logging output can be extremely verbose. And I really mean that. I remember setting Catalina's loglevel to TRACE once and ended up with a several megabyte logfile. That was before the server received any hits at all. It was a huge performance hog. Countable in several seconds.\nIf you don't need logging for Tomcat itself, don't activate it on any of its components. You will typically only want to tinker with Tomcat's loglevel if you suspect a bug in either your setup or Tomcat itself.\nFor your own applications, measure the logging cost using a profiler or just some stress testing. Whatever your results, I would recommend against running an application with a high loglevel setting in a production environment. My current project dumps about a megabyte per request at TRACE setting, only about three to four lines on INFO and nothing on WARNING (iff everything goes well :-). I recommend not more than the most necessary logging. Your app should really just report startup, shutdown and failure, and - at most - one line per request.\n"
] |
[
2,
2
] |
[] |
[] |
[
"apache",
"java",
"servlets",
"tomcat"
] |
stackoverflow_0000102058_apache_java_servlets_tomcat.txt
|
Q:
How do I set an Application's Icon Globally in Swing?
I know I can specify one for each form, or for the root form and then it'll cascade through to all of the children forms, but I'd like to have a way of overriding the default Java Coffee Cup for all forms even those I might forget.
Any suggestions?
A:
You can make the root form (by which I assume you mean JFrame) be your own subclass of JFrame, and put standard functionality in its constructor, such as:
this.setIconImage(STANDARD_ICON);
You can bundle other standard stuff in here too, such as memorizing the frame's window metrics as a user preference, managing splash panes, etc.
Any new frames spawned by this one would also be instances of this JFrame subclass. The only thing you have to remember is to instantiate your subclass, instead of JFrame. I don't think there's any substitute for remembering to do this, but at least now it's a matter of remembering a subclass instead of a setIconImage call (among possibly other features).
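For illustration, a minimal sketch of such a subclass (the class name and icon path here are just assumptions):
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

// Hypothetical application-wide base frame; every window extends this instead of JFrame.
public class AppFrame extends JFrame {

    public AppFrame(String title) {
        super(title);
        // Assumed icon location on the classpath; skipped quietly if the resource is missing.
        java.net.URL iconUrl = AppFrame.class.getResource("/images/app-icon.png");
        if (iconUrl != null) {
            setIconImage(new ImageIcon(iconUrl).getImage());
        }
        setDefaultCloseOperation(DISPOSE_ON_CLOSE);
    }
}

// Example usage:
class Demo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                AppFrame frame = new AppFrame("Demo");
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        });
    }
}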
A:
There is another way, but it's more of a "hack" than a real fix....
If you are distributing the JRE with your Application, you could replace the coffee cup icon resource in the java exe/dll/rt.jar wherever that is with your own icon. It might not be very legit, but it is a possibility...
A:
Also, if you have one "main" window, and set its icon properly, as long as you use that main window as the "parent" for any Dialog classes, they will inherit the icon. Any new Frames need to have the icon set on them, though.
as Paul/Andreas said, subclassing JFrame is going to be your best bet.
A:
Extend the JDialog class (for example name it MyDialog) and set the icon in constructor. Then all dialogs should extend your implementation (MyDialog).
|
How do I set an Application's Icon Globally in Swing?
|
I know I can specify one for each form, or for the root form and then it'll cascade through to all of the children forms, but I'd like to have a way of overriding the default Java Coffee Cup for all forms even those I might forget.
Any suggestions?
|
[
"You can make the root form (by which I assume you mean JFrame) be your own subclass of JFrame, and put standard functionality in its constructor, such as:\nthis.setIconImage(STANDARD_ICON);\n\nYou can bundle other standard stuff in here too, such as memorizing the frame's window metrics as a user preference, managing splash panes, etc.\nAny new frames spawned by this one would also be instances of this JFrame subclass. The only thing you have to remember is to instantiate your subclass, instead of JFrame. I don't think there's any substitute for remembering to do this, but at least now it's a matter of remembering a subclass instead of a setIconImage call (among possibly other features).\n",
"There is another way, but its more of a \"hack\" then a real fix....\nIf you are distributing the JRE with your Application, you could replace the coffee cup icon resource in the java exe/dll/rt.jar wherever that is with your own icon. It might not be very legit, but it is a possibility...\n",
"Also, if you have one \"main\" window, and set its icon properly, as long as you use that main window as the \"parent\" for any Dialog classes, they will inherit the icon. Any new Frames need to have the icon set on them, though.\nas Paul/Andreas said, subclassing JFrame is going to be your best bet.\n",
"Extend the JDialog class (for example name it MyDialog) and set the icon in constructor. Then all dialogs should extend your implementation (MyDialog).\n"
] |
[
9,
2,
1,
0
] |
[] |
[] |
[
"java",
"swing"
] |
stackoverflow_0000103179_java_swing.txt
|
Q:
What is a javascript hash table implementation that avoids object namespace collisions?
First off: I'm using a rather obscure implementation of javascript embedded as a scripting engine for Adobe InDesign CS3. This implementation sometimes diverges from "standard" javascript, hence my problem.
I'm using John Resig's jsdiff library (source here) to compare selections of text between two documents. jsdiff uses vanilla objects as associative arrays to map a word from the text to another object. (See the "ns" and "os" variables in jsdiff.js, around line 129.)
My headaches start when the word "reflect" comes up in the text. "reflect" is a default, read-only property on all objects. When jsdiff tries to assign a value on the associative array to ns['reflect'], everything explodes.
My question: is there a way around this? Is there a way to do a hash table in javascript without using the obvious vanilla object?
Ground rules: switching scripting engines isn't an option. :)
A:
You might be "asking the wrong question" (as Raymond Chen would say); rather than trying to avoid using the vanilla objects, try changing the way the associative array members are named.
The way I'd try to approach this: instead of there being an array member ns["reflect"], change the way that jsdiff builds the arrays so that the member is ns["_reflect"] or some other variation on that.
A:
If the JS implementation you're using supports the hasOwnProperty method for objects, you can use it to test whether a property has explicitly been set for an object or the property is inherited from its prototype. Example:
if(object.hasOwnProperty('testProperty')){
// do something
}
A:
Well, given that objects in javascript are just associative arrays, there really isn't another built-in solution for a hash. You might be able to create your own pseudo hash table by wrapping a class around some arrays, although there will probably be a significant performance hit from the manual work involved.
Just a side note: I haven't really used or looked at the jsdiff library, so I can't offer any specific tips or tricks.
|
What is a javascript hash table implementation that avoids object namespace collisions?
|
First off: I'm using a rather obscure implementation of javascript embedded as a scripting engine for Adobe InDesign CS3. This implementation sometimes diverges from "standard" javascript, hence my problem.
I'm using John Resig's jsdiff library (source here) to compare selections of text between two documents. jsdiff uses vanilla objects as associative arrays to map a word from the text to another object. (See the "ns" and "os" variables in jsdiff.js, around line 129.)
My headaches start when the word "reflect" comes up in the text. "reflect" is a default, read-only property on all objects. When jsdiff tries to assign a value on the associative array to ns['reflect'], everything explodes.
My question: is there a way around this? Is there a way to do a hash table in javascript without using the obvious vanilla object?
Ground rules: switching scripting engines isn't an option. :)
|
[
"You might be \"asking the wrong question\" (as Raymond Chen would say); rather than trying to avoid using the vanilla objects, try changing the way the associative array members are named.\nThe way I'd try to approach this: instead of there being an array member ns[\"reflect\"], change the way that jsdiff builds the arrays so that the member is ns[\"_reflect\"] or some other variation on that.\n",
"If the JS implementation you're using supports the hasOwnProperty method for objects, you can use it to test whether a property has explicitly been set for an object or the property is inherited from its prototype. Example:\nif(object.hasOwnProperty('testProperty')){\n // do something\n}\n\n",
"Well given objects in javascript are just associative arrays, there really isn't another built in solution for a hash. You might be able to create your own psuedo hashtable by wrapping a class around some arrays although there will probably be a significant performance hit with the manual work involved.\nJust a side note I haven't really used or looked at the jsdiff library so I can't offer any valid insight as per tips or tricks.\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"adobe_indesign",
"diff",
"hash",
"javascript"
] |
stackoverflow_0000103679_adobe_indesign_diff_hash_javascript.txt
|
Q:
Java code to import CSV into Access
I posted the code below to the Sun developers forum since I thought it was erroring (the true error was before this code was even hit). One of the responses I got said it would not work and to throw it away. But it is actually working. It might not be the best code (I am new to Java) but is there something inherently "wrong" with it?
=============
CODE:
private static void ImportFromCsvToAccessTable(String mdbFilePath, String accessTableName
, String csvDirPath , String csvFileName ) throws ClassNotFoundException, SQLException {
Connection msConn = getDestinationConnection(mdbFilePath);
try{
String strSQL = "SELECT * INTO " + accessTableName + " FROM [Text;HDR=YES;DATABASE=" + csvDirPath + ";].[" + csvFileName + "]";
PreparedStatement selectPrepSt = msConn.prepareStatement(strSQL );
boolean result = selectPrepSt.execute();
System.out.println( "result = " + result );
} catch(Exception e) {
System.out.println(e);
} finally {
msConn.close();
}
}
A:
The literal answer is no - there is never anything "inherently wrong" with code, it's a matter of whether it meets the requirements - which may or may not include being maintainable, secure, robust or fast.
The code you are running is actually a JET query purely within Access - the Java code is doing nothing except telling Access to run the query.
On the one hand, if it ain't broke don't fix it. On the other hand, there's a good chance it will break in the near future so you could try fixing it in advance.
The two likely reasons it might break are:
SQL injection risk. Depending on where csvDirPath and csvFileName come from (e.g. csvFileName might come from the name of the file uploaded by a user?), and on how clever the Access JDBC driver is, you could be open to someone breaking or deleting your data by inserting a semicolon (or some brackets to make a subquery) and some additional SQL commands into the query.
You are relying on the columns of the CSV file being compatible with the columns of the Access table. If you have unchecked CSV being uploaded, or if the CSV generator has a particular way of handling nulls, or if you one day get an unusual date or number format, you may get an error on inserting into the Access table.
Having said all that, we are all about pragmatism here. If the above code is from a utility class which you are going to use by hand a few times a week/month/year/ever, then it isn't really a problem.
If it is a class which forms part of a web application, then the 'official' Java way to do it would be to read records out of the CSV file (either using a CSV parser or a CSV/text JDBC driver), get the columns out of the recordset, do some validation or sanity checking on them, and then use a new PreparedStatement to insert them into the Access database. Much more trouble but much more robust.
You can probably find a combination of tools (e.g. object-relational layers or other data access tools) which will do a lot of that for you, but setting up the tools is going to be as much hassle as writing the code. Then again, you'll learn a lot from either one.
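As a rough sketch of that approach (the table name, column layout, file paths and the DSN-less JDBC-ODBC connection string below are assumptions, and the naive split would need a real CSV parser if fields can contain commas or quotes):
import java.io.BufferedReader;
import java.io.FileReader;
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CsvToAccess {
    public static void main(String[] args) throws Exception {
        // Assumed locations and a two-column target table (Name, Amount); adjust to your schema.
        String mdbPath = "C:/data/target.mdb";
        String csvPath = "C:/data/input.csv";

        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // JDBC-ODBC bridge (32-bit Windows only)
        Connection conn = DriverManager.getConnection(
                "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=" + mdbPath);
        PreparedStatement insert =
                conn.prepareStatement("INSERT INTO MyTable (Name, Amount) VALUES (?, ?)");
        BufferedReader reader = new BufferedReader(new FileReader(csvPath));
        try {
            String line = reader.readLine(); // skip the header row
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",");  // naive; assumes no embedded commas
                insert.setString(1, fields[0].trim());
                insert.setBigDecimal(2, new BigDecimal(fields[1].trim()));
                insert.executeUpdate();             // parameters protect against SQL injection
            }
        } finally {
            reader.close();
            insert.close();
            conn.close();
        }
    }
}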
A:
One word of warning - jdbc -> Access queries (which bridge using odbc) do not work on 64 bit systems, as there are no 64 bit Access database drivers. (The driver is included in 32 bit copies of Windows and can only be accessed by 32 bit processes. You can run "odbcad32" or look at the ODBC control panel to see that the driver is present.)
While I don't see the code with the connection string in your code snippet, I am not aware of any noncommercial Access JDBC drivers for Java, only jdbc->odbc bridging and relying on Windows to have the Access (*.mdb) driver. Microsoft no longer supports this driver and has no plans to port it to 64-bit, so infrastructure-wise it is something to think about.
A:
@david.w.fenton.myopenid.com: "Can you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet?"
David, I found a post on Microsoft's Connect Feedback about that.
http://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=125117
"At the moment there are no plans to ship a 64-bit version of JET driver by Office team. We may considere alternate options and will update you when we have a concrete plan."
Thanks,
SSIS team.
Posted by Microsoft on 10/3/2007 at 9:47 PM
There's been no update from Microsoft in that feedback thread.
A:
Question to Joshua McKinnon:
Can you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet? This sounds reasonable, so I'm not doubting you at all, I would just like to know if you have a source for it that you can point to.
Surely MS is providing access to Jet on 64-bit systems through OLEDB, though, right? That doesn't help with JDBC, but certainly provides a method to use Jet data (they have to provide something, since Jet 4 is part of the OS, as it is used as the data store for Active Directory, and has been used thus since Windows 2000).
|
Java code to import CSV into Access
|
I posted the code below to the Sun developers forum since I thought it was erroring (the true error was before this code was even hit). One of the responses I got said it would not work and to throw it away. But it is actually working. It might not be the best code (I am new to Java) but is there something inherently "wrong" with it?
=============
CODE:
private static void ImportFromCsvToAccessTable(String mdbFilePath, String accessTableName
, String csvDirPath , String csvFileName ) throws ClassNotFoundException, SQLException {
Connection msConn = getDestinationConnection(mdbFilePath);
try{
String strSQL = "SELECT * INTO " + accessTableName + " FROM [Text;HDR=YES;DATABASE=" + csvDirPath + ";].[" + csvFileName + "]";
PreparedStatement selectPrepSt = msConn.prepareStatement(strSQL );
boolean result = selectPrepSt.execute();
System.out.println( "result = " + result );
} catch(Exception e) {
System.out.println(e);
} finally {
msConn.close();
}
}
|
[
"The literal answer is no - there is never anything \"inherently wrong\" with code, it's a matter of whether it meets the requirements - which may or may not include being maintainable, secure, robust or fast.\nThe code you are running is actually a JET query purely within Access - the Java code is doing nothing except telling Access to run the query.\nOn the one hand, if it ain't broke don't fix it. On the other hand, there's a good chance it will break in the near future so you could try fixing it in advance.\nThe two likely reasons it might break are:\n\nSQL injection risk. Depending on where csvDirPath and csvFileName come from (e.g. csvFileName might come from the name of the file uploaded by a user?), and on how clever the Access JDBC driver is, you could be open to someone breaking or deleting your data by inserting a semicolon (or some brackets to make a subquery) and some additional SQL commands into the query.\nYou are relying on the columns of the CSV file being compatible with the columns of the Access table. If you have unchecked CSV being uploaded, or if the CSV generator has a particular way of handling nulls, or if you one day get an unusual date or number format, you may get an error on inserting into the Access table.\n\nHaving said all that, we are all about pragmatism here. If the above code is from a utility class which you are going to use by hand a few times a week/month/year/ever, then it isn't really a problem.\nIf it is a class which forms part of a web application, then the 'official' Java way to do it would be to read records out of the CSV file (either using a CSV parser or a CSV/text JDBC driver), get the columns out of the recordset, do some validation or sanity checking on them, and then use a new PreparedStatement to insert them into the Access database. Much more trouble but much more robust.\nYou can probably find a combination of tools (e.g. object-relational layers or other data access tools) which will do a lot of that for you, but setting up the tools is going to be as much hassle as writing the code. Then again, you'll learn a lot from either one.\n",
"One word of warning - jdbc -> Access queries (which bridge using odbc) do not work on 64 bit systems, as there exist no 64 bit Access database drivers (The driver is included into 32 bit copies of Windows and can only be accessed by 32 bit processes. You can run \"odbcad32\" or look at the ODBC control panel to see that the driver is present)\nWhile I don't see the code with the connection string in your code snippet, I am not aware of any noncommercial Access JDBC drivers for Java, only jdbc->odbc bridging and relying on Windows to have the Access (*.mdb) driver. Microsoft no longer supports this driver and has no plans to port it to 64bit, so infrastructure wise it is something to think about.\n",
"@david.w.fenton.myopenid.com: \"Can you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet?\"\nDavid, I found a post on Microsoft's Connect Feedback about that.\nhttp://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=125117\n\"At the moment there are no plans to ship a 64-bit version of JET driver by Office team. We may considere alternate options and will update you when we have a concrete plan.\"\nThanks,\nSSIS team.\n Posted by Microsoft on 10/3/2007 at 9:47 PM\nThere's been no update from Microsoft in that feedback thread.\n",
"Question to Joshua McKinnon:\nCan you provide a citation about MS's plans to never introduce 64-bit ODBC drivers for Jet? This sounds reasonable, so I'm not doubting you at all, I would just like to know if you have a source for it that you can point to.\nSurely MS is providing access to Jet on 64-bit systems through OLEDB, though, right? That doesn't help with JDBC, but certainly provides a method to use Jet data (they have to provide something, since Jet 4 is part of the OS, as it is used as the data store for Active Directory, and has been used thus since Windows 2000).\n"
] |
[
6,
2,
1,
0
] |
[] |
[] |
[
"csv",
"java",
"ms_access"
] |
stackoverflow_0000030696_csv_java_ms_access.txt
|
Q:
How should I move queued messages from IIS to Exchange on different servers?
We currently have a company email server with Exchange, and a bulk email processing server that is using IIS SMTP. We are upgrading to a 3rd party MTA (zrinity xms) for bulk sending. I need to be able to keep sending the messages already queued for IIS when we switch to the 3rd party software. Can I simply move the IIS queue files to the Exchange server queue and have sending attempts begin automatically for them? If not, any suggestions on accomplishing this?
A:
You should be able to move the *.eml files to the Exchange server's pickup directory. Or set the IIS SMTP service to smart host to the new MTA, assuming they (the 3rd party) allow SMTP relay from your IP address.
A:
Moving the files will work. However, any email with a BCC line in the header will get sent out with the BCC intact. Some clients, such as gmail, will display the information to the recipient, thus breaking the whole point of BCC.
This happens when copying EML files to MS-SMTP (which Exchange also uses) because the BCC information is usually stripped out of the header during the SMTP hand-off to (not from) MS-SMTP.
If that was how the messages were initially handed off, then it's possible that the EMLs you have were already broken into separate messages for each BCC, and that header was properly stripped.
Just a little gotcha to watch out for.
|
How should I move queued messages from IIS to Exchange on different servers?
|
We currently have a company email server with Exchange, and a bulk email processing server that is using IIS SMTP. We are upgrading to a 3rd party MTA (zrinity xms) for bulk sending. I need to be able to keep sending the messages already queued for IIS when we switch to the 3rd party software. Can I simply move the IIS queue files to the Exchange server queue and have sending attempts begin automatically for them? If not, any suggestions on accomplishing this?
|
[
"You should be able to move the *.eml files to the Exchange server's pickup directory. Or set the IIS SMTP service to smart host to the new MTA, assuming they (the 3rd party) allow SMTP relay from your IP address.\n",
"Moving the files will work. However, any email with a BCC line in the header will get sent out with the BCC intact. Some clients, such as gmail, will display the information to the recipient, thus breaking the whole point of BCC.\nThis happens when copying EML files to MS-SMTP (which Exchange also uses) because the BCC information is usually stripped out of the header in during the SMTP hand-off to (not from) MS-SMTP.\nIf that was how the messages were initially handed off, then it's possible that the EMLs you have were already broken into separate messages for each BCC, and that header was properly stripped.\nJust a little gotcha to watch out for.\n"
] |
[
3,
2
] |
[] |
[] |
[
"email",
"exchange_server",
"iis",
"smtp"
] |
stackoverflow_0000102647_email_exchange_server_iis_smtp.txt
|