Q:
What is a Spaced Out Word™?
This is my first try at a puzzle, so hopefully it at least lasts a few hours before someone figures it out :)
This is in the spirit of the What is a Word/Phrase™ series started by JLee with a special brand of Phrase™ and Word™ puzzles.
If a word conforms to a special rule, I call it a Spaced Out Word™.
Use the examples below to find the rule.
And, if you want to analyze, here is a CSV version:
Spaced Out Words™,Not Spaced Out Words™
HUMILIATE,DEGRADE
HEXAGONALLY,OCTAGONALLY
DUNGEON,CATACOMB
NEFARIOUSLY,HEINOUSLY
NONSPHERICAL,NONCUBIC
WOOZINESS,DIZZINESS
PRECOGNITION,CLAIRVOYANCE
EXTENSION,ANNEX
UPDATE:
It occurred to me that, even if a puzzler were to reverse-engineer my criteria from the words above, they may not be able to fully determine the rules. To help, here are some additional words that are Not Spaced Out Words™ to narrow down the criteria:
CELEBRATION
WEATHERMEN
MANIPULATING
ADORINGLY
It's less of a hint and more of a clarification, but I put them in spoiler text for those who may want to solve the puzzle as it was originally written (and maybe refer back to these after, for verification).
ADDITIONAL WORDS
Below are a few more Spaced Out Words™ and Not Spaced Out Words™:
Spaced Out Words™,Not Spaced Out Words™
AROUSING,EXCITING
ADJOURN,RECESS
And a few more if you need more. I think this just about exhausts the list...
Spaced Out Words™,Not Spaced Out Words™
POVERTYSTRICKEN,IMPOVERISHED
UNCOPYRIGHTED,PUBLICDOMAIN
(yes, that last Not Spaced Out Word™ is technically two words)
A:
The solution is that Spaced Out Words™ contain the letters for
Homonuclear gases at room temperature.
Word™ Element
----------------------------
HUMILIATE HELIUM
HEXAGONALLY OXYGEN
DUNGEON NEON
NEFARIOUSLY FLUORINE
NONSPHERICAL CHLORINE
WOOZINESS OZONE
PRECOGNITION NITROGEN
EXTENSION XENON
AROUSING ARGON
ADJOURN RADON
POVERTYSTRICKEN KRYPTON
UNCOPYRIGHTED HYDROGEN
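The stated rule ("contain the letters for" a gas) can be sanity-checked mechanically. Here is a small sketch in Python (the helper name is my own) that tests whether a word has enough of each letter to spell its gas:

```python
from collections import Counter

def contains_letters(word, target):
    """True if `word` contains at least as many of each letter as `target` needs."""
    need = Counter(target)
    have = Counter(word)
    return all(have[letter] >= count for letter, count in need.items())

# A few of the pairs from the solution table above
pairs = [
    ("HUMILIATE", "HELIUM"),
    ("HEXAGONALLY", "OXYGEN"),
    ("DUNGEON", "NEON"),
    ("EXTENSION", "XENON"),
]
for word, gas in pairs:
    print(word, gas, contains_letters(word, gas))  # each prints True
```

Note this checks letter counts only (e.g. DUNGEON needs both N's of NEON), not letter order.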
| {
"pile_set_name": "StackExchange"
} |
Q:
How to dynamically import SVG and render it inline
I have a function that takes some arguments and renders an SVG. I want to dynamically import that svg based on the name passed to the function. It looks like this:
import React from 'react';
export default async ({name, size = 16, color = '#000'}) => {
const Icon = await import(/* webpackMode: "eager" */ `./icons/${name}.svg`);
return <Icon width={size} height={size} fill={color} />;
};
According to the webpack documentation for dynamic imports and the magic comment "eager":
"Generates no extra chunk. All modules are included in the current
chunk and no additional network requests are made. A Promise is still
returned but is already resolved. In contrast to a static import, the
module isn't executed until the call to import() is made."
This is what my Icon is resolved to:
> Module
default: "static/media/antenna.11b95602.svg"
__esModule: true
Symbol(Symbol.toStringTag): "Module"
Trying to render it the way my function is trying to gives me this error:
Objects are not valid as a React child (found: [object Promise]). If you meant to render a collection of children, use an array instead.
I don't understand how to use this imported Module to render it as a component, or is it even possible this way?
A:
Unfortunately, it appears to be a webpack configuration issue, and I've seen a bug report on create-react-app's repo as well. It was my mistake not to test it outside of CodeSandbox; sorry for any misleading information. Regardless, it will probably work once the configuration issue has been fixed by the CRA team. Thank you.
Finally figured out how to do this. You can make use of a ref and the ReactComponent named export when importing an SVG file. Note that it has to be a ref; I've tested using state to store the imported SVG ReactComponent and it does not work.
Sample Dynamic SVG component:
const Icon = ({ name, ...rest }) => {
const ImportedIconRef = React.useRef(null);
const [loading, setLoading] = React.useState(false);
React.useEffect(() => {
setLoading(true);
const importIcon = async () => {
try {
ImportedIconRef.current = (await import(`./${name}.svg`)).ReactComponent;
} catch (err) {
// Your own error handling logic, throwing error for the sake of
// simplicity
throw err;
} finally {
setLoading(false);
}
};
importIcon();
}, [name]);
if (!loading && ImportedIconRef.current) {
const { current: ImportedIcon } = ImportedIconRef;
return <ImportedIcon {...rest} />;
}
return null;
};
You can implement your own error handling logic as well. Maybe bugsnag or something.
Working CodeSandbox Demo:
For the TypeScript fans out there, here's an example in TypeScript.
interface IconProps extends React.SVGProps<SVGSVGElement> {
name: string;
}
const Icon: React.FC<IconProps> = ({ name, ...rest }): JSX.Element | null => {
const ImportedIconRef = React.useRef<
React.FC<React.SVGProps<SVGSVGElement>>
>();
const [loading, setLoading] = React.useState(false);
React.useEffect((): void => {
setLoading(true);
const importIcon = async (): Promise<void> => {
try {
ImportedIconRef.current = (await import(`./${name}.svg`)).ReactComponent;
} catch (err) {
// Your own error handling logic, throwing error for the sake of
// simplicity
throw err;
} finally {
setLoading(false);
}
};
importIcon();
}, [name]);
if (!loading && ImportedIconRef.current) {
const { current: ImportedIcon } = ImportedIconRef;
return <ImportedIcon {...rest} />;
}
return null;
};
Working CodeSandbox Demo:
A:
Your rendering functions (for class components) and function components should not be async (because they must return DOMNode or null - in your case, they return a Promise). Instead, you could render them in the regular way, after that import the icon and use it in the next render. Try the following:
const Test = () => {
  const [icon, setIcon] = useState('');
  useEffect(() => {
    // useEffect callbacks must not be async (they may only return a cleanup
    // function), so define and invoke an inner async function instead
    const loadIcon = async () => {
      const importedIcon = await import('your_path');
      setIcon(importedIcon.default);
    };
    loadIcon();
  }, []);
  return <img alt='' src={icon}/>;
};
Q:
Web-based WYSIWYG Markdown editor with commenting?
My question is similar to this one, but with (hopefully) a clearly defined use case: sharing and reviewing basic tech docs with a minimal, intuitive GUI.
I'm producing technical documents using MS Word. These docs need to be reviewed, commented and have minimal additions & changes to their content, e.g. text, headings, tables.
I want to both
1. Track the changes
2. Accept/reject per change
In between focussing on getting the content right, a lot of issues come up (too many to list) around formatting, versioning, application compatibility etc.
It would be great if I could sidestep these and just
maintain a single source in Markdown while
providing a WYSIWYG "user interface" that only allows minimal interactions, i.e.:
changing/adding/deleting text/numbers/tables
commenting particular areas of text
In order to allow "users" to do these things without being in the Markdown workflow behind the scenes, ideally this would be a Web-based editor with change tracking that outputs [some flavour of] Markdown.
Anyone know of such a thing? Or something that fits my requirements [explicit or as yet unconscious ;) ] even better?
UPDATE:
pt. 1
This (hallo.js) is very close to what I'm talking about, I just need more headings (H3 etc.), commenting, visible change tracking, and simple tables in addition.
pt 2.
This (ice) is the sort of change tracking in mind.
A:
As of 2017, Authorea might be the way to go. It has all the suggested features, and in contrast to the other answers, it is finally an editor that actually supports collaborative commenting.
As a plus, authors can choose whether they want to write their paragraphs in Markdown, LaTeX, HTML, or RichText.
A:
Ideally, the reviewers don’t get to do any editing, no matter what the workflow. Even if they just want to see a small change like uppercasing a word, it is better if they just put in a comment and you make the changes. Or not, if you have a reason to reject the proposed change. That way you can maintain strict control of your master document. Again, no matter what the workflow, but especially with Markdown, where your manuscript is a letter-perfect source code for building published HTML or RTF or PDF.
So your workflow could be:
write document in Markdown
convert Markdown to RTF for Microsoft Word or PDF for Acrobat
distribute Word or PDF document to reviewers who only modify the document with comments
receive reviewed Word or PDF document
review each comment in turn and apply changes to original Markdown document
if another review round is needed, go to step 2
convert finished Markdown to HTML for publishing
Generally speaking, editing privileges are for co-writers or editors to make extensive changes, but minor changes should just be made as comments. If they have additional content like a table or photo, they can just put a comment “table X goes here” and send you the table in an Excel document or wherever they made it. So you don’t necessarily need a complicated workflow.
A:
Etherpad.
It has
Commenting PLUS change proposals PLUS attribution
Output as plaintext (even better than Markdown for the workflow)
Collaboration in realtime with color coding
Minimal WYSIWYG
A local installation for security
Q:
Setting R environmental variable in Tortoise SVN
I have a collection of functions in a file called some_functions.R, saved in an SVN directory at C:\blah1\blah2\Rcodes\some_functions.R. I have several R projects that use this code file. Say an R project is available in the directory C:\blah1\blah2\Rprojects\project1. I can use a hard-coded path to refer to the file and it works:
source("C:/blah1/blah2/Rcodes/some_functions.R")
But I would like to set the path as environmental variable.
Looking at How to unfold user and environment variable in R language? and setting the home directory in windows R I add the following line in the RProfile.site file
Sys.setenv(R_CODE_PATH = "C:/blah1/blah2/Rcodes")
and in the project1.Rnw file
source("R_CODE_PATH/some_functions.R")
But the project file cannot read the some_functions.R file. I tried %R_CODE_PATH% without any luck.
Not sure what I'm missing here. Any help is much appreciated.
A:
You retrieve environment variables using Sys.getenv(). Try:
r_code_path <- Sys.getenv("R_CODE_PATH")
Then, for example:
source(paste(r_code_path, "some_functions.R", sep = "/"))
I would use the .Renviron config file to define environment variables. Put it in whatever directory the R command Sys.getenv("HOME") returns and include lines like this:
R_CODE_PATH=C:/blah1/blah2/Rcodes
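Putting the two pieces together, the sourcing code could then look like this (a sketch; `file.path` is an alternative to `paste` with `sep = "/"`):

```r
# assumes .Renviron contains: R_CODE_PATH=C:/blah1/blah2/Rcodes
r_code_path <- Sys.getenv("R_CODE_PATH")
if (nzchar(r_code_path)) {
  source(file.path(r_code_path, "some_functions.R"))
} else {
  stop("R_CODE_PATH is not set; check your .Renviron")
}
```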
Q:
Applescript send variable parameters to xslt file (updatedx2)
I have an AppleScript that uses XMLTransform to transform an XML file using an external XSLT file. I would like to add a variable in the AppleScript to be picked up by the XSLT file.
The Satimage dictionary talks about the use of xsl params as part of XMLTransform, but I cannot find any examples.
Using the template XMLTransform xmlFile with xsltFile in outputpath,
how would I define the variables both in the AppleScript and in the following
XSL file example?
<xsl:stylesheet version="1.0">
<xsl:param name="vb1" value="'unknown'"/>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="xmeml/list/name">
<xsl:value-of select="{$vb1}"/>
</xsl:template>
</xsl:stylesheet>
Applescript snippet
set vb1 to "Hello" as string
XMLTransform xmlFile with xsltFile1 in outputpathtemp xsl params vb1
The current applescript returns "cannot make vb1 into record".
I am now looking at using the following but it is returning NULL in the XML
set vb1 to {s:"'hello'"}
XMLTransform xmlFile with xsltFile1 in outputpathtemp xsl string params vb1
the input is
<xmeml>
<list>
<name>IMaName</name>
<!-- other nodes -->
</list>
</xmeml>
the current output is
<xmeml>
<list>
<name/>
<!-- other nodes -->
</list>
</xmeml>
Can anyone Help please?
many thanks.
A:
Here is the Answer:
applescript
XMLTransform xmlFile with xsltFile1 in outputpathtemp xsl params {"vb1", "'hello'"}
XSLT
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="yes"/>
<xsl:param name="vb1" />
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="/xmeml/list/name">
<xsl:element name="name">
<xsl:value-of select="$vb1"/>
</xsl:element>
</xsl:template>
</xsl:stylesheet>
Thanks.
Q:
Passing objects into NSOperation and ensuring correct memory management policies
If I want to pass an object that was created on the main thread onto an NSOperation object, what's the standard way of doing so without creating any memory management issues? Should I make my object's properties not have the 'nonatomic' attribute?
Right now, I allocate the objects via [[[AClass alloc] init] autorelease], keep a copy of the instance on my main thread and then pass another copy into the NSOperation as part of an NSArray. When I try to iterate through the array's objects inside the NSOperation class and access one of AClass's properties, the debugger reports that one of the member properties of the AClass instance is already zombied while others are not. The error I'm seeing is:
-[CFString retain]: message sent to deallocated instance 0x5a8c6b0
*** -[CFString _cfTypeID]: message sent to deallocated instance 0x5a8c6b0
*** -[CFString _cfTypeID]: message sent to deallocated instance 0x5a8c6b0
I can't tell who is releasing my string properties too early but the entire object instance has not been released.
My class looks like:
@interface AClass
{
NSString *myTitle;
NSString *myDescription;
}
@property (nonatomic, retain, readonly) NSString *myTitle;
@property (nonatomic, retain, readonly) NSString *myDescription;
@end
@implementation AClass
@synthesize myTitle, myDescription;
- (void)dealloc
{
[myTitle release];
[myDescription release];
}
@end
A:
Here's an updated snippet for an efficient, 'thread-safe' version of AClass:
/**
AClass is an immutable container:
- category methods must never change the state of AClass
*/
@interface AClass : NSObject < NSCopying >
{
@private
NSString * title;
NSString * description;
}
/**
subclassing notes:
- do not override properties: title, description
- implement @protocol NSCopying
*/
/*
1) document copy on entry here, even though the compiler has no
additional work to do.
2) nonatomic in this case - these ivars initialized and never mutate.
3) readonly because they are readonly
*/
@property (copy, readonly, nonatomic) NSString * title;
@property (copy, readonly, nonatomic) NSString * description;
/* prohibited: */
- (id)init;
/* designated initializer */
- (id)initWithTitle:(NSString *)inTitle description:(NSString *)inDescription;
@end
@implementation AClass
@synthesize title;
@synthesize description;
- (id)init
{
assert(0 && "use the designated initializer");
self = [super init];
[self release];
return 0;
}
- (id)initWithTitle:(NSString *)inTitle description:(NSString *)inDescription
{
self = [super init];
assert(self && "uh oh, NSObject returned 0");
if (0 != self) {
if (0 == inTitle || 0 == inDescription) {
assert(inTitle && inDescription && "AClass: invalid argument");
[self release];
return 0;
}
/* this would catch a zombie, if you were given one */
title = [inTitle copy];
description = [inDescription copy];
if (0 == title || 0 == description) {
assert(title && description && "string failed to copy");
[self release];
return 0;
}
}
return self;
}
- (void)dealloc
{
/* which could also happen when if your init fails, but the assertion in init will be hit first */
assert(title && description && "my ivars are not meant to be modified");
[title release], title = 0;
[description release], description = 0;
/* don't forget to call through super at the end */
[super dealloc];
}
- (id)copyWithZone:(NSZone *)zone
{
assert(self.title == title && self.description == description && "the subclasser should not override the accessors");
if ([self zone] == zone && [self class] == [AClass class]) {
/*
this is one possible (optional) optimization:
- avoid using this approach if you don't entirely understand
all the outlined concepts of immutable containers and low
level memory management in Cocoa and just use the
implementation in 'else'
*/
return [self retain];
}
else {
return [[[self class] allocWithZone:zone] initWithTitle:self.title description:self.description];
}
}
@end
Beyond that, avoid overusing autorelease calls so your memory issues are local to the callsite. This approach will solve many issues (although memory issues may still exist in your app).
Update in response to questions:
Justin Galzic: so basically, the copy
ensures that objects are local to the
caller and when the instance is shared
out to the thread on which
NSOperations is on, they're two
different instances?
actually, the copy call to an immutable string could perform a retain.
as an example: AClass could now implement @protocol NSCopying by simply retaining the 2 strings. also, if you know AClass is never subclassed, you could just return [self retain] when the objects are allocated in the same NSZone.
if a mutable string is passed to the initializer of AClass, then it will (of course) perform a concrete copy.
if you want objects to share these strings, then this approach is preferred (in most cases) because you (and all clients using AClass) now know the ivars will never change behind your back (what they point to, as well as the strings' contents). of course, you still have the ability to make the mistake of changing what title and description point to in the implementation of AClass - this would break the policy you've established. if you wanted to change what the members of AClass pointed to, you'd have to use locks, @synchronized directives (or something similar) - and then typically set up some observation callbacks so you could guarantee that your class works as expected... all that is unnecessary for most cases because the above immutable interface is perfectly simple for most cases.
to answer your question: the call to copy is not guaranteed to create a new allocation - it just allows several guarantees to propagate to clients, while avoiding all thread safety (and locking/synchronizing).
What if there are some cases where you
do want multiple classes (on the same
thread) to share this object? Would
you then make an implicit copy of this
object and then pass along to the
NSOperation?
now that i've detailed how copying immutable objects can be implemented. it should be obvious that properties of immutable objects (NSString, NSNumber, etc.) should be declared copy in many cases (but many Cocoa programmers don't declare them that way).
if you want to share a NSString which you know is immutable, you should just copy it from AClass.
if you want to share an instance of AClass you have 2 choices:
1) (best) implement @protocol NSCopying in AClass: - (id)copyWithZone: implementation added above.
now the client is free to copy and retain AClass as is most logical for their needs.
2) (BAD) expect that all clients will keep their code up to date with changes to AClass, and to use copy or retain as required. this is not realistic. it is a good way to introduce bugs if your implementation of AClass needs to change because clients will not always update their programs accordingly. some people consider this acceptable when the object is private in a package (e.g., only one class uses and sees its interface).
in short, it's best to keep the retain and copy semantics predictable - and just hide all the implementation details in your class so your clients' code never breaks (or is minimized).
if your object is truly shared and its state is mutable, then use retain and implement callbacks for state changes. otherwise, keep it simple and use immutable interfaces and concrete copying.
if an object has an immutable state, then this example is always a lock free thread safe implementation with many guarantees.
for an implementation of an NSOperation subclass, i find it best (in most cases) to:
- create an object which provides all the context it needs (e.g., an url to load)
- if something needs to know about the operation's result or needs to use the data, then create a @protocol interface for the callbacks and add a member to the operation subclass which is retained by the NSOperation subclass, and which you've prohibited from pointing to another object during the lifetime of the NSOperation instance:
@protocol MONImageRenderCallbackProtocol
@required
/** ok, the operation succeeded */
- (void)imageRenderOperationSucceeded:(AClass *)inImageDescriptor image:(NSImage *)image;
@required
/** bummer. the image request failed. see the @a error */
- (void)imageRenderOperationFailed:(AClass *)inImageDescriptor withError:(NSError *)error;
@end
/* MONOperation: do not subclass, create one instance per render request */
@interface MONOperation : NSOperation
{
@private
AClass * imageDescriptor; /* never change outside initialization/dealloc */
NSObject<MONImageRenderCallbackProtocol>* callback; /* never change outside initialization/dealloc */
BOOL downloadSucceeded;
NSError * error;
}
/* designated initializer */
- (id)initWithImageDescriptor:(AClass *)inImageDescriptor callback:(NSObject<MONImageRenderCallbackProtocol>*)inCallback;
@end
@implementation MONOperation
- (id)initWithImageDescriptor:(AClass *)inImageDescriptor callback:(NSObject<MONImageRenderCallbackProtocol>*)inCallback
{
self = [super init];
assert(self);
if (0 != self) {
assert(inImageDescriptor);
imageDescriptor = [inImageDescriptor copy];
assert(inCallback);
callback = [inCallback retain];
downloadSucceeded = 0;
error = 0;
if (0 == imageDescriptor || 0 == callback) {
[self release];
return 0;
}
}
return self;
}
- (void)dealloc
{
[imageDescriptor release], imageDescriptor = 0;
[callback release], callback = 0;
[error release], error = 0;
[super dealloc];
}
/**
@return an newly rendered NSImage, created based on self.imageDescriptor
will set self.downloadSucceeded and self.error appropriately
*/
- (NSImage *)newImageFromImageDescriptor
{
NSImage * result = 0;
/* ... */
return result;
}
- (void)main
{
NSAutoreleasePool * pool = [NSAutoreleasePool new];
NSImage * image = [self newImageFromImageDescriptor];
if (downloadSucceeded) {
assert(image);
assert(0 == error);
[callback imageRenderOperationSucceeded:imageDescriptor image:image];
[image release], image = 0;
}
else {
assert(0 == image);
assert(error);
[callback imageRenderOperationFailed:imageDescriptor withError:error];
}
[pool release], pool = 0;
}
@end
Q:
What is "Extending move semantics to *this" all about?
Please, could someone explain in plain English what "Extending move semantics to *this" is? I am referring to this proposal. All I am looking for is what it is and why we need it. Note that I do understand what an rvalue reference is in general, upon which move semantics is built. I am just not able to grasp what such an extension adds to rvalue references!
A:
The ref-qualifier feature (indicating the type of *this) would allow you to distinguish whether a member function can be called on rvalues or lvalues (or both), and to overload functions based on that. The first version gives some rationale in the informal part:
Prevent surprises:
struct S {
S* operator &() &; // Selected for lvalues only
S& operator=(S const&) &; // Selected for lvalues only
};
int main() {
S* p = &S(); // Error!
S() = S(); // Error!
}
Enable move semantics:
class X {
std::vector<char> data_;
public:
// ...
std::vector<char> const & data() const & { return data_; }
std::vector<char> && data() && { return data_; } //should probably be std::move(data_)
};
X f();
// ...
X x;
std::vector<char> a = x.data(); // copy
std::vector<char> b = f().data(); // move
A:
For example, you can overload operators as free functions with rvalue references if you wish:
Foo operator+(Foo&& a, const Foo& b)
{
a += b;
return std::move(a);
}
To achieve the same effect with a member function, you need the quoted proposal:
Foo Foo::operator+(const Foo& b) && // note the double ampersand
{
*this += b;
return *this;
}
The double ampersand says "this member function can only be called on rvalues".
Whether or not you must explicitly move from *this in such a member function is discussed here.
Q:
Install Python geopandas failed
I'm trying to install geopandas. Have the following setup:
Windows-64
Anaconda2 (64-bit)
Python 2.7
Have tried two things:
1)
pip install geopandas
This gives me the following error:
WindowsError: [Error 126] The specified module could not be found and Command "python setup.py egg_info" failed with error code 1 in c:\users\username\appdata\local\temp\pip-install-_kgeyw\shapely\
The solutions to a similar problem here suggest that it's because of the slashes in the path being converted. I'm not sure how to test this.
2)
anaconda search -t conda geopandas
I then search for the version of geopandas suitable for my setup (Windows-64):
conda install -c maxalbert geopandas
which produces the following error:
UnsatisfiableError: The following specifications were found to be in conflict:
- geopandas
Use "conda info <package> to see the dependencies for each package
When I run the command conda info geopandas I get a list of geopandas versions. Not sure how to proceed from here.
A:
It is a common problem and the solution is to install all dependencies manually (as Geoff Boeing describes here: https://geoffboeing.com/2014/09/using-geopandas-windows/)
First try to conda install -c conda-forge geopandas. If it doesn't work, do the following steps:
Download wheels for your Python version and OS for GDAL, Fiona, pyproj, rtree and shapely (e.g. from Gohlke)
Uninstall all OSGeo4W, GDAL, Fiona, pyproj, rtree and shapely packages
pip install the downloaded wheels in the following order: GDAL, Fiona, pyproj, rtree and shapely (for example pip install GDAL-1.11.2-cp27-none-win_amd64.whl)
Now you can pip install geopandas
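As a sketch, the install sequence might look like the following shell session. Only the GDAL filename comes from the example above; the other filenames are placeholders for whichever wheels you actually download:

```shell
pip uninstall GDAL Fiona pyproj rtree shapely   # step 2: remove existing packages
pip install GDAL-1.11.2-cp27-none-win_amd64.whl
pip install Fiona-<version>-cp27-none-win_amd64.whl
pip install pyproj-<version>-cp27-none-win_amd64.whl
pip install Rtree-<version>-cp27-none-win_amd64.whl
pip install Shapely-<version>-cp27-none-win_amd64.whl
pip install geopandas
```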
Q:
Place a dialog on top of another div
I need to place a dialog (with position:fixed to keep it fixed while scrolling the page) exactly on top of another div in the page body. I have a centered layout on my page, so I cannot figure out how to set the left property in CSS for the dialog. For the top property I know the value, as this dialog is kept below the nav bar.
A:
Nice question.
It has 2 answers:
1) Use JavaScript to determine the sizes.
2) Without JS, it's a pure CSS trick:
http://css-tricks.com/320-quick-css-trick-how-to-center-an-object-exactly-in-the-center/
Q:
how to display the English DisplayName in .net core view instead of the localized string
I have a .NET Core project where, in one of my views, I have to display the DisplayName in both English and Arabic in the same view.
I only get one language, because the view has a localized resource file.
I created an extension method which takes the model and returns the DisplayName metadata.
Both my method and the original @Html helper return only one language.
Here is the method; I hope someone can modify it so it returns the original English metadata instead of the localized value.
using Microsoft.AspNetCore.Html;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.ViewFeatures.Internal;
using System;
using System.Linq.Expressions;
namespace iSee.IHtmlHelpers
{
public static class HtmlExtensions
{
public static IHtmlContent DisplayNameForEn<TModel, TValue>(
this IHtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TValue>> expression
)
{
var modelExplorer = ExpressionMetadataProvider.FromLambdaExpression(expression, htmlHelper.ViewData, htmlHelper.MetadataProvider);
var metadata = modelExplorer.Metadata;
var DisplayName = metadata.DisplayName;
return new HtmlString(DisplayName);
}
}
}
A:
Thanks to @Nick Polideropoulos: the answer above was helpful and inspired me to find an answer to the question.
Here is the answer, in case someone else searches for it:
using Microsoft.AspNetCore.Html;
using Microsoft.AspNetCore.Mvc.Rendering;
using System;
using System.ComponentModel.DataAnnotations;
using System.Linq.Expressions;
using System.Reflection;
namespace iSee.IHtmlHelpers
{
public static class HtmlExtensions
{
public static IHtmlContent DisplayNameForLatin<TModel, TValue>(
this IHtmlHelper<TModel> htmlHelper, Expression<Func<TModel, TValue>> expression)
{
var memberExpression = expression.Body as MemberExpression;
if (memberExpression == null)
{
return new HtmlString("");
}
var displayAttribute = memberExpression.Member.GetCustomAttribute<DisplayAttribute>();
// fall back to the member name when no [Display] attribute is present
return new HtmlString(displayAttribute?.Name ?? memberExpression.Member.Name);
}
}
}
Q:
Change Response type in asp
I've set the content type to XML by writing (in classic ASP):
Response.ContentType = "text/xml"
Now the XML content is finished and I'd like to add HTML content, so I wrote:
Response.ContentType = "text/html"
But it is still being written as XML. What could be the problem here?
A:
I'm not sure what you mean. There can only be one ContentType per response. The browser will interpret the whole content of the response as xml.
If you want to get two files with different content types, you'll have to call them separately.
Q:
Retrieve average Z value of a 3D geometry in postgis
I need to compute the average Z value of a 3D geometry in PostGIS.
Two functions ST_ZMin and ST_ZMax are available to return the min and the max, but the function ST_ZAverage returning the average value does not exist unfortunately.
Is there a way to compute this average value?
Maybe there is an existing solution based on ST_NPoints, ST_PointN and ST_Z.
A:
Dump the points, get the average?
create table g ( id integer, geom geometry );
insert into g values (1, 'LINESTRING(0 0 0, 1 1 1, 2 2 2)');
insert into g values (2, 'LINESTRING(1 1 1, 2 2 2, 3 3 3)');
with pts as (
select id,
st_z((st_dumppoints(geom)).geom) as z
from g
)
select avg(z), id
from pts
group by id;
Or in subquery form, if you don't like CTEs:
select avg(z), id
from (
select id,
st_z((st_dumppoints(geom)).geom) as z
from g
) as pts
group by id;
Trouble with this is that it's a vertex average, it doesn't account for the parts of the line in between the vertices, so the vertex density at particular locations will skew the average in favour of those locations. A more sophisticated result would require some quite complicated PL/PgSQL code.
Q:
HttpClient authentication -
I am trying to emulate this behavior in Java:
curl -u usrname:password http://somewebsite.com/docs/DOC-2264
I am not sure if the auth is NTLM or Basic. I log in to the website giving a username and password. It's a form post. Using the curl command above I am able to log in and get the content.
To do this in Java I did:
try {
URL url = new URL("http://somewebsite.com/docs/DOC-2264");
String authStr ="username:password";
String encodedAuthStr = Base64.encodeBytes(authStr.getBytes());
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
connection.setDoOutput(true);
connection.setRequestProperty("Authorization", "Basic" + encodedAuthStr);
InputStream content = (InputStream)connection.getInputStream();
BufferedReader in =
new BufferedReader (new InputStreamReader (content));
String line;
while ((line = in.readLine()) != null) {
System.out.println(line);
}
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
But I only get the login page. Not the actual content.
What am i doing wrong?
A:
You can see the exact header curl is sending by adding the -v (verbose) option.
The Authorization header needs the authentication type before the base64 encoded username/password. It should look something like this: Authorization: Basic dXNybmFtZTpwYXNzd29yZA==
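For illustration, here is a minimal sketch of building that header value with the JDK's java.util.Base64 (the class and method names are my own, not from the question):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Build a Basic auth header value; note the space after "Basic"
    static String basicAuth(String user, String password) {
        String encoded = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }

    public static void main(String[] args) {
        // prints: Basic dXNybmFtZTpwYXNzd29yZA==
        System.out.println(basicAuth("usrname", "password"));
    }
}
```

Pass the result to connection.setRequestProperty("Authorization", ...) instead of concatenating "Basic" directly against the encoded string.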
| {
"pile_set_name": "StackExchange"
} |
Q:
How to compute modulo of md5 in R?
The title says it all...
I know that I can use digest::digest to compute the md5 of a string:
digest::digest('string', algo = "md5", serialize = FALSE)
However, I'm at a loss for how I can convert this (presumably hexadecimal) value into an integer (or big int) for modulo purposes...
My attempts to use as.hexmode and strtoi have both failed.
> as.hexmode(digest("1", algo = "md5", serialize = FALSE))
Error in as.hexmode(digest("1", algo = "md5", serialize = FALSE)) :
'x' cannot be coerced to class "hexmode"
> strtoi(digest("1", algo = "md5", serialize = FALSE), base = 16L)
[1] NA
A:
The problem is that the resulting number is too large to be represented as an integer, so strtoi returns NA. Since you only need the low-order digits for the modulo, why not just convert the end of the md5 string? Note that this example does not give the same result as the next (correct) solution with Rmpfr.
x <- digest::digest('string', algo = "md5", serialize = FALSE)
strtoi(substr(x, nchar(x)-4, nchar(x)), base=16)
Another solution is to use the Rmpfr library, which supports conversion of large integers. This gives the correct conversion result (but requires an additional package):
library(Rmpfr)
x <- digest::digest('string', algo = "md5", serialize = FALSE)
x <- mpfr(x, base=16)
x %% 1000
| {
"pile_set_name": "StackExchange"
} |
Q:
where column in (single value) performance
I am writing dynamic SQL code and it would be easier to use a generic where column in (<comma-separated values>) clause, even when the clause might have 1 term (it will never have 0).
So, does this query:
select * from table where column in (value1)
have any different performance than
select * from table where column=value1
?
All my test result in the same execution plans, but if there is some knowledge/documentation that sets it to stone, it would be helpful.
A:
This might not hold true for each and every RDBMS, or for each and every query with its specific circumstances.
The engine will translate WHERE id IN(1,2,3) to WHERE id=1 OR id=2 OR id=3.
So your two ways to articulate the predicate will (probably) lead to exactly the same interpretation.
As always: we should not really bother about the way the engine "thinks". This was done pretty well by the developers :-) We tell the engine - through a statement - what we want to get, not how we want to get it.
Some more details here, especially the first part.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to remove characters in String Swift 3?
Code to get the string before a certain character:
let string = "Hello World"
if let range = string.range(of: "World") {
let firstPart = string[string.startIndex..<range.lowerBound]
print(firstPart) // print Hello
}
To begin with, I have a program that converts a hex float to a binary float, and I want to remove all "0"s from the binary string answer until the first "1". Example:
Any ideas?
A:
You can use Regular Expression:
var str = "001110111001"
str = str.replacingOccurrences(of: "0", with: "", options: [.anchored], range: nil)
The anchored option means search for 0s at the start of the string only.
| {
"pile_set_name": "StackExchange"
} |
Q:
Apple Mach-O Linker Error, after changing project name
I'm making an app in Xcode 6.1.1, and just changed the name of the app, and afterwards I'm getting this "Apple Mach-O Linker Error" build failed error.
It says "ld: file not found:", probably because it can't find the location of some file with the new name. Any suggestions?
ld: file not found: /Users/rb/Library/Developer/Xcode/DerivedData/Which_Club-gkgjdxflldelikaopinkdoskkers/Build/Products/Debug-iphoneos/WhichClubToUse.app/WhichClubToUse
clang: error: linker command failed with exit code 1 (use -v to see invocation)
"Which_club" is the new name of the project, and "WhichClubToUse" is the old name..
A:
I found the solution! Under my [project]Tests -> General, I needed to select the new Host Application.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can an application find out information about currently playing song on iPhone/iPod Touch?
I've searched around a bit in the small amount of iPhone/iPod Touch development information available and couldn't find anything for or against. Can an application find out information about currently playing song on iPhone/iPod Touch? Since the music can continue to play while you are in 3rd party applications, is there a function or library that will give you information about what is playing? (Track, Artist, Album, etc.) I know generally that applications are sand-boxed but thought maybe there was a way.
A:
Apple is pretty tight about allowing access to anything having to do with media (other than that provided by you) on the device. I've not found a way to do this.
A:
If you aren't planning to put your app on the App Store you can import MobileMusicPlayer.h as seen in this example application: song-info
edit: Interacting with the iPod application is now part of the iPhone OS 3.0 SDK
| {
"pile_set_name": "StackExchange"
} |
Q:
The C program crashes while executing function
Hello, I have a problem: when I call the function arrayBigToSmall the program crashes (after I enter the numbers). I want to understand why this happens and how I can fix this problem.
Code -
#include <stdio.h>
#include <stdlib.h>
int main()
{
float array[2][3][2];
getNums(array);
return(0);
}
void getNums(float array[2][3][2])
{
int i,j,p;
printf("Enter numbers: \n");
for(i = 0; i < 2 ; i++)
{
for(j = 0; j < 3; j++)
{
for(p = 0; p < 2; p++)
{
scanf("%f",&array[i][j][p]);
}
}
}
arrayBigToSmall(array);
}
void arrayBigToSmall(float array[2][3][2])
{
int i,j,p,k;
float array1[12];
float temp;
for( i=0; i<3; i++)
{
for( j=0; j < 2; j++)
{
for(p = 0; p < 3; p++)
{
array1[k] = array[i][j][p];
k++;
}
}
}
}
A:
for( i=0; i<3; i++)
{
for( j=0; j < 2; j++)
{
for(p = 0; p < 3; p++)
{
array1[k] = array[i][j][p];
k++;
}
}
}
}
k must be initialized to 0. The loop bounds must also match the array's declared dimensions [2][3][2]: i less than 2, j less than 3, and p less than 2.
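A corrected version of the flattening loop, as a self-contained sketch (the function name flatten is my own), with k initialized to 0 and the loop bounds matching the declared dimensions:

```c
/* Corrected flattening: k starts at 0 and each loop bound matches the
   corresponding dimension of float array[2][3][2]. */
void flatten(float array[2][3][2], float out[12])
{
    int k = 0;                          /* was uninitialized in the original */
    for (int i = 0; i < 2; i++)         /* first dimension: 2 (not 3) */
        for (int j = 0; j < 3; j++)     /* second dimension: 3 (not 2) */
            for (int p = 0; p < 2; p++) /* third dimension: 2 (not 3) */
                out[k++] = array[i][j][p];
}
```

With the original bounds, array[i][j][p] was read out of range and array1[k] was written at an indeterminate index, which is what made the program crash.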
| {
"pile_set_name": "StackExchange"
} |
Q:
retrieving whois data from jsonwhoisapi.com using api, c#
I'm trying to connect to jsonwhoisapi.com to drag down some whois data, but am having no luck. Has anyone done this or something similar and can help spot my folly? I've never done an API connection using HTTP headers.
I've basically copied this from a post online where it was apparently working, but the following dies at GetResponseStream.
public static void WebRequest()
{
string WEBSERVICE_URL = "https://jsonwhoisapi.com/api/v1/whois?identifier=google.com";
try
{
var webRequest = System.Net.WebRequest.Create(WEBSERVICE_URL);
if (webRequest != null)
{
webRequest.Method = "GET";
webRequest.Timeout = 20000;
webRequest.ContentType = "application/json";
webRequest.Headers.Add("userid:apikey");
using (System.IO.Stream s = webRequest.GetResponse().GetResponseStream())
{
using (System.IO.StreamReader sr = new System.IO.StreamReader(s))
{
var jsonResponse = sr.ReadToEnd();
Console.WriteLine(String.Format("Response: {0}", jsonResponse));
}
}
}
}
catch (Exception ex)
{
Console.WriteLine(ex.ToString());
}
}
A:
My guess is you'll have to replace 'apikey' with your ApiKey string, and userId with your CustomerId string.
Direct quote:
With every request you must send the [customer ID:API key] pair to authenticate - both pieces of information can be found when logged in to you account.
Chances are you might have to encode the auth request using the instructions here: https://en.wikipedia.org/wiki/Basic_access_authentication#Protocol
Client side:
When the user agent wants to send the server authentication
credentials it may use the Authorization field.
The Authorization field is constructed as follows:
The username and password are combined with a single colon. (:)
The resulting string is encoded into an octet sequence.
The resulting string is encoded using a variant of Base64.
The authorization method and a space is then prepended to the encoded string, separated with a space (e.g. "Basic ").
For example, if the browser uses Aladdin as the username and
OpenSesame as the password, then the field's value is the
base64-encoding of Aladdin:OpenSesame, or QWxhZGRpbjpPcGVuU2VzYW1l.
Then the Authorization header will appear as:
Authorization: Basic QWxhZGRpbjpPcGVuU2VzYW1l
e.g. Something like:
webRequest.Headers.Add(string.Format("Authorization: Basic {0}", Convert.ToBase64String(Encoding.ASCII.GetBytes(string.Format("{0}:{1}", userId, apiKey)))));
| {
"pile_set_name": "StackExchange"
} |
Q:
Python/XML: Pretty-printing ElementTree
I construct XML using The ElementTree XML API and I would like to be able to pretty-print
individual nodes (for inspection) as well as
the whole document (to a file, for future examination).
I can use ET.write() to write my XML to a file and then pretty-print it using many of the suggestions in Pretty printing XML in Python. However, this requires me to serialize and then deserialize the XML (to disk or to StringIO) just to serialize it again prettily, which is clearly suboptimal.
So, is there a way to pretty-print an xml.etree.ElementTree?
A:
As the docs say, in the write method:
file is a file name, or a file object opened for writing.
This includes a StringIO object. So:
outfile = cStringIO.StringIO()
tree.write(outfile)
Then you can just pretty-print outfile using your favorite method—just outfile.seek(0) then pass outfile itself to a function that takes a file, or pass outfile.getvalue() to a function that takes a string.
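For completeness, here is a runnable sketch of the StringIO route under Python 3, where io.StringIO and the encoding="unicode" argument replace cStringIO (the minidom pretty-printing step is one of the approaches from the linked question):

```python
import io
import xml.dom.minidom
import xml.etree.ElementTree as ET

# Build a small tree in memory
root = ET.Element("root")
ET.SubElement(root, "child", name="a")
tree = ET.ElementTree(root)

# Serialize to an in-memory buffer instead of a real file
outfile = io.StringIO()
tree.write(outfile, encoding="unicode")

# Pretty-print the serialized string with minidom
pretty = xml.dom.minidom.parseString(outfile.getvalue()).toprettyxml(indent="  ")
print(pretty)
```
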
However, notice that many of the ways to pretty-print XML in the question you linked don't even need this. For example:
lxml.etree.tostring (answer #2): lxml.etree is a near-perfect superset of the stdlib etree, so if you're going to use it for pretty-printing, just use it to build the XML in the first place.
Effbot indent/prettyprint (answer #3): This expects an ElementTree tree, which is exactly what you already have, not a string or file.
| {
"pile_set_name": "StackExchange"
} |
Q:
*ngFor not updating until click
I have created a table with an *ngFor that displays data from a web request. Here is the template's code:
...
<tr *ngFor="let task of tasks">
<td>{{task.ID}}</td>
<td><a (click)="openTaskArea(task.TaskType0.ID)" class="viewLink">{{task.TaskType0.Title}}</a></td>
<td><a (click)="editTask(task.ID)" class="viewLink"></a></td>
<td><a (click)="openTask(task.ID)" class="viewLink">{{task.Title}}</a></td>
<td>
<ng-container *ngIf="task.AssignedTo">{{task.AssignedTo.Title}}</ng-container>
</td>
</tr>
...
This web request is fired on load and when a modal that allows a new item to be created is closed.
Here is the function that gets this data:
getTasks() {
this.http.get(`https://example.com/_api/web/lists/getbytitle('Tasks')/items`).subscribe(data => {
// console.log(data['value'])
this.tasks = data['value']
console.log(this.tasks)
})
}
This function is fired when the modal is closed with the following function:
(function() {
this.getTasks();
}).bind(this);
Binding this was required as its context was getting lost. This was a recommended solution for this question.
The request is completing correctly as I can log this.tasks however, the DOM is not being updated until I click a link from the table.
How can I solve this?
A:
You can try manually calling detectChanges to make the rendering happen,
import { ChangeDetectorRef } from '@angular/core';
constructor(private chRef: ChangeDetectorRef){
}
Then use:
this.chRef.detectChanges(); // whenever you need to force update the view
| {
"pile_set_name": "StackExchange"
} |
Q:
Measuring elapsed time in QML
Let's consider the following example: we have a Qt Quick Controls Button. The user clicks it twice within 5 seconds. After pushing the Button for the first time, the QML Timer is running for these 5 seconds. We want to measure the time elapsed between two clicks, with a millisecond accuracy.
Unfortunately, the QML Timer can't show us the elapsed time.
As suggested on the BlackBerry forums, it would be possible to compare the dates. This isn't very handy, though, since the first click might occur on 31 Dec 2015, 23:59:55 and the second on 1 Jan 2016, 00:00:05 and the check would have to be complex.
Is there any better option?
A:
As explained in the comments, the QML Timer is not suitable for your specific needs, since it is synchronized with the animation timer (further details here) and thus its resolution depends on the animation timer as well.
@qCring's solution is certainly satisfactory, and I would prefer such an approach if higher precision or better performance is needed (see also this answer and the interesting link at the bottom about improving precision).
However, given your requirements, a pure QML/JS approach is perfectly feasible. In this case you can exploit the JavaScript Date type, both because it makes it easy to calculate elapsed time using getTime(), and because QML fully supports JS Date and extends it with some useful functions.
Here is a simple example:
import QtQuick 2.4
import QtQuick.Window 2.2
import QtQuick.Layouts 1.1
import QtQuick.Controls 1.3
ApplicationWindow {
width: 300
height: 300
visible: true
property double startTime: 0
ColumnLayout {
anchors.fill: parent
Text {
id: time
font.pixelSize: 30
text: "--"
Layout.alignment: Qt.AlignCenter
}
Button {
text: "Click me!"
Layout.alignment: Qt.AlignCenter
onClicked: {
if(startTime == 0){
time.text = "click again..."
startTime = new Date().getTime()
} else {
time.text = new Date().getTime() - startTime + " ms"
startTime = 0
}
}
}
}
}
A:
Unfortunately, the QML Timer doesn't provide a property to check the elapsed time. But you could write your custom Timer in C++ and expose it to QML:
MyTimer.h
#include <QObject>
#include <QElapsedTimer>
class MyTimer : public QObject
{
Q_OBJECT
Q_PROPERTY(int elapsed MEMBER m_elapsed NOTIFY elapsedChanged)
Q_PROPERTY(bool running MEMBER m_running NOTIFY runningChanged)
private:
QElapsedTimer m_timer;
int m_elapsed;
bool m_running;
public slots:
void start() {
this->m_elapsed = 0;
this->m_running = true;
m_timer.start();
emit runningChanged();
}
void stop() {
this->m_elapsed = m_timer.elapsed();
this->m_running = false;
emit elapsedChanged();
emit runningChanged();
}
signals:
void runningChanged();
void elapsedChanged();
};
After registering via qmlRegisterType<MyTimer>("MyStuff", 1, 0, "MyTimer") it's available in QML:
Window.qml
import QtQuick 2.4
import QtQuick.Controls 1.3
import MyStuff 1.0
ApplicationWindow {
width: 800
height: 600
visible: true
Button {
id: button
anchors.centerIn: parent
text: timer.running ? "stop" : "start"
checkable: true
onClicked: {
if (timer.running) {
timer.stop()
label.text = timer.elapsed + "ms"
} else {
timer.start()
}
}
MyTimer {
id: timer
}
}
Text {
id: label
anchors.left: button.right
anchors.verticalCenter: button.verticalCenter
text: "0ms"
visible: !timer.running
}
}
Hope this helps!
A:
You don't mention in your question whether the measured time is only for debugging purposes or whether it will be needed for other calculations. If it's just for debugging, QML offers a very simple way to measure the time spent doing various operations, using console.time("id string") and console.timeEnd("id string").
An example using a Button would look like this:
Button {
text: "click here"
property bool measuring: false
onClicked: {
if(!measuring){
console.time("button")
measuring=true
} else {
console.timeEnd("button")
measuring=false
}
}
}
This will print the time in ms to the console, and can be very useful to measure the time needed to execute some long operations in QML.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do .resolve() and .reject() relate to .then() and .catch()?
I'm learning JavaScript and have just run into asynchrony, which I'm only beginning to understand. The thing is, I still don't quite understand what .resolve() is for and how it relates to .then(), and likewise .reject() and .catch(). I understand that resolve is for when the promise resolves successfully and reject for when it is rejected. What I think is that resolve and reject are like when we pass one function inside another, that they are just names which internally have the .then() and .catch() methods respectively. But seeing that resolve and reject have parentheses, they must be functions, which makes me think I'm wrong. Does anyone have any idea?
MY CODE IS ATTACHED.
function getPersonaje(id) {
return new Promise((resolve, reject) => {
const personaje = `${API_URL}${PEOPLE_URL.replace(':id', id)}`;
// $.get performs a request.
$.get(personaje, opts, function(data) {
resolve(data)
})
.fail(() => reject(id));
});
}
function onError(id) {
console.log(`Sucedio un error al obtener el personaje ${id}`)
}
getPersonaje(1)
.then(function(personaje) {
console.log(`Hola soy ${personaje.name}`)
})
.catch(onError);
A:
resolve and reject are parameters of the callback with which a Promise is constructed, and just as you said, one relates to then and the other to catch: resolve is used to return (or not return) a value when no errors have occurred.
catch, meanwhile, is used to catch or throw the error that was obtained.
It's worth clarifying that resolve and reject work like returns; that is,
they are practically the equivalent of the return statement, but
for a promise. The difference is that by invoking the resolve and reject
callbacks provided by the promise itself, we are telling
it which path to take: either give us back an error or
give us back a value. Therefore the purpose of .then and .catch
is to handle the asynchronous code, in such a way that we control both the
errors and the execution and completion of the promise. This way, the
promise will know what to do and when it will finish, and it will also know what value to return once it has finished executing.
If we call neither resolve nor reject, the promise will never be resolved
or rejected, leaving us with an infinite promise, a promise that
will stay in its Promise<pending> state for the rest of
eternity; therefore, with an infinite promise, .then and .catch would never be entered, and the promise would never return a value or an error.
So .then and .catch can be chained, to always account for both possibilities (entering then if the promise resolved successfully, or going to catch if the promise was rejected).
As for whether .then and .catch are functions, you're mistaken: they are methods, since they belong to an instance of the Promise class.
Now let's take this code fragment as an example:
getPersonaje(1)
.then(function(personaje) {
console.log(`Hola soy ${personaje.name}`)
})
.catch(onError);
Right there we can see how, depending on a parameter, the promise is resolved or rejected; if it resolves, the result returned by the promise is obtained through a callback in .then, and in this case the result is called personaje and it is printed.
On the other hand, in case of an error you're not doing anything, since the .catch should also take a callback, so in the .catch I would put something like this:
.catch(e => console.log(e));
Where e => console.log(e) is nothing more than a condensed arrow function.
So you've understood correctly how .then and .catch relate to each other; you just had a small conceptual error, nothing more.
Perhaps you meant to be more specific about some deeper detail you don't understand? I think you've understood it well.
Well, one extra detail I want to mention so you keep it in mind: for some time now there has been something called async functions, which are nothing more than a wrapper around a promise. One peculiarity of async functions is that whether or not you put a return in them, they will always return a promise.
So let's look at an example of an async function and its possible uses with await:
async function makeTransaction(){
return await new Promise((res, rej) =>{
setTimeout(()=>{
res(true);
},2000);
});
}
makeTransaction().then((transactionExited)=>{
console.log(transactionExited);
});
As we can see, we can use something new, ONLY IN ASYNC FUNCTIONS: the reserved word await. Moreover, it canNOT be used in the global context.
The purpose of the reserved word await is to force the async function to wait for either the error or the obtained value; what it does is pause the function's execution and then, once the promise we are trying to get a value from has been resolved or rejected, resume it and give us the value where we want it.
This way we can organize our code better. Of course, ideally you shouldn't use async functions for code that is synchronous, but if we are handling asynchronous code and multiple asynchronous calls, it is highly recommended to use async functions, since in that case await makes our life easier.
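As a final runnable sketch (my own example, plain Node.js), here is how calling resolve or reject decides whether .then or .catch runs:

```javascript
// resolve() routes the promise to .then, reject() routes it to .catch
function coin(ok) {
  return new Promise((resolve, reject) => {
    if (ok) resolve("value");          // fulfilled -> .then handler
    else reject(new Error("boom"));    // rejected  -> .catch handler
  });
}

coin(true).then(v => console.log(v));            // logs "value"
coin(false).catch(e => console.log(e.message));  // logs "boom"
```
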
| {
"pile_set_name": "StackExchange"
} |
Q:
Getting contents of a particular file in the tar archive
This script lists the name of the file (in a tar archive) containing a pattern.
tar tf myarchive.tar | while read -r FILE
do
if tar xf test.tar $FILE -O | grep "pattern" ;then
echo "found pattern in : $FILE"
fi
done
My question is:
Where is this feature documented, where $FILE is one of the files in the archive:
tar xf test.tar $FILE
A:
This is usually documented in the man pages; try running this command:
man tar
Unfortunately, Linux does not have the best set of man pages. There is an online copy of the tar manpage from this OS: http://linux.die.net/man/1/tar and it is terrible. But it links to the info command, which accesses the "info" documentation system widely used in the GNU world (many programs in Linux user-space come from GNU projects, for example gcc). Here is a direct link to the section of the online tar info manual about extracting specific files: http://www.gnu.org/software/tar/manual/html_node/extracting-files.html#SEC27
I can also recommend documentation from BSD (e.g. FreeBSD) or opengroup.org. Utilities can differ in detail but behave the same in general.
For example, there is a rather old but good man page from opengroup (XCU means 'Commands and Utilities' of the Single UNIX Specification, Version 2, 1997):
http://pubs.opengroup.org/onlinepubs/7908799/xcu/tar.html
tar key [file...]
The following operands are supported:
key --
The key operand consists of a function letter followed immediately by zero or more modifying letters. The function letter is one of the following:
x --
Extract the named file or files from the archive. If a named file matches a directory whose contents had been written onto the archive, this directory is (recursively) extracted. If a named file in the archive does not exist on the system, the file is created with the same mode as the one in the archive, except that the set-user-ID and set-group-ID modes are not set unless the user has appropriate privileges. If the files exist, their modes are not changed except as described above. The owner, group, and modification time are restored (if possible). If no file operand is given, the entire content of the archive is extracted. Note that if several files with the same name are in the archive, the last one overwrites all earlier ones.
And to fully understand command tar xf test.tar $FILE you should also read about f option:
f --
Use the first file operand (or the second, if b has already been specified) as the name of the archive instead of the system-dependent default.
So test.tar in your command will be used by the f key as the archive name; then x will use the second argument ($FILE) as the name of the file or directory to extract from the archive.
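Putting the two keys together, here is a small demonstration (file names are made up for the example): f names the archive, x extracts only the member named after it, and -O sends the member's contents to stdout instead of to disk, just like the script in the question:

```shell
# Create a throwaway archive containing one file
echo "hello" > member.txt
tar cf demo.tar member.txt

# Extract just that member; -O writes its contents to stdout
tar -x -O -f demo.tar member.txt
# prints: hello
```
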
| {
"pile_set_name": "StackExchange"
} |
Q:
How to mark user's role which is represented by dynamically created checkbox on DataGrid selection changed
I'm working on a WPF application, and I have users, and of course users have some kind of roles, in my case SUPERADMIN and ADMIN. Those roles are stored in a table "Roles". One user could have 1 or more roles, so that means one or more checkboxes can be selected on my form. I generated the checkboxes dynamically:
I'm adding checkboxes to a stack panel whose orientation is Vertical, so it looks like this after applying the method below:
private void LoadRolesToStackPanel()
{
try
{
var roles = RolesController.Instance.SelectAll();
if (roles.Count > 0)
{
foreach (Role r in roles)
{
CheckBox cb = new CheckBox();
//cb.Name = r.RoleId.ToString();
cb.Content = r.Title.ToString();
cb.FontSize = 15;
stackRole.Children.Add(cb);
}
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
Now I am wondering how I could show/mark/check the appropriate checkboxes for each user when I'm selecting another user (users are contained in the DataGrid dtgUsers, so I'm firing the dtgUsers_SelectionChanged event when I'm changing the selection from user to user; as I do that, I also need to show the appropriate checkboxes as a representation of roles, and they must of course be checked to show that the selected user has that role).
Right now I did it this way, and I think it is a very bad approach, so I am asking for a new or better solution.
private void dtgUsers_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
if (dtgUsers.SelectedItem != null)
{
stackRole.Children.Clear();
User user = (User)(dtgUsers.SelectedItem);
if (user != null)
{
//Get all roles from database for selected user
user.Roles = RolesController.SelectByUserId(user.Id);
if (user.Roles.Count > 0)
{
// This is a bad approach: I took the Title of each user's role to compare it with all existing roles
var roleNames = user.Roles.Select(r => r.Title);
var allRoles = RolesController.SelectAll();
if (allRoles.Count > 0)
{
foreach (Role r in allRoles)
{
CheckBox cb = new CheckBox();
cb.Content = r.Title.ToString();
cb.FontSize = 15;
cb.Tag = r;
stackRole.Children.Add(cb);
if (roleNames.Contains(cb.Content)) // Here, as I am creating the checkbox, I'm basically immediately checking/marking it if it exists in the user's roles
cb.IsChecked = true;
}
}
}
}
}
Any suggestion on how I should fix this or make it better is very welcome!
Thanks guys
Cheers
A:
As I told you in a previous answer you should check out the MVVM design pattern if you want to do this using best practices:
How to store ID from database object, to a checkbox in code behind WPF
Then you could simply bind the ItemsControl where the roles are displayed to the currently selected user in the DataGrid:
<ItemsControl ItemsSource="{Binding SelectedItem.Roles, ElementName=dtgUsers}">
<ItemsControl.ItemTemplate>
<DataTemplate>
<CheckBox IsChecked="{Binding IsChecked}" Content="{Binding Title}" FontSize="15" />
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
The roles list will then be updated automatically when you select an item in the DataGrid.
| {
"pile_set_name": "StackExchange"
} |
Q:
broadleaf commerce demo site login link gets redirected
I have configured the Broadleaf demo site to run with the Tomcat server directly from Eclipse (not by using the ant task). I tried to visit the login, register and cart pages but the links got redirected. Since in the demo site for Tomcat the login page must be at /mycompany/login, when I click on the login link I get redirected to /mycompany/mycompany/login.
I tried to directly hit the URL http://localhost:8181/mycompany/login but it also gets redirected/changed to http://localhost:8181/mycompany/mycompany/login. This same issue repeats on clicking the register and cart links as well.
In LoginController i can see request mapping as "@RequestMapping("/mycompany/login")".
In the logs I can see "No mapping found for HTTP request with URI [/mycompany/mycompany/login] in DispatcherServlet with name 'mycompany'", which is true, as we do not have a mapping for this URL.
Any pointers to where I can look for the error?
A:
I'm going to assume that you are actually seeing it redirect to https://localhost:8081 (as opposed to 8181 and http).
Those URLs specifically are marked as requiring SSL in applicationContext-security.xml:
<sec:intercept-url pattern="/register*" requires-channel="https" />
<sec:intercept-url pattern="/login*/**" requires-channel="https" />
<sec:intercept-url pattern="/account/**" access="ROLE_USER" requires-channel="https" />
<sec:intercept-url pattern="/checkout/**" requires-channel="https" />
<sec:intercept-url pattern="/null-checkout/**" requires-channel="https" />
<sec:intercept-url pattern="/null-giftcard/**" requires-channel="https" />
<sec:intercept-url pattern="/confirmation/**" requires-channel="https" />
If you comment those out (or remove them) then you will no longer see redirects. The alternative is to just configure SSL on your Tomcat instance.
Furthermore, the @RequestMapping that you mentioned on LoginController should just be @RequestMapping("/login"). There is no need to include the /mycompany part of that (which I assume is the Tomcat context) since Spring URLs ignore it.
| {
"pile_set_name": "StackExchange"
} |
Q:
Matrix Multiplication not associative when matrices are vectors?
Wikipedia states:
Given three matrices A, B and C, the products (AB)C and A(BC) are
defined if and only the number of columns of A equals the number of
rows of B and the number of columns of B equals the number of rows of
C (in particular, if one of the product is defined, the other is also
defined)
Row and column vectors can be thought of as just special cases of matrices. So given the above I would expect:
$$(a^Tb)c = a^T(bc)$$
However the right side is undefined because you can’t multiply two column vectors, seemingly contradicting Wikipedia. Am I mistaken? If not, can we only consider matrix multiplication to be associative in contexts where we know no intermediate matrix becomes 1x1?
A:
The issue is that, technically, $(a^T b)c$ doesn't exist either. You see, we often pretend $a^T b$ is the scalar $k:=a\cdot b$, but it's really a $1\times 1$ matrix whose only entry is $k$. It's one thing to left-multiply $c$ by $k$; it's another to left-multiply $c$ by the $1\times 1$ matrix itself, which you can't do. If each of these vectors has $n$ entries with $n\ne 1$, $(a\cdot b)c=kI_n c\ne kI_1 c$ ($I_1 c$ is of course undefined), where $I_m$ is the $m\times m$ identity matrix.
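To make this concrete (my own example), take $a=b=c=\begin{pmatrix}1\\0\end{pmatrix}$ in $\mathbb{R}^2$ and check the shapes on both sides:

```latex
a^{T}b = \begin{pmatrix}1\end{pmatrix}
\quad\text{is } 1\times 1, \text{ so }
(a^{T}b)\,c \text{ is a } (1\times 1)(2\times 1) \text{ product: undefined;}
\qquad
bc \text{ is a } (2\times 1)(2\times 1) \text{ product: undefined.}
```

Neither side exists, so associativity is not contradicted; it only looks broken when the $1\times 1$ matrix $a^{T}b$ is silently identified with the scalar $a\cdot b$.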
| {
"pile_set_name": "StackExchange"
} |
Q:
Wordpress Search Query - Meta Queries & Custom Fields
Question for all of ya. I have the current query running on my search page template, and it seems to work fine when my search query is included in the title of the post, but when I include a meta query to try to also look in another spot for the search term, it doesn't gather any new results, only the same results it had before.
Second question: for some reason it is still only displaying 6 posts (the number of posts set in WP Admin) and not respecting the query.
<?php // WP_User_Query arguments
$search_term = get_search_query();
$args = array (
'post_type' => 'courses',
'order' => 'ASC',
'orderby' => 'title',
'posts_per_page' => -1,
'nopaging' => true,
's' => '*'.$search_term.'*',
'meta_query' => array(
array(
'key' => 'course_id',
'value' => $search_term,
'compare' => 'LIKE'
)
)
);
$wp_course_query = new WP_Query($args);
// Get the results
$courses = $wp_course_query; ?>
<?php // Check for results
if (!empty($courses->get_posts())) { ?>
<ul class="course-list">
<?php if(have_posts()) : while(have_posts()) : the_post(); ?>
<li> <?php the_title(); ?> </li>
<?php endwhile; endif; wp_reset_query(); ?>
</ul>
<?php } else { ?>
<p>No courses match that query</p>
<?php } ?>
Things I've tried:
Hard coding the value, nothing there.
Removing * from 's'
A:
It seems that this is just impossible in WordPress, so I had to do this another way.
$search_term = get_search_query();
$args = array (
'post_type' => 'courses',
'order' => 'ASC',
'orderby' => 'title',
'posts_per_page' => -1,
'nopaging' => true,
's' => $search_term
);
$args2 = array (
'post_type' => 'courses',
'posts_per_page' => -1,
'nopaging' => true,
'meta_query' => array(
array(
'key' => 'course_id',
'value' => $search_term,
'compare' => 'LIKE'
)
)
);
$courses1 = get_posts($args);
$courses2 = get_posts($args2);
$merged = array_merge($courses1, $courses2);
$post_ids = array();
foreach ($merged as $item) {
$post_ids[] = $item->ID;
}
$unique = array_unique($post_ids);
$posts = get_posts(array(
'post_type' => 'courses',
'order' => 'ASC',
'orderby' => 'title',
'post__in' => $unique,
'posts_per_page' => -1
)); ?>
| {
"pile_set_name": "StackExchange"
} |
Q:
Flutter low framerate performace
I see a linear degradation of the UI framerate when I launch the speed_dial animation plugin. The problem appears when I add a sharedpref function here:
@override
Widget build(BuildContext context) {
sharedpref_function();
return Scaffold(
to listen for a saved value; even if the sharedpref is empty I get this degradation.
After 10 minutes without doing anything, I measure 1120ms/frame when I call _renderSpeedDial.
Here is the full code :
bool _dialVisible = true;
Color _speedDial = Colors.pink;
sharedpref_function() async {
SharedPreferences prefs = await SharedPreferences.getInstance();
setState(() {
}
);
}
_renderSpeedDial() {
return SpeedDial(
animatedIcon: AnimatedIcons.add_event,
animatedIconTheme: IconThemeData(size: 22.0),
backgroundColor: _speedDial,
// child: Icon(Icons.add),
/* onOpen: () => print('OPENING DIAL'),
onClose: () => print('DIAL CLOSED'),*/
visible: _dialVisible,
curve: Curves.bounceIn,
children: [
SpeedDialChild(
child: Icon(Icons.fullscreen_exit, color: Colors.white),
backgroundColor: Color(0xffa088df),
onTap: () {
setState(() {
});
},
label: '1',
labelStyle: TextStyle(fontWeight: FontWeight.w500,color: Colors.white),
labelBackgroundColor:Color(0xffa088df),
),
],
);
}
@override
Widget build(BuildContext context) {
sharedpref_function(); // here the sharedpref I use to listen saved value
return Scaffold(
body: Stack(
children: <Widget>[
Padding
(
padding: const EdgeInsets.only(right:10.0, bottom:10.0),
child:
_renderSpeedDial(),
),
],
)
);
}
}
A:
Your sharedpref_function() method is being called inside your build method. That's not recommended at all, because build runs on every frame in which the UI needs to be rebuilt, and since you have an animation there, your code will be called at up to 60fps (on every frame).
Move your method inside initState or didChangeDependencies (there are other lifecycle methods that are called only once or a few times as well).
When you need to update values, you could do it inside an onTap gesture and that's it.
Also, test your app in --release (release mode) to truly test the speed of your app.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unix script or parser to delete stop words in a file
I am looking for a parser or script to remove stop words from a file.
This is the sample file:
entities_0_confidence|entities_0_name|entities_0_entity|entities_1_confidence|relation_relation|
-1.1956528741743269|ellen brown|Ellen_Brown|-3.9166730593775214|WOULD ATTORNEY FROM|||||||||||||||||||||
-2.3889038197374015|rick santorum|Rick_Santorum||CRITICIZED|||||||||||||||||||||
-1.5485422793287602|thomas jefferson|Thomas_Jefferson|-1.7299349891097682||IS LETTER TO|||||||||||||||||||||
-1.229126527004769|lewis powell|Lewis_Powell_%28conspirator%29|-3.024385187632112|IS JUSTICE OF|||||||||||||||||||||
-2.2268355006701155|michael bloomberg|Michael_Bloomberg|-2.1242762129476493|WON MAYOR OF À|||||||||||||||||||||
This is stop the word list:
IS, OF ,WITH ,WON,WOULD,X,©,® FOR BEST ACTRESS PRESENTING,À,È,ÉS,ŞI,АND,И
I just want to remove the words from each line and not the entire line. My current script is removing these words from other words as well.
For example:
my line in file - "TOLD to stop using this line"
Stop word - "To"
Output - "LD sp using this line"
My file/dataset contains 70k entries.
A:
The code removes stop words from the beginning, end, or middle of the column whose number is passed in the fields variable.
fields="col_num=1" # pass the column you want to remove stop words from
while read word;
do
awk -F'|' -v OFS='|' 'BEGIN{'$fields'} {gsub("^'$word'[ ]|[ ]'$word'$|^'$word'$","",$col_num); gsub("[ ]'$word'[ ]"," ",$col_num); gsub(/^ /,"",$col_num); gsub(/ $/,"",$col_num); print}' file > file".temp";
mv file".temp" file;
done < stop_words.txt
Hope that helps!!
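If a scripting language is an option, the same whole-word behaviour the question asks for falls out of word-boundary regular expressions. A minimal Python sketch — the stop-word subset and sample line are illustrative, and it cleans whole lines rather than one pipe-delimited column (splitting on '|' first would restrict it to a single field):

```python
import re

# Illustrative subset of the stop-word list from the question.
stop_words = ["IS", "OF", "WITH", "WON", "WOULD", "TO"]

# \b word boundaries ensure only standalone words are removed, so the
# stop word "TO" no longer eats the middle of "TOLD".
pattern = re.compile(r'\b(?:%s)\b' % '|'.join(map(re.escape, stop_words)),
                     re.IGNORECASE)

def strip_stop_words(line):
    cleaned = pattern.sub('', line)
    return re.sub(r'  +', ' ', cleaned).strip()  # collapse leftover spaces

example = strip_stop_words("TOLD to stop using this line")  # keeps "TOLD"
```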
| {
"pile_set_name": "StackExchange"
} |
Q:
frequencies in sound: multiple possibilities?
First, I am by no mean a sound engineer (as you will guess later).
I was just wondering something while looking at the waveform of a .wav
for a given waveform shape over a duration of 2 sec, for example, how can we make sure that the frequencies the FFT gives are the only correct ones?
what if very little parts of a sinusoid could be considered instead of a full continuous sinusoidal movement that just varies in amplitude?
or, what if a sinusoid had a lot of very fast varying amplitudes ?
that would lead to an infinity of solutions I guess..
A:
The Discrete Fourier Transform, or DFT (the FFT is an algorithm that computes the DFT), of a sequence of finite duration (which any practical transform would need to be) is identical to the transform of an infinitely long sequence formed by repeating the original sequence in time.
Knowing this provides the following insights related to your question:
First, anything that repeats (exact repetition) in time can only exist at discrete frequencies in the frequency domain. We see this with the Fourier Series Expansion specifically as shown in the graphic below.
The concept behind the Fourier Series Expansion is that any single-valued continuous function can be represented as a sum of sinusoidal components, and notably each component MUST have a frequency that is an integer multiple of 1/T, where T is the duration of the signal in time. Therefore IF those sinusoidal components were allowed to play out for all time (rather than being bound to the time interval [0,T]), the next cycle immediately after T would have to commence and proceed exactly as the waveform did at the start of the sequence (as each sinusoidal component would do the same). Thus it is often described that the Fourier Series Expansion decomposes any periodic function into a sum of sines and cosines (or equivalently, and I believe mathematically simpler, complex exponentials).
The mathematical model of the time limited signal from [0,N-1] (for N samples of the DFT, which is equivalent to the analog bound of [0,T] in time), to also be a signal extending to infinity repeating with time provides for further intuition into the behavior and result of the DFT. Specifically we see that repeating a signal in time results in discrete uniform spaced impulses in the frequency domain, that can only exist at integer multiples of 1/T (including 0) where T is the length of the base waveform in time.
The second characteristic of the DFT is that it is done on a waveform that is sampled in time. Without going into significant detail, sampling in time is associated with repetition in frequency (for those familiar with A/D and D/A conversion this will be readily apparent). So with the DFT we have both characteristics of repetition in time and sampling in time, which therefore means we will have "sampling" in frequency (discrete frequencies where only non-zero values can exist) and repetition in frequency.
The repetition is an implied construct that actually helps significantly to provide an intuitive understanding of many signal processing constructs- especially when considering both analog and digital domains. To be clear, the DFT involves only a fixed duration sequence both in the time and frequency domains, but mathematically these sequences can be repeated for infinite duration with the same result.
For example, to your question of what happens where there is a partial cycle? If we realize this equally represents a waveform repeating for all time, we see that in this case there would be an abrupt transition in the waveform.
Such a waveform cannot be created or represented with a single sinusoidal tone. Going back to the Fourier Series Expansion, we know that it can be represented by multiple tones as long as they are spaced at integer multiples of the repetition rate. So where we thought one tone exists, similarly in the DFT there will be multiple tones as required to create such an abrupt transition. According to the DFT these frequencies really exist as what we are solving for in that process is the frequency components that are needed to create the time limited time domain waveform.
Below shows the case above (lower plot) compared to what we would get if there was a complete integer number of cycles over the time duration used (upper plot). This is one explanation of "Spectral Leakage" with the DFT but given as example insight into the relationship between the time duration of the DFT chosen and the frequencies that would result.
If the waveform was changing with time (either in frequency, or amplitude beyond the sinusoidal component itself, such as an envelope) this would require many frequency components to represent it. This is no different than a modulation view of the time domain waveform: to transmit signals over the air we modulate the amplitude or frequency of a carrier frequency (that is better suited to go over the air) which results in several frequencies being present around that carrier. If the waveform is not actually repeating with time (which is likely the case), then instead of discrete tones we will get a continuous band of frequencies around the carrier.
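The leakage behaviour described above is easy to reproduce numerically. A small stdlib-only Python sketch (the naive O(N²) DFT and the 8 / 8.5-cycle test signals are illustrative choices): a sine with a whole number of cycles lands in exactly two bins, while a partial final cycle spreads energy across many bins:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT -- fine for a small illustration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def significant_bins(x, tol=1e-9):
    """Count bins whose magnitude is non-negligible relative to the peak."""
    mags = [abs(X) for X in dft(x)]
    peak = max(mags)
    return sum(1 for m in mags if m > tol * peak)

N = 64
whole = [math.sin(2 * math.pi * 8 * n / N) for n in range(N)]      # exactly 8 cycles
partial = [math.sin(2 * math.pi * 8.5 * n / N) for n in range(N)]  # abrupt wrap at n=N

bins_whole = significant_bins(whole)     # energy confined to bins 8 and N-8
bins_partial = significant_bins(partial) # energy leaks across many bins
```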
| {
"pile_set_name": "StackExchange"
} |
Q:
Probability of sums with 6 dice
You roll six independent fair dice.
What is the probability that their sum is divisible by 6?
I don't really know where to start. Does the ordering of the dice matter? (1,2,2,2,2,3) vs (3,2,2,2,2,1). I feel like it does. I'm not strong with counting arguments so I don't know how to find the number of ways that you can arrange that.
A:
Hint: Whatever be the sum of the first five, the probability that adding the sixth number will give a number divisible by $6$ is $\frac{1}{6}$.
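The hint can also be checked by brute force. A short Python sketch enumerating all $6^6$ ordered outcomes (ordering is tracked, so every outcome is equally likely and no multinomial counting is needed):

```python
from itertools import product

# All 6^6 ordered outcomes of six fair dice, each equally likely.
outcomes = list(product(range(1, 7), repeat=6))
favorable = sum(1 for roll in outcomes if sum(roll) % 6 == 0)
probability = favorable / len(outcomes)

# The hint predicts favorable == 6**5: whatever the first five dice show,
# exactly one face of the sixth die completes a multiple of 6.
```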
| {
"pile_set_name": "StackExchange"
} |
Q:
how to initialize a class field?
I have seen developers initialize their class fields in different ways. Three of them are very popular!
What's the difference and is any of them more proper?
class Test {
ArrayList<String> myArr = new ArrayList<String>(); // First Method
}
class Test {
ArrayList<String> myArr;
public Test() {
myArr = new ArrayList<String>(); // Second Method
}
}
class Test {
ArrayList<String> myArr;
public Test() {
}
public void Init() {
myArr = new ArrayList<String>(); // Third Method
}
}
A:
1 and 2 are more-or-less equivalent and also allow you to use the final modifier on the field.
(The difference is in timing: in case 1, the field initializer runs during instance initialization, just before the constructor body executes; in case 2, the assignment happens inside the constructor body itself.)
3 is a lazy initialization pattern.
| {
"pile_set_name": "StackExchange"
} |
Q:
focus isnt working for text field
Source ->
<input name="ctl00$MainContent$MapUserControl$6551_Edit_artintheparkssculpturelocations_32_id" type="text" maxlength="50" id="ctl00_MainContent_MapUserControl_6551_Edit_artintheparkssculpturelocations_32_id" onchange="MaxLength(50, this, 'id')" class="control-label form-control" id2="6551_Edit_artintheparkssculpturelocations_32_id">
I can update the text in the field by
x = document.getElementById('ctl00_MainContent_MapUserControl_6551_Edit_artintheparkssculpturelocations_32_id')
x.value ="bla";
However I can't update the focus...
x.focus();
returns undefined as expected, but nothing happens to the text field.
I have also tried autofocus, and grabbing the element with jQuery instead of plain JS...
Still no joy... any idea why?
A:
I have faced a similar issue. Calling focus() through setTimeout is what you need.
setTimeout(function() { x.focus()}, 1);
| {
"pile_set_name": "StackExchange"
} |
Q:
PHP: Namespace resolution in object method chains
We use method chaining in several of our core systems. We're trying to namespace some of those systems away from our modules. However I'm having trouble getting any kind of namespace resolution with chaining to work.
So while this works (as usual):
$GLOBALS['model']->User()->User_Friends()->getAll();
this, on the other hand:
$GLOBALS['model']->Core\User()->User_Friends()->getAll();
throws the error:
Parse error: syntax error, unexpected T_NS_SEPARATOR
Is there any way around this?
I'm almost already assuming this is a no-go. But asking to make sure I'm not missing something.
Depending on your point-of-view (definitely mine), it is a bug.
A:
The resolution of the namespace can occur within the method User, not as a property of the method itself.
In code:
class model {
private $user = false;
public function User () {
if ($this->user == false)
$this->user = new Core\User(); // <--- namespace use happens here
return $this->user;
}
}
Thus, the return of the method User is the User class from the namespace Core, of which the method User_Friends() is a part.
EDIT
I suggest you take another look at the docs as well as the "Basics" article.
EDIT 2
Using __NAMESPACE__ to determine which namespace to operate in, from within an overloaded method:
class model {
private $objects = array();
public function __call($name, $arguments=false) {
$ns = __NAMESPACE__;
if (strlen($ns) < 1)
$ns = 'none';
if (!isset($this->objects[$ns]))
$this->objects[$ns] = array();
if (!isset($this->objects[$ns][$name])) {
$class_desc = (strlen($ns) > 0 ? __NAMESPACE__ . '\\' : ''). $name;
$this->objects[$ns][$name] = new $class_desc($arguments);
}
return $this->objects[$ns][$name];
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
how to get count of records from 2 tables based on value in a particular column from 1st table using MySQL
These are the tables I am fetching count from
register
+----+-------------+--------+
| id | empSignupId | cityId |
+----+-------------+--------+
| 42 | 4 | 1 |
| 47 | 3 | 1 |
| 48 | 11 | 1 |
| 54 | 20 | 1 |
| 55 | 21 | 2 |
| 56 | 22 | 2 |
+----+-------------+--------+
guest_list
+-----+------------+-------------+
| id | guestName | empSignupId |
+-----+------------+-------------+
| 103 | Mallica SS | 3 |
| 104 | Kavya | 3 |
| 108 | Vinay BR | 11 |
| 109 | Akash MS | 11 |
+-----+------------+-------------+
cities
+----+---------------+
| id | cityName |
+----+---------------+
| 1 | Bengaluru |
| 2 | Chennai |
| 3 | Sydney |
| 4 | New York City |
| 5 | Shanghai |
| 6 | Chicago |
+----+---------------+
I need to fetch the count of people registered from a particular city, including their guests; if there are no guests, it should still show the count of registered people.
This is what I tried
SELECT COUNT(gl.id) + COUNT(rfs.id), ct.cityName, rfs.cityId
FROM register rfs
INNER JOIN cities ct ON ct.id=rfs.cityId
INNER JOIN guest_list gl ON gl.empSignupId = rfs.empSignupId
GROUP BY rfs.cityId;
+-------------------------------+-----------+--------+
| COUNT(gl.id) + COUNT(rfs.id) | cityName | cityId |
+-------------------------------+-----------+--------+
| 8 | Bengaluru | 1 |
+-------------------------------+-----------+--------+
I also need the count of people from other cities to be displayed, since there are no guests from some cities its not returning that count.
Please help me figure this out, I am still new to MySQL.. any help is greatly appreciated.
A:
You fist need to aggregate guest_list table in order to get the number of records per empSignupId:
SELECT empSignupId, COUNT(empSignupId) AS countGuest
FROM guest_list gl
GROUP BY empSignupId
Output:
empSignupId countGuest
----------------------
3 2
11 2
Now you have to use a LEFT JOIN to the derived table above in order to also get the number of records for each city:
SELECT COALESCE(SUM(countGuest), 0) + COUNT(rfs.id), ct.cityName, rfs.cityId
FROM register rfs
INNER JOIN cities ct ON ct.id=rfs.cityId
LEFT JOIN (
SELECT empSignupId, COUNT(empSignupId) AS countGuest
FROM guest_list gl
GROUP BY empSignupId
) gl ON gl.empSignupId = rfs.empSignupId
GROUP BY rfs.cityId;
Output:
COALESCE(SUM(countGuest), 0) + COUNT(rfs.id) cityName cityId
------------------------------------------------------------------
8 Bengaluru 1
2 Chennai 2
Using LEFT JOIN instead of INNER JOIN guarantees that we also get cities without guests.
Demo here
Note: If you also want to get cities without registrations then you need to place the cities table first and use a LEFT JOIN to register.
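To see the query in action without a MySQL server, here is a hedged reproduction on SQLite (Python's stdlib sqlite3) using the question's sample rows; apart from trivial dialect details the SQL is the same:

```python
import sqlite3

# Build the question's sample tables in an in-memory SQLite database.
con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE register (id INTEGER, empSignupId INTEGER, cityId INTEGER);
INSERT INTO register VALUES (42,4,1),(47,3,1),(48,11,1),(54,20,1),(55,21,2),(56,22,2);
CREATE TABLE guest_list (id INTEGER, guestName TEXT, empSignupId INTEGER);
INSERT INTO guest_list VALUES (103,'Mallica SS',3),(104,'Kavya',3),(108,'Vinay BR',11),(109,'Akash MS',11);
CREATE TABLE cities (id INTEGER, cityName TEXT);
INSERT INTO cities VALUES (1,'Bengaluru'),(2,'Chennai'),(3,'Sydney');
""")

# LEFT JOIN to pre-aggregated guest counts so cities without guests survive.
rows = con.execute("""
SELECT COALESCE(SUM(countGuest), 0) + COUNT(rfs.id) AS total, ct.cityName
FROM register rfs
JOIN cities ct ON ct.id = rfs.cityId
LEFT JOIN (
    SELECT empSignupId, COUNT(*) AS countGuest
    FROM guest_list
    GROUP BY empSignupId
) gl ON gl.empSignupId = rfs.empSignupId
GROUP BY rfs.cityId
ORDER BY rfs.cityId
""").fetchall()
```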
| {
"pile_set_name": "StackExchange"
} |
Q:
Applications of $f(x_0)=f'(x_0)$
If a function $f(x)$ has a derivative $f'(x)$, then a point with $f'(x_0) = 0$ is a candidate for an extremum at $x=x_0$.
And $f''(x_0)=0$ is a necessary condition for an inflection point at $x=x_0$.
I am asking are there any significance or applications where $f(x_0)=f'(x_0)$ at some point $x=x_0$?
I am referring to a case like $f(x)=x^2$ and $f'(x)=2x$, they intersect at $(0,0)$ and $(2,4)$ what is special or can be derived from these points?
A:
If I'm interpreting this correctly, I believe you are asking if there is anything special about this happening locally, i.e at a certain point $x_0$, does the fact that $f(x_0)=f'(x_0)$ mean anything? I would say no; note that by just translating the function appropriately, you can make this happen at any point. To see this, for any differentiable $f$, and any point $x_0$ define $g_{x_0}(x)=f(x)+(f'(x_0)-f(x_0))$, and this function will have that property at the point $x_0$.
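A quick numeric check of this construction, using the question's own example $f(x)=x^2$ at $x_0=3$ (the central-difference derivative below is just an illustrative stand-in for symbolic differentiation):

```python
# f(x) = x^2 at x0 = 3, as in the question.
def f(x):
    return x * x

def deriv(fn, x, h=1e-6):
    """Central-difference derivative; exact (up to rounding) for quadratics."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

x0 = 3.0
shift = deriv(f, x0) - f(x0)   # f'(x0) - f(x0) = 6 - 9 = -3

def g(x):
    return f(x) + shift        # the translated function from the answer

# Adding a constant does not change the derivative, so:
# g(x0) = f(x0) + f'(x0) - f(x0) = f'(x0) = g'(x0).
lhs, rhs = g(x0), deriv(g, x0)
```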
| {
"pile_set_name": "StackExchange"
} |
Q:
What is this audio datatype and how do I convert it to wav/l16?
I am recording audio in a web browser and sending it to a flask backend. From there, I want to transcribe the audio using Watson Speech to Text. I cannot figure out what data format I'm receiving the audio and how to convert it to a format that works for watson.
I believe watson expects a bytestring like b'\x0c\xff\x0c\xffd. The data I receive from the browser looks like [ -4 -27 -34 -9 1 -8 -1 2 10 -28], which I can't directly convert to bytes because of the negative values (using bytes() gives me that error).
I'm really at a loss for what kind of conversion I need to be making here. Watson doesn't return any errors for any kind of data I throw at it; it just doesn't respond.
A:
Those values should be fine, but you have to define how you want them stored before getting the bytes representation of them.
You'd simply want to convert those values to signed 2-byte/16-bit integers, then get the bytes representation of those.
| {
"pile_set_name": "StackExchange"
} |
Q:
MVC 4 Web API register filter
I am using MVC 4 Web API to create a service layer for an application. I am trying to create a global filter that will act on all incoming requests to the API. Now I understand that this has to be configured differently than standard MVC global action filters. But I'm having issues getting any of the examples I'm finding online to work.
The problem I am running into is in registering the filter with Web API.
I have my Global.asax set up like this...
public class MvcApplication : System.Web.HttpApplication
{
protected void Application_Start()
{
AreaRegistration.RegisterAllAreas();
MVCConfig.RegisterRoutes(RouteTable.Routes);
MVCConfig.RegisterGlobalFilters(GlobalFilters.Filters);
WebApiConfig.RegisterRoutes(GlobalConfiguration.Configuration);
WebApiConfig.RegisterGlobalFilters(GlobalConfiguration.Configuration.Filters);
}
}
My standard Mvc routing and filters work correctly. As does my WebApi routing. Here is what I have for my webApi filter registration...
public static void RegisterGlobalFilters(System.Web.Http.Filters.HttpFilterCollection filters)
{
filters.Add(new PerformanceTestFilter());
}
And here is the PerformanceTestFilter...
public class PerformanceTestFilter : ActionFilterAttribute
{
private readonly Stopwatch _stopWatch = new Stopwatch();
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
_stopWatch.Reset();
_stopWatch.Start();
}
public override void OnActionExecuted(ActionExecutedContext filterContext)
{
_stopWatch.Stop();
var executionTime = _stopWatch.ElapsedMilliseconds;
// Do something with the executionTime
}
}
This filter works fine when it is registered with the standard Mvc GlobalFilterCollection, but when I try to register it with System.Web.Http.Filters.HttpFilterCollection I get an error saying that it is not assignable to parameter type System.Web.Http.Filters.IFilter.
So I'm assuming that my PerformanceTestFilter needs to inherit from something other than ActionFilterAttribute in order to be registered as a webapi filter. I'm just not sure what that needs to be.
I imagine I will need to create two individual filters to work with mvc and webapi respectively. If there is a way to create a filter that could be registered to both, that would be great. But my primary concern is simply to get it working for webapi.
Thanks
A:
The following should work. We actually use this for our web API project.
GlobalConfiguration.Configuration.Filters is of type HttpFilterCollection
var filters = System.Web.Http.GlobalConfiguration.Configuration.Filters;
filters.Clear();
filters.Add(new ValidationActionFilterAttribute());
public class ValidationActionFilterAttribute : FilterAttribute, IActionFilter, IFilter
{
...
}
Also, if you're working in a project that contains both MVC and Web API assemblies, check which namespace your ActionFilterAttribute comes from. It's fairly confusing because there are two ActionFilterAttributes, one in each of:
System.Web.Http.Filters
System.Web.Mvc
Source: Why is my ASP.NET Web API ActionFilterAttribute OnActionExecuting not firing?
It appears that you will need to have two filters, one for API and one for MVC. You can factor the common code into a separate class, and then just use the specific filter to call through to your common class, thus not violating DRY and essentially using the actual filters as wrappers which can be registered as filters.
| {
"pile_set_name": "StackExchange"
} |
Q:
In bison is there a way to return the Name of a token instead of its type
I am working with Flex and Bison. in my parse.y (bison) I define tokens. When the token is return it returns an int I was wondering if there is a way to take that int and map it back to the actual name in the bison source.
For example in my parser.y
//define my tokens that are shared with my lexer (flex)
%token <tokenData> ID
%token <tokenData> NUMCONST
in my grammar I then use
number : NUMCONST {std::cout<<"Line "<<$1->linenum<<" Token: [I want NUMCONST]"<<<std::endl;}
I know I can display the int that is returned from the lexer, but is there a way to return the token's type, such as "NUMCONST" or "ID"? I want the token "type" instead of the token "int".
A:
Yes you can, but you need to enable the feature in your bison file.
If you put the directive %token-table into your bison file, then bison will generate a table of token names called yytname. (You can also enable this feature with the -k or --token-table command-line flags.)
yytname[i] is the name of the token whose "internal bison token code number" is i. That's not the same as the number returned by yylex, because bison recodes the tokens using an (undocumented) table called yytranslate.
The token names in the yytname table are the token aliases if you use that feature. For example, if your grammar included:
%token EQEQ "=="
%%
exp: exp "==" exp
| exp '+' exp
the names for the tokens corresponding to the two operators show in the exp rule are "==" and '+'.
yytname also includes the names of non-terminals, in case you need those for any purpose.
Rather than using yytranslate[t], you might want to use YYTRANSLATE(t), which is what the bison-generated scanner itself does. That macro translates out-of-range integers to 2, which has the corresponding name $undefined. That name will also show up for any single-character tokens which are not used anywhere in the bison grammar.
Both yytname and yytranslate are declared static const in the bison-generated scanner, so you can use them only in code which is present in that file. If you want to expose a function which does the translation, you can put the function in the grammar epilogue, after the second %%. (You might need such a function if you wanted to find the name corresponding to a token number in the scanner, for example.) It might look something like this:
const char *token_name(int t) {
    return yytname[YYTRANSLATE(t)];
}
Normally, there is no need to do this. If you merely want to track what the parser is doing, you're much better off enabling bison's trace facility.
A:
bison generates an enum called yytokentype that contains an enumerated list of all the tokens in the grammar. It does not provide an equivalent mapping to strings containing all the token names.
So, you'll have to implement this mapping yourself. That is, implementing a utility function that takes a yytokentype parameter, and returns the name of the given token, which you can subsequently use in your diagnostic messages. Another, boring switch farm.
Having said that, it shouldn't be too difficult to write a utility Perl script, or an equivalent, that reads <filename>.tab.h that came out of bison, parses out the yytokentype enumeration, and robo-generates the mapping function. Stick that into your Makefile, with a suitable dependency rule, and you got yourself an automatic robo-generator of a token-to-name mapping function.
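A sketch of that robo-generator in Python rather than Perl. The header fragment and the shape of the emitted C function are assumptions for illustration; real bison output varies by version, so the parsing may need adjusting:

```python
import re

# Hypothetical fragment of a bison-generated .tab.h (illustrative only).
header = """
enum yytokentype {
    ID = 258,
    NUMCONST = 259
};
"""

def parse_token_enum(text):
    """Extract {name: value} pairs from a yytokentype enum body."""
    m = re.search(r'enum\s+yytokentype\s*\{(.*?)\}', text, re.S)
    if not m:
        return {}
    return {name: int(value)
            for name, value in re.findall(r'(\w+)\s*=\s*(\d+)', m.group(1))}

def emit_switch(tokens):
    """Robo-generate the C mapping function (a plain switch farm)."""
    lines = ["const char *token_name(int t) {", "    switch (t) {"]
    for name, value in sorted(tokens.items(), key=lambda kv: kv[1]):
        lines.append('    case %d: return "%s";' % (value, name))
    lines += ['    default: return "<unknown>";', "    }", "}"]
    return "\n".join(lines)

tokens = parse_token_enum(header)
generated = emit_switch(tokens)
```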
| {
"pile_set_name": "StackExchange"
} |
Q:
I like badges, but only when earned
I just found that I received a "Documentation User" badge. Huh? Here's the extent of my interaction with the (now frozen) Documentation section:
I checked out the various Swift Language topic. Maybe - maybe - drilled into a few.
I participated in a SO interview that (I believe) dealt with how the Documentation section was used.
That's it.
Okay, one more thing - I've read a few (probably not all) MSO posts about the "sunsetting" of this part of SO.
For this I get a Silver badge? I'm thinking this is a mistake. Could someone enlighten me? Maybe check the scripts dealing with this? Thanks!
A:
So good news! We're going to take your Peer Review badge away soon:
The truth is we really appreciate all the contributions we got on Documentation and want to acknowledge Documentation users who earned any badges at all. Most people will end up with fewer badges and nobody will get more than they started with. For more information, please see the "What happens to badges?" section of this announcement.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I break from notifier.watch
I'm using ruby 2.5.1 and rb-inotify I have the following
@notifier = INotify::Notifier.new
@notifier.watch("#{ENV['HOME']}/Downloads", :moved_to, :create) do |event|
puts "#{event.name} is now in Downloads dir."
break if event.name == "filename"
end
puts '1'
@notifier.run
puts '2'
I want to stop the notifier once it finds a certain filename and continue with the script. Is this possible?
When I break from the notifier.watch block I get the following error:
LocalJumpError: break from proc-closure
A:
What you need is @notifier.stop
| {
"pile_set_name": "StackExchange"
} |
Q:
About shares in mining pool
I know that profits are assigned by the shares. But I want to know some details about computing the share.
For example, I get a job with difficulty 25000 to solve:
Do I need to finish 25000 times of hash computing in a limited time to get a valid share?
or
Is there a possibility that I just need to finish parts of the 25000 to get a valid share?
In extreme cases, do I only need to calculate one hash to get a valid share?
This problem has bothered me for a long time. Can anyone help me solve this problem? Thanks!
A:
First, not all shares are of equal value. A share with a difficulty of 25000 is worth more that a share with a difficulty of 250.
Second, it's worth understanding that difficulty (in PoW terms) is harder the lower the number, but pools often express it the other way round; they refer to the same thing, just flipped. With PoW you are trying to find a number smaller than another number. For example, with 0 being the hardest and 10 being the easiest, if you have a difficulty of 3, you are trying to find a number smaller than 3 (so 0, 1 or 2). You can flip that around to say "greater than 7", and this is what you'll see with pools when they refer to share difficulty. As that flipped number increases, the work is harder (and worth more), because you're ultimately trying to find a number smaller than max minus difficulty.
The hash function is just giving you a random number between 0 and 2^256-1. In the example above with 0 being hardest and 10 being easiest, that would be equivalent to a hash function returning a random number between 0 and 9.
Do I need to finish 25000 times of hash computing in a limited time to get a valid share?
No, you do not need to hash 25000 times to solve a job with a difficulty (in pool terms) of 25000. You need to find a valid hash* within a certain amount of time. The pool will send you work based on how quickly you solved previous work so you end up submitting work at an approximate rate, typically ~30 seconds per job.
*Note that 25000 is only the top 32 bits of the actual difficulty number. See this answer which explains further.
Is there a possibility that I just need to finish parts of the 25000 to get a valid share? In extreme cases, do I only need to calculate one hash to get a valid share?
You will on average find a valid hash (number) within the timeframe the pool targeted your job difficulty for. Sometimes it will take you longer, others less; extremes included.
| {
"pile_set_name": "StackExchange"
} |
Q:
Usage of "aliased" in SQLAlchemy ORM
From the SQLAlchemy ORM Tutorial:
You can control the names using the label() construct for scalar attributes and aliased for class constructs:
>>> from sqlalchemy.orm import aliased
>>> user_alias = aliased(User, name='user_alias')
>>> for row in session.query(user_alias, user_alias.name.label('name_label')).all():
... print row.user_alias, row.name_label
This seems to be a lot more typing and a lot less readable than the plain class-instrumented descriptors:
>>> for row in session.query(User, User.name).all():
... print row.User, row.name
But it must exist for a reason. How should it be used? What are some good use cases?
A:
aliased() or alias() are used whenever you need to use the SELECT ... FROM my_table my_table_alias ... construct in SQL, mostly when using the same table more than once in a query (self-joins, with or without extra tables). You also need to alias subqueries in certain cases.
There's an example in the documentation: http://www.sqlalchemy.org/docs/orm/query.html?highlight=aliased#sqlalchemy.orm.util.AliasedClass
A:
As @jd. said, this is mostly for when you use the same table more than once in a query.
Example:
dict_code_type, dict_code_status = aliased(DictCode), aliased(DictCode)
query = Device.query \
.join(dict_code_type, dict_code_type.codeValue == Device.deviceType) \
.join(dict_code_status, dict_code_status.codeValue == Device.status) \
.with_entities(Device.id, Device.deviceName, Device.status,
Device.deviceID, Device.deviceUsername, Device.token,
dict_code_type.codeLabel.label('deviceTypeLabel'),
dict_code_status.codeLabel.label('statusLabel'), Device.createAt, Device.authType) \
.filter(and_(dict_code_type.code == 'deviceType', dict_code_status.code == 'status'))
| {
"pile_set_name": "StackExchange"
} |
Q:
Add(not concatenate) a number to QByteArray
I have the following QByteArray:
QByteArray ba;
ba.resize(3);
ba[0]=ba[2]=0x8a;
ba[1]=0x0d; //so ba=8a0d8a
I want to add a hexadecimal number to the above QByteArray. For example, on adding 0x01, ba should contain 8a0d8b. Any carry should be propagated forward as in normal hex addition. I have tried using the + operator:
ba=ba+1;
but it concatenates (resulting in 8a0d8a01 in the above case) instead of performing the actual addition. How can this be done?
A:
I think this is the easiest solution:
quint32 num = (quint32(quint8(ba.at(0))) << 16)
            + (quint32(quint8(ba.at(1))) << 8)
            +  quint32(quint8(ba.at(2)));
num++; // add one
ba[2] = char(num & 0xff);
ba[1] = char((num >> 8) & 0xff);
ba[0] = char((num >> 16) & 0xff);
Basically you convert to an arithmetic integer and back. Since your expected result (8a0d8a + 1 = 8a0d8b) treats the last byte as the least significant, the bytes are combined big-endian. The casts through quint8 matter because char is signed on most platforms, so a byte like 0x8a would otherwise sign-extend. Just make sure you don't overflow. This is a simple example. You can make it a class with state that returns a QByteArray from some method, or a function that does one addition at a time.
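If the array can be longer than four bytes (so it no longer fits in a single integer), the same idea can be written as a carry loop. Here is a minimal hypothetical sketch using std::vector<uint8_t> in place of QByteArray, assuming (as in the question) that the last byte is the least significant:

```cpp
#include <cstdint>
#include <vector>

// Increment a big-endian byte sequence in place: start at the last
// (least significant) byte and propagate the carry toward the front.
// A carry out of the first byte wraps the whole sequence to zero.
void incrementBigEndian(std::vector<std::uint8_t>& bytes) {
    for (auto it = bytes.rbegin(); it != bytes.rend(); ++it) {
        ++*it;              // may wrap 0xff -> 0x00
        if (*it != 0)       // no wrap means the carry stops here
            break;
    }
}
```

The same loop works directly on a QByteArray by iterating indices from ba.size() - 1 down to 0.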
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I change the size and specific height values of Unity generated terrain?
I'm having problems with terrain scripting while trying to build an infinite world with Unity's terrain generation.
How do I change the terrain height at specific points? And how do I set the amount of terrain generated?
Edit: What I'm having problems with is doing that from a script; from the editor it's plain easy.
A:
Nevermind. After an hour of trying I have found the method.
Well, it's actually a variable, and it's a Vector3 called "size", who could have guessed it?
I still can't believe I've missed it for that long.
For heights, just as obvious: the SetHeights method.
| {
"pile_set_name": "StackExchange"
} |
Q:
Kill child process created with fork
The following will output :
Start of script
PID=29688
Start of script
PID=0
Running child process 1
Done with child process
PID=29689
Start of script
PID=0
Running child process 1
Done with child process
It's working as intended; however, I would like to kill the previous child process.
How can I kill the child process without killing the main one?
Thank you !
my $bla = 1;
while (1) {
print "Start of script\n";
run_sleep();
}
sub run_sleep {
sleep(3);
my $pid = fork;
return if $pid; # in the parent process
print("PID=" . $pid . "\n");
print "Running child process " . $bla++ . "\n";
exit(0); # end child process
}
A:
When you fork a child and then fail to wait() on it, it will become a defunct process (a zombie in Unix parlance) when it exits. It will linger in the process table until its parent calls wait() or waitpid() on it; if the parent itself exits first, the child is adopted by init (its parent process ID becomes 1), which reaps it.
So the traditional pseudocode for forking looks something like this:
if ($pid = fork()) {
# pid is non-zero, this is the parent
    waitpid($pid, 0)   # tell OS that we care about the child
do other parental stuff
}
else {
# pid is 0 so this is the child process
do_childish_things()
}
Your code doesn't do that, so you're probably getting zombies, and then getting frustrated that you can't get rid of them.
| {
"pile_set_name": "StackExchange"
} |
Q:
Shortest Path Algorithm with Fuel Constraint and Variable Refueling
Suppose you have an undirected weighted graph. You want to find the shortest path from the source to the target node while starting with some initial "fuel". The weight of each edge is equal to the amount of "fuel" that you lose going across the edge. At each node, you can have a predetermined amount of fuel added to your fuel count - this value can be 0. A node can be visited more than once, but the fuel will only be added the first time you arrive at the node. All nodes can have different amounts of fuel to provide.
This problem could be related to a train travelling from town A to town B. Even though the two are directly connected by a simple track, there is a shortage of coal, so the train does not have enough fuel to make the trip. Instead, it must make the much shorter trip from town A to town C which is known to have enough fuel to cover the trip back to A and then onward to B. So, the shortest path would be the distance from A to C plus the distance from C to A plus the distance from A to B. We assume that fuel cost and distance is equivalent.
I have seen an example where the nodes always fill the "tank" up to its maximum capacity, but I haven't seen an algorithm that handles different amounts of refueling at different nodes. What is an efficient algorithm to tackle this problem?
A:
Unfortunately this problem is NP-hard. Given an instance of traveling salesman path from s to t with decision threshold d (Is there an st-path visiting all vertices of length at most d?), make an instance of this problem as follows. Add a new destination vertex connected to t by a very long edge. Give starting fuel d. Set the length of the new edge and the fuel at each vertex other than the destination so that (1) the total fuel at all vertices is equal to the length of the new edge (2) it is not possible to use the new edge without collecting all of the fuel. It is possible to reach the destination if and only if there is a short traveling salesman path.
Accordingly, algorithms for this problem will resemble those for TSP. Preprocess by constructing a complete graph on the source, target, and vertices with nonzero fuel. The length of each edge is equal to the distance.
If there are sufficiently few special vertices, then exponential-time (O(2^n poly(n))) dynamic programming is possible. For each pair consisting of a subset of vertices (in order of nondecreasing size) and a vertex in that subset, determine the cheapest way to visit all of the subset and end at the specified vertex. Do this efficiently by using the precomputed results for the subset minus the vertex and each possible last waypoint. There's an optimization that prunes the subsolutions that are worse than a known solution, which may help if it's not necessary to use very many waypoints.
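As a hypothetical sketch of that subset dynamic program (the names and exact interface are mine, and the fuel-feasibility bookkeeping described above is omitted so the cost recurrence stays visible), a Held-Karp-style table over a distance matrix d with vertex 0 as the source might look like:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// best[S][v] = cheapest cost of a walk that starts at vertex 0, visits
// exactly the vertex set S (encoded as a bitmask containing 0 and v),
// and ends at v. Runs in O(2^n * n^2) time, as in the paragraph above.
std::vector<std::vector<long>> subsetPaths(const std::vector<std::vector<long>>& d) {
    const int n = static_cast<int>(d.size());
    const long INF = std::numeric_limits<long>::max() / 4;
    std::vector<std::vector<long>> best(1 << n, std::vector<long>(n, INF));
    best[1][0] = 0;                            // visited {0}, standing at 0
    for (int S = 1; S < (1 << n); ++S) {
        if (!(S & 1)) continue;                // every walk contains the source
        for (int v = 0; v < n; ++v) {
            if (!(S & (1 << v)) || best[S][v] == INF) continue;
            for (int w = 0; w < n; ++w) {      // extend by one new waypoint
                if (S & (1 << w)) continue;
                best[S | (1 << w)][w] =
                    std::min(best[S | (1 << w)][w], best[S][v] + d[v][w]);
            }
        }
    }
    return best;
}
```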
Otherwise, the play may be integer programming. Here's one formulation, quite probably improvable. Let x(i, e) be a variable that is 1 if directed edge e is taken as the ith step (counting from the zeroth) else 0. Let f(v) be the fuel available at vertex v. Let y(i) be a variable that is the fuel in hand after i steps. Assume that the total number of steps is T.
minimize sum_i sum_{edges e} cost(e) x(i, e)
subject to
for each i, for each vertex v,
sum_{edges e with head v} x(i, e) - sum_{edges e with tail v} x(i + 1, e) =
-1 if i = 0 and v is the source
1 if i + 1 = T and v is the target
0 otherwise
for each vertex v, sum_i sum_{edges e with head v} x(i, e) <= 1
for each vertex v, sum_i sum_{edges e with tail v} x(i, e) <= 1
y(0) <= initial fuel
for each i,
y(i) >= sum_{edges e} cost(e) x(i, e)
for each i, for each vertex v,
y(i + 1) <= y(i) + sum_{edges e} (-cost(e) + f(head of e)) x(i, e)
for each i, y(i) >= 0
for each edge e, x(e) in {0, 1}
| {
"pile_set_name": "StackExchange"
} |
Q:
Delphi - Saving Records to File using Streams
Delphi Tokyo - I have a parameter file that I need to save (and later load) from disk. The parameters are a series of record objects. There is one HEADER record and then multiple COMMAND records. These are true records (i.e. type = record). The HEADER record has String, Boolean, Integer, and TStringList types within it. I save, which appears to work fine, but when I load, whatever is AFTER a TStringList causes a Stream read error. For example...
type tEDP_PROJ = record
Version : Integer;
Name: String;
...
ColList1: TStringList;
ColList2: TStringList;
ReadyToRun : Boolean;
...
end;
When I read ReadyToRun I get a Stream read error. If I move it BEFORE TStringList (on both SAVE and LOAD routines) then ReadyToRun will load properly, but whatever is after the TStringList will cause an error. It is interesting to note that ColList2 loads fine (even though it is NOT the first TStringList).
I am specifying the Encoding method when I save the TStringList.
...
ColList1.SaveToStream(SavingStream, TEncoding.Unicode);
ColList2.SaveToStream(SavingStream, TEncoding.Unicode);
I am using the same encoding when I load from the (file) Stream.
...
ColList1.LoadFromStream(SavingStream, TEncoding.Unicode);
ColList2.LoadFromStream(SavingStream, TEncoding.Unicode);
Note that when I create the StringList, I am just doing the standard create...
ColList1 := TStringList.Create;
When I save and load, I am following the examples Remy gave here...
The TStringList appears to be changing the way that the stream reads non-TStringList types... What do I need to do to fix this?
A:
Why are you using TEncoding.Unicode? TEncoding.UTF8 would have made more sense.
In any case, this is not an encoding issue. What you are attempting to do will simply not work the way you are trying to do it, because TStrings data is variable-length and needs to be handled accordingly. However, TStrings does not save any kind of terminating delimiter or size information to an output stream. When loading in a stream, TStrings.LoadFromStream() simply reads the ENTIRE stream (well, everything between the current Position and the End-Of-Stream, anyway). That is why you are getting streaming errors when trying to read/write any non-TStrings data after any TStrings data.
Just like the earlier code needed to serialize String data and other variable-length data into a flat format to know where one field ends and the next begins, you need to serialize TStrings data as well.
One option is to save a TStrings object to an intermediate TMemoryStream first, then write that stream's Size to your output stream followed by the TMemoryStream's data. When loading back later, first read the Size, then read the specified number of bytes into an intermediate TMemoryStream, and then load that stream into your receiving TStrings object:
procedure WriteInt64ToStream(Stream: TStream; Value: Int64);
begin
Stream.WriteBuffer(Value, Sizeof(Value));
end;
function ReadInt64FromStream(Stream: TStream): Int64;
begin
Stream.ReadBuffer(Result, Sizeof(Result));
end;
procedure WriteStringsToStream(Stream: TStream; Values: TStrings);
var
MS: TMemoryStream;
Size: Int64;
begin
MS := TMemoryStream.Create;
try
Values.SaveToStream(MS, TEncoding.UTF8);
Size := MS.Size;
WriteInt64ToStream(Stream, Size);
if Size > 0 then
begin
MS.Position := 0;
Stream.CopyFrom(MS, Size);
end;
finally
MS.Free;
end;
end;
procedure ReadStringsFromStream(Stream: TStream; Values: TStrings);
var
MS: TMemoryStream;
Size: Int64;
begin
Size := ReadInt64FromStream(Stream);
MS := TMemoryStream.Create;
try
if Size > 0 then
begin
MS.CopyFrom(Stream, Size);
MS.Position := 0;
end;
Values.LoadFromStream(MS, TEncoding.UTF8);
finally
MS.Free;
end;
end;
Another option is to write the number of string elements in the TStrings object to your output stream, and then write the individual strings:
procedure WriteStringsToStream(Stream: TStream; Values: TStrings);
var
Count, I: Integer;
begin
Count := Values.Count;
WriteIntegerToStream(Stream, Count);
for I := 0 to Count-1 do
WriteStringToStream(Stream, Values[I]);
end;
procedure ReadStringsFromStream(Stream: TStream; Values: TStrings);
var
Count, I: Integer;
begin
Count := ReadIntegerFromStream(Stream);
if Count > 0 then
begin
Values.BeginUpdate;
try
for I := 0 to Count-1 do
Values.Add(ReadStringFromStream(Stream));
finally
Values.EndUpdate;
end;
end;
end;
Either way, you can then do this when streaming your individual records:
WriteIntegerToStream(SavingStream, Version);
WriteStringToStream(SavingStream, Name);
...
WriteStringsToStream(SavingStream, ColList1);
WriteStringsToStream(SavingStream, ColList2);
WriteBooleanToStream(SavingStream, ReadyToRun);
Version := ReadIntegerFromStream(SavingStream);
Name := ReadStringFromStream(SavingStream);
...
ReadStringsFromStream(SavingStream, ColList1);
ReadStringsFromStream(SavingStream, ColList2);
ReadyToRun := ReadBooleanFromStream(SavingStream);
| {
"pile_set_name": "StackExchange"
} |
Q:
Relating to Inelastic Collisions
There is a very simple equation for an inelastic collision but it really only applies to 2d scenarios:
$$v = \frac{(m_1 u_1 + m_2 u_2)}{(m_1+m_2)}$$
What would be the equation for an inelastic collision in a 3d environment?
A:
Short answer: Same thing.
Longer answer: recall that momentum ($p$) and velocity ($v$) are both vectors. You're used to vectors in one or, at most, two dimensions, but we can expand the vector space into arbitrarily many dimensions, so that your vector $p$ would have components $(p_1, p_2, ... , p_n)$ for $n$ linearly independent components. For a three-dimensional collision problem you would just have $(p_x, p_y, p_z)$ for each momentum vector and $(v_x, v_y, v_z)$ for each velocity vector. Just like in a two-dimensional collision problem, each component $x$, $y$, and additionally in this case $z$ of the total momentum is conserved.
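To make the componentwise claim concrete, here is a small hypothetical sketch (the function name and Vec3 alias are mine):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Perfectly inelastic collision: both bodies share one final velocity,
// the mass-weighted average of the initial velocities, component by component.
Vec3 inelasticFinalVelocity(double m1, const Vec3& u1,
                            double m2, const Vec3& u2) {
    Vec3 v{};
    for (int i = 0; i < 3; ++i)
        v[i] = (m1 * u1[i] + m2 * u2[i]) / (m1 + m2);
    return v;
}
```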
| {
"pile_set_name": "StackExchange"
} |
Q:
Visual Studio 2013 fails to build from network share
I have a project on a Mac which I'm trying to build, over a network share, on a PC.
However Visual Studio reports:
1>LINK : fatal error LNK1201: error writing to program database
'X:\XYZ\Builds\VisualStudio2013\Debug\XYZ.pdb'; check for insufficient
disk space, invalid path, or insufficient privilege
Yet:
>dir XYZ.pdb
Directory of X:\XYZ\Builds\VisualStudio2013\Debug
20/04/2015 17:32 9,456,640 XYZ.pdb
0 Dir(s) 15,825,752,064 bytes free
There it is, created by Visual Studio a second ago, so it must have write permissions and there's plenty of disk space. I've had a poke around the permissions and I can't see what's wrong.
Any suggestions on how to make this work? It'd be very handy!
EDIT: I've upgraded SMB on the mac to the latest version and that's not helped either!
A:
tl;dr Try adding a veto oplock files setting to your smb.conf that specifies the .pdb file of your VS solution per the instructions here.
One possibility is that the supposed lack of permissions is due to Visual Studio itself attempting to open the file multiple times. I've seen weird issues like this with Visual Studio when building on a network share, even when the network share was running on a Windows server. However, on Windows Server, there's a timeout in such scenarios (default of 35 seconds) after which it would allow the second file handle to open and then continue about its business. I ended up having to watch Wireshark traces of the SMB traffic to figure out what was going on there.
IIRC, the exchange went something like:
VS opens a handle to the .pdb. By default, it receives exclusive access since it's the only handle open to the file at the time.
VS attempts to open a second handle to the .pdb, but it's currently open for exclusive access already.
The SMB server sends a message to ask the client holding the first handle (which also happens to be VS) whether it's ok to switch the handle into shared access mode.
VS is unable to respond to the server's request because it's blocked waiting on a response from the server to its request to open the second handle.
A timeout expires on the SMB server, after which it forcibly switches the handle into shared mode. This timeout is enabled by default on Windows SMB/CIFS servers.
The SMB server now allows the second file handle to open and VS continues along its merry way.
This behavior is called 'opportunistic locking' and Microsoft has a document on how to configure it on Windows file servers here. OplockBreakWait is the amount of time (in seconds) that the server will wait for a client to respond to an oplock break request.
However, it sounds like perhaps the oplock break request timeout might not be happening on your samba implementation. There's a document here in samba's documentation on how to configure oplocks in samba. It mentions an option that can be specified in smb.conf called veto oplock files that allows you to specify certain files for which oplocking will be disabled. You could try adding your .pdb to the veto oplock files setting in your smb.conf to see if this fixes your problem.
Incidentally, the samba document linked above mentions that its default oplock break wait time is 0, which I would assume disables the timeout, resulting in the behavior you're seeing. Unfortunately, the link in that document to the actual documentation for the oplock break wait time option is broken, so I can't confirm for sure that that's what it does. According to the linked samba documentation, "Samba recommends: “Do not change this parameter unless you have read and understood the Samba oplock code,”" so you're probably better off attempting to modify veto oplock files than oplock break wait time unless reading the source code of samba is your idea of fun. :)
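As a sketch, the smb.conf change might look like the following (the share name and path here are hypothetical; veto oplock files takes a /-delimited pattern list):

```
[projects]
    path = /srv/projects
    veto oplock files = /*.pdb/
```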
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I register a file association in Ubuntu
I'm in the process of creating an Ubuntu installer and I need to register my own file extension. I can't find any examples of how to do this.
Could people provide me with some script snippets on how to do this?
NB: I'm using InstallJammer to help me create the installer, which allows me to call external scripts - so that is why a script would be beneficial.
A:
The recommended way is to write an Ubuntu package, for example with CDBS (overview, docs, examples, bonus non-cdbs tutorial).
Then you add $PACKAGE.sharedmimeinfo and $APPNAME.desktop files in the debian/ directory. The sharedmimeinfo file describes the file type, the desktop file describes your app. The latter should contain a MimeType=application/x-$APPNAME; that matches the filetype.
A:
An InstallJammer installer won't integrate very well with the distribution, but here goes.
Use xdg-mime install and xdg-mime default to set up the mimetype and associate it.
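For example (the file names and MIME type here are hypothetical), myapp-mime.xml would be a shared-mime-info XML file containing a glob pattern for your extension; the commands then register the type and make your application's .desktop file its default handler:

```
xdg-mime install myapp-mime.xml
xdg-mime default myapp.desktop application/x-myapp
```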
| {
"pile_set_name": "StackExchange"
} |
Q:
Can an object other than a black hole have an event horizon?
I haven't done the mathematics of it, but is the radius of a potentially naturally occurring object, such as a star held stable by quark degenerate matter or hypothesized preon degenerate matter, small enough that the object has an event horizon? I know you can just say that if you had some object of this density and radius then it has an event horizon, but is it possible in nature?
Also, what defines a black hole? Is it the fact that it is an object with an escape velocity equal to the speed of light, or is it something that collapses to infinite density? In other words, would a star with an event horizon, held together by some degenerate matter pressure, count?
No idea if any of this is correct. Apparently the laws of physics break down inside a black hole, so even though the pressure may be great enough just considering forces, it couldn't actually exist.
A:
If it is small enough to have an event horizon it will crush to form a singularity, at least given GR. This is because once a horizon forms, everything inside is compelled to move toward the center: all time-like paths, which all known particles (and, assuming strict causality, all particles that can exist) are restricted to following, end up hitting the center. So the mass of a neutron star MUST crush into the center once it is inside a horizon radius.
But you can have horizons without black holes, just not an object shielded by one that is not a black hole. There is an event horizon far from Earth due to the expansion of the Universe.
| {
"pile_set_name": "StackExchange"
} |
Q:
Adding a component to jpanel in netbeans
I have been trying for the past few hours to figure out how to add a label component to a window, but to no avail. I have created a new desktop application project in NetBeans and it comes with pre-generated code. I want to add a label to it but it just does not show. I am unsure as to why, because I am following the normal panel.add(component) convention.
Would really appreciate some help! I pasted the full file source code here http://pastebin.com/qJk6bSWn .
Any ideas?
A:
What layout is your JPanel using? If it's using the Netbeans GUI builder default of free design you won't be able to manually add components. You'll need to set it to some layout manager.
Parts of your gui can have the Free Design layout, but you'll need to change the layout of the components that you want to manually add to.
| {
"pile_set_name": "StackExchange"
} |
Q:
Auto-generate VB.NET forms using SQL Server 2008
I want to automatically generate VB.NET forms using tables from a SQL Server database (one form per database table). It is perhaps possible by writing custom code for it, but if there is already some feature that does the job, that would be great (the database has 40+ tables, so doing this manually is a tedious task).
Any answers, help, links, tips is greatly appreciated.
Regards,
Ayub
A:
It takes just a minute to fix; all the functionality already exists in Visual Studio.
Fire up Visual Studio, click on Add new datasource... to start the Data Source Configuration Wizard:
Select Database and follow the wizard:
When connected to the database, select the tables you are interested in and press the Finish button:
Now, this will create a strongly typed dataset in your solution; if you double-click the xsd file you'll see the tables you selected in the schema editor, but leave that for now:
Now, select "Show Data Sources" from the Data menu and you will see all the tables you selected in the wizard. To the left of each field is an icon that tells what type of control that field will be represented by on the resulting form:
Now you can decide how the data will be presented on the form, as a DataGridView or in details mode; just use the dropdown on the table name (only when in form design mode).
If you have selected details-mode on the table, then you can change what control the field will be represented by (must be in form-design-mode, not code-mode):
Then just drag the table from the data source view to an empty form and it will magically create controls to edit/add/delete and move around the data.
This is the result if DataGridView-mode was selected:
And if Details was selected on the table:
In the code-behind it also magically adds some code to load the data into the adapter when the form loads, and some save/validating code:
Private Sub AccountBindingNavigatorSaveItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles AccountBindingNavigatorSaveItem.Click
Me.Validate()
Me.AccountBindingSource.EndEdit()
Me.AccountTableAdapter.Update(Me.MyDBDataSet.Account)
End Sub
Private Sub Form2_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
'TODO: This line of code loads data into the 'MyDBDataSet.Account' table. You can move, or remove it, as needed.
Me.AccountTableAdapter.Fill(Me.MyDBDataSet.Account)
End Sub
| {
"pile_set_name": "StackExchange"
} |
Q:
Error when using pow(a,b)
#include <stdio.h>
#include<math.h>
int main() {
double pi = 3.14159;
double R, A;
scanf("%lf", &R);
A = (pi)* pow(double R, int 2);
printf("%lf\n", A);
return 0;
}
This code is returning the error: Too few arguments to function 'pow'
Why?
A:
Try calling the function as follows:
A = pi * pow(R, 2);
and see if the error persists.
The parentheses around pi are not necessary (but they do not cause an error either). The error was because you were invoking the function with the data types, which is not needed.
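For reference, the corrected call wrapped in a small helper (the function name is mine; a fixed R is shown in the test so the result is deterministic):

```cpp
#include <cmath>

// pow() takes values, not type names. For R = 2 this returns
// 3.14159 * 4 = 12.56636.
double circleArea(double R) {
    const double pi = 3.14159;
    return pi * std::pow(R, 2.0);
}
```

In the original program you would keep the scanf call and print circleArea(R).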
| {
"pile_set_name": "StackExchange"
} |
Q:
JPQL Statement Invalid unbound variable "serial_number" in query
How can I make this work? I want to count serial numbers with a value of 1. Thanks guys, I am a beginner.
serial_Number = 1;
Query query = (Query) es.em.createQuery("SELECT COUNT(p.serial_Number) FROM Product p where p.product_Id = serials_Number");
A:
The following should work
Query query = (Query) es.em.createQuery("SELECT COUNT(p.serial_Number) FROM Product p where p.product_Id =:serial_Number").setParameter("serial_Number", serial_Number);
Then use..
query.getSingleResult();
to get the singular entity.
| {
"pile_set_name": "StackExchange"
} |
Q:
working principle of the ndgrid matlab function
I'm trying to migrate a MATLAB algorithm to C++. I'm a newbie in MATLAB.
I want to know the working principle of the ndgrid MATLAB function, or whether there are implementations of this function in C++.
Thank you.
A:
MATLAB can generate C code for you.
See:
link
The generated code does, however, depend on MATLAB libraries.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is JMS suitable for an online game?
I'd like to throw together a small game and put it online. It would be multiplayer (ideally it would be MMO, but it's a side project, so I'll settle for MO hehe), the content is rather unimportant. I'm planning on writing the game (server and client) in Java.
I'm considering options I have for getting information around reliably. Will JMS be sufficient for this? Will I need more (if so, what)? Are there better alternatives?
I've made a few games in the past, but nothing multiplayer. I work with an app that uses JMS, and there's plenty of tutorials, so that's why I figured it would work... but I'm really open to anything.
Thanks!
Edit: It appears I have a lot to learn about JMS. Perhaps my question should be rephrased to be: "What implementation of JMS will best serve my purposes for an MMO?"
Criteria thus far:
Free
Low overhead
Easy to configure
A:
JMS would assume that your players are all on the same local network. I don't think it would work as well if your game is played over the Internet.
A:
Don't forget that JMS is an API and doesn't specify an implementation. I suspect that for a game you're going to require prompt delivery, and choosing an implementation may depend on attributes including this.
You may want to check out JGroups. It is enormously configurable and can be used to implement many different messaging patterns. You can choose to enforce reliability, ordering etc. and tune for different applications / clients etc.
| {
"pile_set_name": "StackExchange"
} |
Q:
iOS Push notification Not working
I have to implement a push notification service in my application, so I created an App ID with push notifications enabled in production. The very first time I install the application on my device, the allow/don't-allow push notification pop-up appears, but even after tapping Allow it does not generate the push notification. The next time I launch it, it won't ask with any pop-up, and I'm not able to generate a device token. Please help me with this.
Thanks,
Nikhil.CH
A:
First you need to be sure that you are getting the token or an error after displaying the "allow" popup. The methods to register for notifications had a breaking change in iOS 8, and if you use the old one it will fail silently.
Here's a snippet:
if ([application respondsToSelector:@selector(registerUserNotificationSettings:)])
{
// iOS 8
[application registerUserNotificationSettings:[UIUserNotificationSettings settingsForTypes:(UIUserNotificationTypeSound | UIUserNotificationTypeAlert | UIUserNotificationTypeBadge) categories:nil]];
}
else{
//.. the old one
}
Most of the time, when something fails it is due to mismatched provisioning profiles and certificates, or because you are testing behind a firewall or VPN.
To test them I use PUSHER, a wonderful piece of software.
| {
"pile_set_name": "StackExchange"
} |
Q:
array_push vs $str .= in PHP
Which of the two has the best performance?
In JavaScript, I heard Douglas Crockford say that you shouldn't use str += if you are concatenating a large string, but should use array.push instead.
I've seen lots of code where developers use $str .= to concatenate a large string in PHP as well, but since "everything" in PHP is based on arrays (try dumping an object), my thought was that the same rule applies for PHP.
Can anyone confirm this?
A:
Strings are mutable in PHP, so using .= does not have the same effect in PHP as using += in JavaScript. That is, you will not end up with two different strings every time you use the operator.
See:
php String Concatenation, Performance
Are php strings immutable?
A:
.= is for strings.
array_push() is for arrays.
They aren't the same thing in PHP. Using one on the other will generate an error.
| {
"pile_set_name": "StackExchange"
} |
Q:
Find a Value in a Container
I have created a simple function to determine if a container contains a value. It allows for certain conditions to be set as well: where to start searching for the value, which way to search, whether to find the first or the last occurrence of the value, and whether or not to include the starting point. I want its return value to be either the index of the matching element, or -1 if no matches were found.
template<class Container, class T>
int Vec_Find(const Container& c, int size, const T& val, int pivot_idx = 0,
bool find_first = true, bool ltr = true, bool include_pivot = true)
{
if (find_first && ltr)
{
if (include_pivot) pivot_idx++;
int i = std::distance(std::begin(c),
std::find(std::begin(c) + pivot_idx, std::end(c), val));
if (i != size) return i;
}
else if (find_first && !ltr)
{
if (!include_pivot) pivot_idx--;
int i = std::distance(std::find(std::rbegin(c) + (size - pivot_idx - 1),
std::rend(c), val), std::rend(c) - 1);
if (i != -1) return i;
}
else if (!find_first && ltr)
{
if (include_pivot && (c[pivot_idx] == val)) return pivot_idx;
int i = std::distance(std::find(std::rbegin(c),
std::rend(c) - (pivot_idx + 1), val), std::rend(c) - 1);
if (i != pivot_idx) return i;
}
else
{
if (include_pivot) pivot_idx++;
int i = std::distance(std::begin(c),
std::find(std::begin(c), std::begin(c) + pivot_idx, val));
if (i != pivot_idx) return i;
}
return -1;
}
I am a student to C++ and am looking for ways to code better, I appreciate all suggestions.
To Editors: I wasn't sure if I should leave my long lines on one line or fit to screen, please let me know if they should be on one line, and I will update it.
A:
Parameters
As a starting point, I'd usually advise against passing bools as parameters--especially passing more than one bool to a particular function. I have better ways to spend my time than memorizing the meaning of:
foo(..., false, false, true);
vs.
foo(..., false, true, false);
I suppose I could barely see something like this if you were doing something like passing some bools, and it was putting them together into a bit set, so true, false, true basically just meant 101, but for something like this where each has a unique meaning, it seems unusually difficult to read.
Basic Utility
At least in my opinion, however, that nearly pales compared to the mere fact that I don't see much point in the function existing at all.
People who are accustomed to C++ are generally fairly accustomed to how the standard library works. Most of us would rather get an iterator than an index. If we want to search from end to beginning, passing rbegin() and rend() to find makes that intent fairly clear.
For example, if I want to search for the last item, starting 4 from the end, I'd have a much easier time understanding this:
auto p = std::find(foo.rbegin()+4, foo.rend(), value);
...rather than:
int p = Vec_find(foo, foo.size(), value, 4, false, true, true);
or:
int p = Vec_find(foo, foo.size(), value, 4, true, false, true);
[At least as I read things, those are basically the same--either find the last item, or find the first, starting from the end. Oh, and pardon me if I'd passed the wrong thing for the size parameter--I couldn't figure out exactly what it was supposed to do.]
Granted, iterators do tend to lead to fairly verbose code, but in this case, using your code can be even more verbose. I'm pretty sure when ranges become generally available (and sooner, for people who only need to target one compiler, or don't mind using Boost) quite a few people will welcome them with open arms. They can reduce verbosity quite a bit. Given a choice, however, between using this and using the standard library directly, I'd take the standard library without a second thought.
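To make the comparison concrete, here is a hypothetical sketch of the two most common cases expressed directly with the standard library, still returning -1 on failure like the original (the function names are mine):

```cpp
#include <algorithm>
#include <vector>

// First occurrence of val at or after index pivot (0 <= pivot <= size).
int findFirstFrom(const std::vector<int>& v, int val, int pivot) {
    auto it = std::find(v.begin() + pivot, v.end(), val);
    return it == v.end() ? -1 : static_cast<int>(it - v.begin());
}

// Last occurrence of val anywhere, via reverse iterators.
int findLast(const std::vector<int>& v, int val) {
    auto it = std::find(v.rbegin(), v.rend(), val);
    return it == v.rend() ? -1 : static_cast<int>(v.rend() - it) - 1;
}
```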
| {
"pile_set_name": "StackExchange"
} |
Q:
Deploy rails react app with webpacker gem on AWS elastic beanstalk
I'm trying to deploy a rails 5.1 & react app created with webpacker gem using AWS Elastic Beanstalk. The problem is I keep getting the following error:
Webpacker requires Node.js >= 6.0.0 and you are using 4.6.0
I'm using Node 9.5.0 on my computer. Any suggestions??
A:
For those that run into needing to also install Yarn, I have found the below just worked for me:
commands:
01_install_yarn:
command: "sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo && curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash - && sudo yum install yarn -y"
02_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
03_install_nodejs:
command: yum -y install nodejs
A:
To install nodejs using yum (assuming you're using the default Amazon Linux)
https://nodejs.org/en/download/package-manager/#enterprise-linux-and-fedora
curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
yum -y install nodejs
Now to execute this on your instances, you need to add the required commands to a config file inside the .ebextensions dir, something like: .ebextensions/01_install_dependencies.config
File contents:
commands:
01_download_nodejs:
command: curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
02_install_nodejs:
command: yum -y install nodejs
| {
"pile_set_name": "StackExchange"
} |
Q:
Using Gridify after images finish loading asynchronously
I am building an app in Backbone.js and using gridify to arrange my photos in a grid fashion. The problem is that the images get loaded asynchronously and this seems to be causing gridify issues. Is there a way to call the gridify function after all async calls are finished? I tried placing the call in post render, but this does not seem to help. Here is my code:
In TemplateContext:
var images = [];
this.model.get('images').each(function (imageModel) {
var image = imageModel.toJSON()
images.push(image);
}, this);
return {
images: images,
view:this
}
In Post Render:
var options =
{
srcNode: 'img', // grid items (class, node)
margin: '20px', // margin in pixel, default: 0px
width: '250px', // grid item width in pixel, default: 220px
max_width: '', // dynamic grid item width if specified, (pixel)
resizable: true, // re-layout if window resize
transition: 'all 0.5s ease' // support transition for CSS3, default: all 0.5s ease
}
$('.grid').gridify(options);
and in my template:
<div id="activity" class = "grid" style="border: 1px solid black;">
{{#if images }}
{{#each images}}
<img src ="?q=myApp/api/get_images&imgId={{id}}&imgSize=medium" >
{{/each}}
{{else}}
No images have been uploaded
{{/if}}
</div>
This causes all the images to be stacked on top of each other. I think it's because gridify doesn't know what the dimensions should be due to the fact the images get loaded after gridify is called. If I just use static images in my template, it works fine. Anyone have any suggestions on how to handle this either in backbone or with gridify?
thanks
jason
A:
What I ended up doing was creating a div with the given dimensions. Then I placed the images in the div and used gridify on the div itself. That way, gridify does its thing to the divs and the images can load later.
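An alternative, if you'd rather keep gridify operating on the images themselves, is to delay the gridify() call until every image has actually loaded, so the plugin sees real dimensions. A minimal sketch follows; the helper name whenAllLoaded is mine, not part of gridify or Backbone:

```javascript
// Resolve once every image in the list has loaded (or errored),
// so that layout code only runs when real dimensions are known.
function whenAllLoaded(images) {
  return Promise.all(images.map(function (img) {
    if (img.complete) return Promise.resolve(img); // already loaded (e.g. cached)
    return new Promise(function (resolve) {
      img.onload = function () { resolve(img); };
      img.onerror = function () { resolve(img); }; // don't hang on a broken image
    });
  }));
}
```

In postRender you could then write something like whenAllLoaded($('.grid img').get()).then(function () { $('.grid').gridify(options); });.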
| {
"pile_set_name": "StackExchange"
} |
Q:
How to export an Influxdb database which is in a docker container?
I want to export my influxdb database as mentioned here, but my InfluxDB is in a docker container, and I'm still a beginner with it. How can I execute the following command inside my container? influxdb backup -database name /opt/data
A:
You can do a docker exec & docker cp to backup your data in case you have not used host mounts -
docker exec -it $CONTAINER_ID bash -c "influxdb backup -database name /opt/data"
Once your data is generated in /opt/data, copy it to the current directory on Docker host -
docker cp $CONTAINER_ID:/opt/data ./
| {
"pile_set_name": "StackExchange"
} |
Q:
how to write multi-line functions in cider repl?
I'm using Clojure in Emacs with a Cider REPL.
Is it possible to have multi-line functions in a cider REPL? How can I do this?
A:
C-j should do it. As a bonus you get indentation.
| {
"pile_set_name": "StackExchange"
} |
Q:
WCF is throwing UnauthorizedAccessException after upgrade to .NET 4.0
I have a pretty simple client-server ASP.NET app; communication is via a WCF service. All worked perfectly in VS 2008; now I upgraded to VS 2010 and every time the client code tries to instantiate a channel to the server:
new ChannelFactory<IMemberService>("Members.MemberService").CreateChannel();
it throws an UnauthorizedAccessException "Access denied".
Were there any breaking changes in the new version, or do I need to add some configuration?
I'm pretty stuck, any ideas would be highly appreciated!
BTW, binding used is basicHttpBinding, I don't know if thats important here.
Thank you,
Andrey
A:
Please disregard, turned out to be configuration error
| {
"pile_set_name": "StackExchange"
} |
Q:
Decrypt a GPG file using a batch file
I am decrypting a gpg file using a batch file with the below code.
gpg.exe --output test.csv --batch --passphrase-fd 0 --decrypt WSB330TJ.CSTDJIDF.TXT.asc.14.04.22_00.59.gpg
Although it does decrypt the file, I have to enter the passphrase manually.
How can I improve it so that it automatically pick the passphrase and decrypt the file without any manual intervention?
What should I add here?
A:
You tell GnuPG to read the passphrase from stdin by using --passphrase-fd 0. There are different options to read the passphrase, from man gpg:
--passphrase-fd n
Read the passphrase from file descriptor n. Only the first line
will be read from file descriptor n. If you use 0 for n, the
passphrase will be read from STDIN. This can only be used if only
one passphrase is supplied.
--passphrase-file file
Read the passphrase from file file. Only the first line will be
read from file file. This can only be used if only one passphrase
is supplied. Obviously, a passphrase stored in a file is of ques-
tionable security if other users can read this file. Don't use this
option if you can avoid it.
--passphrase string
Use string as the passphrase. This can only be used if only one
passphrase is supplied. Obviously, this is of very questionable
security on a multi-user system. Don't use this option if you can
avoid it.
If you use GnuPG 2, remember to use --batch, otherwise the passphrase options will be ignored.
If you stored the passphrase in a file, use --passphrase-file password.txt, if you want to pass it as a string use --passphrase "f00b4r" (both times using appropriate parameter values, of course).
@Thierry noted in the comments that (especially when using Windows) you should make sure to end the file with a Unix line feed (\n / LF) instead of a Windows carriage return + line feed (\r\n / CRLF).
A:
For myself I had to do gpg --batch --passphrase "MyPassword" --decrypt-files C:\PGPFiles\*.pgp. Putting --passphrase anywhere but before --decrypt-files would always prompt me for the password. Also, as Thierry said, when I was using a password file I had to use Notepad++ to convert it to Unix style (Edit -> EOL -> Unix/OSX Format).
| {
"pile_set_name": "StackExchange"
} |
Q:
Identity providers that work with Cosmosdb
What identity providers work with Cosmos DB? I have a Xamarin Forms mobile app and an Angular 4 web app, and I want users to be able to log in to either to get at their data. I want to avoid having to write a massive middleware program, but a resource token broker app would be ok.
I am surprised that there is no native support for Azure AD B2C.
I have tried using Azure AD B2C MSAL but it doesn't work very well and it's very buggy. I would have thought this would have been high on Microsoft's to do list.
A:
Cosmos DB supports master keys (a primary/secondary but both have full control) and "resource tokens". These resource tokens can be created by giving a user in Cosmos DB access to a specific resource.
See
https://docs.microsoft.com/en-us/rest/api/documentdb/access-control-on-documentdb-resources
https://docs.microsoft.com/en-us/azure/cosmos-db/secure-access-to-data
As for how to integrate with Azure AD B2C. There is no native integration, Azure AD B2C cannot by itself issue resource tokens for Cosmos DB. However I imagine you could implement a micro-service that would authenticate a user using Azure AD B2C, validate the ID token returned by Azure AD B2C and then use the Cosmos DB client library to generate a resource token.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can we / should we mark every visa question as a duplicate?
Every single question that asks "do I need a visa" is answered on the IATA Travel Centre. There's no point in trying to guess what rules apply to a certain person when the IATA has it all. So... should we?
A:
No, for several reasons:
The IATA website asks for a lot of irrelevant and sometimes hard to understand questions. I do not find it easy to use and I am very familiar with the regulations, e.g. in the Schengen area.
IATA does not have it all. We already have much more info in a series of question and askers still come up with corner cases and things that are not covered, or at least not covered explicitly.
Consider this question, asked today: Can I travel to EU states with a Bulgarian Blue Card? How can you find out using the website you mentioned? Are you confident that an EU Blue Card is simply a “residence permit”? Do you fill that in as travel document or somewhere else? If I enter what I believe is the right info, I end up on a page that only mentions residence permits for EEA nationals' family members. Does the lack of any other exemption positively mean that there is none? You need a lot of faith both in the website and in your ability to use it to rely on the lack of any explicit info as an answer to the question!
IATA is not an authoritative source and could be wrong. Granted, I don't have any specific example and if they are wrong, you are still in trouble because you will probably face someone who will rely on the TIMATIC database to deny boarding but still. We already have great material here, with references to the controlling regulations and lots of practical tips, what would be the point of linking to an inferior source?
The IATA website only says “visa required”, not how to obtain one, under what conditions a given visa is valid, what can happen at the border (all this is a huge part of our visa questions).
Questions are only marked as duplicate when they have been asked here. We could still close visa questions as “trivial” but then we might as well close most questions because the info is out there somewhere.
Also, what about land borders?
| {
"pile_set_name": "StackExchange"
} |
Q:
Can't fully rotate object with quaternion rotation enabled
In the process of animating, I have come to realise that my rig is doing a strange twirl, which isn't desired.
It's very easy to reproduce: simply open a new file, set the cube's rotation mode to Quaternion, keyframe it at its base position, and then keyframe it 360 degrees around any axis. When you hit play, it doesn't rotate.
EDIT:
Thanks to FFeller for suggesting intermediate keyframes, which solve the problem on the basic cube, but on my rig, which was generated by rigify, the issue still occurs. It seems eerily similar to a gimbal lock problem, but aren't quaternions supposed to solve that problem?
As such, how can I solve this problem, as more intermediate keyframes don't seem to work.
A:
Quaternion components are only in the range of -1 to 1, and so 0° and 360° (and 720° and 1080° etc.) produce essentially the same quaternion. That's why the cube doesn't rotate: it has nowhere to go.
Quaternions also generally take the shortest path. That's why you need multiple keyframes. The algorithm might be deciding that it's quicker to go back the way you came vs. go forward, which gives you undesirable rotations. More keyframes should solve it.
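To see the 0°/360° collapse in actual numbers: the scalar (w) component of a unit quaternion for a rotation of θ degrees about a fixed axis is cos(θ/2). A quick sketch of just that arithmetic (plain math, no Blender API assumed):

```javascript
// Scalar (w) component of the unit quaternion for a rotation of `deg`
// degrees about a fixed axis: w = cos(theta / 2).
function quatW(deg) {
  const rad = deg * Math.PI / 180;
  return Math.cos(rad / 2);
}
```

quatW(0) is 1 and quatW(360) is -1, but q and -q represent the same orientation, so an animation from one keyframe to the other has nothing to travel through, hence the need for intermediate keyframes (e.g. at 120° and 240°).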
| {
"pile_set_name": "StackExchange"
} |
Q:
How to save value of check boxes in android?
I am using this code and now its working fine
ProcessList.setOnItemClickListener(new OnItemClickListener()
{
@Override
public void onItemClick(AdapterView<?> a, View v, int position, long id)
{
AlertDialog.Builder adb=new AlertDialog.Builder(location.this);
adb.setMessage("Selected Item is = "+ProcessList.getItemAtPosition(position));
adb.setPositiveButton("Ok", null);
adb.show();
}
});
A:
Use getSelectedItem(); it returns the Editable, and if you need the string use getSelectedItem().toString().
| {
"pile_set_name": "StackExchange"
} |
Q:
How to inject JavaScript callback to detect onclick event, using iOS WKWebView?
I'm using a WKWebView to show a website which has some HTML that includes three buttons. I want to run some Swift code in the native app when a specific button is clicked.
About the HTML
The three buttons look like this:
<input type="button" value="Edit Info" class="button" onclick="javascript:GotoURL(1)">
<input type="button" value="Start Over" class="button" onclick="javascript:GotoURL(2)">
<input type="button" value="Submit" class="button" onclick="javascript:GotoURL(3)">
The GotoURL function they call looks like this:
function GotoURL(site)
{
if (site == '1')
document.myWebForm.action = 'Controller?op=editinfo';
if (site == '2')
document.myWebForm.action = 'Controller?op=reset';
if (site == '3')
document.myWebForm.action = 'Controller?op=csrupdate';
document.myWebForm.submit();
}
Current WKWebView Implementation
When I click any of the buttons in the webview, this function is called on my WKNavigationDelegate:
func webView(_ webView: WKWebView, didStartProvisionalNavigation navigation: WKNavigation!) {
// ...?
}
But of course, navigation is opaque and therefore contains no information about which of the three buttons the user clicked.
What's the simplest way to detect when this button is clicked?
I want to respond when the user clicks Submit and ignore other button presses.
I see some other approaches on Stack Overflow using WKUserContentController, but they appear to require the web site to call something like:
window.webkit.messageHandlers.log.postMessage("submit");
I do not control this website so I cannot add this line in its source code, and I don't know the best way to inject it in the correct place using WKWebView.
A:
User Scripts are JS that you inject into your web page at either the start of the document load or after the document is done loading. User scripts are extremely powerful because they allow client-side customization of web page, allow injection of event listeners and can even be used to inject scripts that can in turn call back into the Native app. The following code snippet creates a user script that is injected at end of document load. The user script is added to the WKUserContentController instance that is a property on the WKWebViewConfiguration object.
// Create WKWebViewConfiguration instance
var webCfg:WKWebViewConfiguration = WKWebViewConfiguration()
// Setup WKUserContentController instance for injecting user script
var userController:WKUserContentController = WKUserContentController()
// Get script that's to be injected into the document
let js:String = buttonClickEventTriggeredScriptToAddToDocument()
// Specify when and where and what user script needs to be injected into the web document
var userScript:WKUserScript = WKUserScript(source: js,
injectionTime: WKUserScriptInjectionTime.atDocumentEnd,
forMainFrameOnly: false)
// Add the user script to the WKUserContentController instance
userController.addUserScript(userScript)
// Configure the WKWebViewConfiguration instance with the WKUserContentController
webCfg.userContentController = userController;
Your web page can post messages to your native app via the window.webkit.messageHandlers.<name>.postMessage(<message body>) method.
Here, “name” is the name of the message being posted back. The JS can post back any JS object as message body and the JS object would be automatically mapped to corresponding Swift native object.
The following JS code snippet posts back a message when a button click event occurs on a button with Id “ClickMeButton”.
var button = document.getElementById("clickMeButton");
button.addEventListener("click", function() {
var messageToPost = {'ButtonId':'clickMeButton'};
window.webkit.messageHandlers.buttonClicked.postMessage(messageToPost);
},false);
In order to receive messages posted by your web page, your native app needs to implement the WKScriptMessageHandler protocol.
The protocol defines a single required method. The WKScriptMessage instance returned in the callback can be queried for details on the message being posted back.
func userContentController(userContentController: WKUserContentController,
didReceiveScriptMessage message: WKScriptMessage) {
if let messageBody:NSDictionary= message.body as? NSDictionary{
// Do stuff with messageBody
}
}
Finally, the native class that implements WKScriptMessageHandler protocol needs to register itself as a message handler with the WKWebView as follows:
// Add a script message handler for receiving "buttonClicked" event notifications posted
// from the JS document
userController.addScriptMessageHandler(self, name: "buttonClicked")
A:
You can always inject source code using evaluateJavaScript(_:) on the web view. From there, either replace the event handler on the buttons (posting a message and then invoking the original function in the new handler) or add an event handler on the buttons or an ancestor element that captures click events and posts a message. (The original event handler will also run.)
document.getElementById("submit-button").addEventListener("click", function () {
window.webkit.messageHandlers.log.postMessage("submit");
});
If the buttons don’t have an ID, you can add a handler on the document that captures all (bubbling) click events and then posts a message based on the event target (using the button’s text or location as a determinant).
A:
You can inject JS code using evaluateJavaScript() function to the WKWebView. You can capture all onclick events with following JS code, which detect only the click on submit button. All original event handlers will also run - don't worry!
document.addEventListener("click", function(e)
{
e = e || window.event;
//if(e.target.value == 'Submit') //you can identify this button like this, but better like:
if(e.target.getAttribute('onclick') == 'javascript:GotoURL(3)')
//if this site has more elements with onclick="javascript:GotoURL(3)"
//if(e.target.value == 'Submit' && e.target.getAttribute('onclick') == 'javascript:GotoURL(3)')
{
console.log(e.target.value);
//if you need then you can call the following line on this place too:
//window.webkit.messageHandlers.log.postMessage("submit");
}
});
//DO NOT add the next line in your code! It is only for my demo - buttons need this in my demo for execution
function GotoURL(site){}//DO NOT add this line! It is only for my demo!
<input type="button" value="Edit Info" class="button" onclick="javascript:GotoURL(1)">
<input type="button" value="Start Over" class="button" onclick="javascript:GotoURL(2)">
<input type="button" value="Submit" class="button" onclick="javascript:GotoURL(3)">
| {
"pile_set_name": "StackExchange"
} |
Q:
Failed to convert message exception with RabbitMQ source + Log sink stream
I'm trying out Spring Cloud Data Flow and today I updated to the latest versions and since that I'm not able to create this simple example which should simple log the AMQP message...
rabbit | log
When I deploy this stream and simply publish a String message on the consumed queue, this works ok. But when it is a serialized PoJo it does not. The older versions of the data flow server + started apps based on spring boot 1.5.x did just do this.
Caused by: org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'output'; nested exception is java.lang.IllegalStateException: Failed to convert message: 'GenericMessage [payload={"absolute_path":"/+~JF4472914347363856925.tmp","filename":"+~JF4472914347363856925.tmp","timestamp":1536315010932,"sshd_server":"localhost","sshd_port":22}, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedRoutingKey=sftp.uploaded, amqp_receivedExchange=exchange, amqp_deliveryTag=1, amqp_consumerQueue=sftp_uploaded, amqp_redelivered=false, id=d3d84d90-53ca-4c39-cdef-8665d35ddcf1, amqp_consumerTag=amq.ctag-8kP4KDjn13oae1Qutmw4IA, contentType=text/json, timestamp=1536315013950}]' to outbound message.
at org.springframework.integration.support.utils.IntegrationUtils.wrapInDeliveryExceptionIfNecessary(IntegrationUtils.java:163) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:475) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:394) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:181) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:160) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:108) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:203) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter.access$600(AmqpInboundChannelAdapter.java:60) ~[spring-integration-amqp-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.createAndSend(AmqpInboundChannelAdapter.java:240) ~[spring-integration-amqp-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$Listener.onMessage(AmqpInboundChannelAdapter.java:207) ~[spring-integration-amqp-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1414) ~[spring-rabbit-2.0.4.RELEASE.jar!/:2.0.4.RELEASE]
... 22 common frames omitted
Caused by: java.lang.IllegalStateException: Failed to convert message: 'GenericMessage [payload={"absolute_path":"/+~JF4472914347363856925.tmp","filename":"+~JF4472914347363856925.tmp","timestamp":1536315010932,"sshd_server":"localhost","sshd_port":22}, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedRoutingKey=sftp.uploaded, amqp_receivedExchange=exchange, amqp_deliveryTag=1, amqp_consumerQueue=sftp_uploaded, amqp_redelivered=false, id=d3d84d90-53ca-4c39-cdef-8665d35ddcf1, amqp_consumerTag=amq.ctag-8kP4KDjn13oae1Qutmw4IA, contentType=text/json, timestamp=1536315013950}]' to outbound message.
at org.springframework.cloud.stream.binding.MessageConverterConfigurer$OutboundContentTypeConvertingInterceptor.doPreSend(MessageConverterConfigurer.java:324) ~[spring-cloud-stream-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.cloud.stream.binding.MessageConverterConfigurer$AbstractContentTypeInterceptor.preSend(MessageConverterConfigurer.java:351) ~[spring-cloud-stream-2.0.1.RELEASE.jar!/:2.0.1.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel$ChannelInterceptorList.preSend(AbstractMessageChannel.java:589) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:435) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
... 32 common frames omitted
Versions
spring-cloud-dataflow-server-local:1.6.2.RELEASE
Darwin-SR1-stream-applications-kafka-maven
A:
There were significant changes and enhancements in content type negotiation in Spring Cloud Stream 2.*. You can read up on it here https://docs.spring.io/spring-cloud-stream/docs/Fishtown.BUILD-SNAPSHOT/reference/htmlsingle/#content-type-management.
Basically what I am seeing is that you don't have an appropriate MessageConverter in the stack.
Also, from what I see your content type is text/json for which we do not provide a converter. Consider changing it to application/json.
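If changing the producer isn't possible, one option is to force the content type on the consumer binding in application.properties. This is a sketch under the assumption that your input binding is named input; substitute your actual channel name:

```properties
# Tell Spring Cloud Stream to treat inbound payloads on this binding as JSON
spring.cloud.stream.bindings.input.contentType=application/json
```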
| {
"pile_set_name": "StackExchange"
} |
Q:
How to convert objects into an array?
I have stored data in local storage in an array. Now I want to remove certain item from that array present in local storage.
For that first I have
var items = JSON.parse(localStorage.getItem('shoppingCart'));
var resturantid = localStorage.getItem('resturant_id');
var filtered = [];
for (var q = 0; q < items.length; q++) {
if (items[q].resturantid == resturantid) {
filtered.push(items[q]);
}
}
console.log(typeof filtered, filtered);
output is OBJECT
in console
(2) [{…}, {…}]
0: {name: "veg-momo", price: 12, count: 8, resturant: "Test Developer", resturantid: 2, …}
1: {name: "afdafasdf", price: 123, count: 4, resturant: "Test Developer", resturantid: 2, …}
length: 2
__proto__: Array(0)
typeof gives me object and because of this I haven't been able to use map function as it says array.map is not a function.
I want these things to happen just to remove certain item in local storage key in which array is set.
A:
Your object is something like {0: {…}, 1: {…}} you need to make an array of values like [{...}, {...}]! To do so use Object.values() and then apply map() or filter() or reduce() as per your program needs.
let obj = Object.assign({},[{name: "veg-momo", price: 12, count: 8, resturant: "Test Developer", resturantid: 2}, {name: "afdafasdf", price: 123, count: 4, resturant: "Test Developer", resturantid: 2}]);
// obj.map(x => {console.log(x)}); //error: Uncaught TypeError: obj.map is not a function
Object.values(obj).map(x => console.log(x)); // Works!
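Once you have a real array, removing the matching items before writing the cart back to localStorage is just a filter. A sketch under the question's data shape (the field name resturantid is copied from it; the helper name is mine):

```javascript
// Remove every cart item belonging to the given restaurant id.
function removeByRestaurant(items, restaurantId) {
  return items.filter(function (item) {
    return item.resturantid !== restaurantId;
  });
}
```

Then localStorage.setItem('shoppingCart', JSON.stringify(removeByRestaurant(items, Number(resturantid)))) writes the pruned cart back. Note that localStorage.getItem() returns a string, so coerce the id with Number() before the strict comparison.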
| {
"pile_set_name": "StackExchange"
} |
Q:
How to stop Jquery dropdown from scrolling with page?
So I'm having a problem with a jQuery dropdown: if I scroll down the page, the dropdown will appear farther down the page rather than where it should be, directly under the button.
<!doctype html>
<html>
<head>
<title></title>
<link href="/Nexus 5/Website style.css" type="text/css" rel="stylesheet"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript">
var timeoutID;
$(function(){
$('.dropdown').mouseenter(function(){
$('.sublinks').stop(false, true).hide();
window.clearTimeout(timeoutID);
var submenu = $(this).parent().next();
submenu.css({
position:'absolute',
top: $(this).offset().top + $(this).height() + 'px',
left: $(this).offset().left + 'px',
zIndex:1000
});
submenu.stop().slideDown(300);
submenu.mouseleave(function(){
$(this).slideUp(300);
});
submenu.mouseenter(function(){
window.clearTimeout(timeoutID);
});
});
$('.dropdown').mouseleave(function(){
timeoutID = window.setTimeout(function() {$('.sublinks').stop(false, true).slideUp(300);}, 250);
});
});
</script>
</head>
<body>
<div id="header" style="position: fixed; ">
<img src="smalllogo.png" style="float:left; width:180px; height:73px;">
<div class="nav">
<ul>
<li><a href="home.html">Home</a></li>
<li><a href="Nexus 5/Nexus 5 (home).html" class="dropdown" >Nexus 5</a></li>
<li class="sublinks">
<a href="Nexus 5/Nexus 5 (info).html">Info</a>
<a href="Nexus 5/Nexus 5 (root & unlock).html">Root & Unlock</a>
<a href="Nexus 5/Nexus 5 (recovery).html">Recoveries</a>
<a href="Nexus 5/Nexus 5 (roms).html">ROMs</a>
<a href="Nexus 5/Nexus 5 (kernels).html">Kernels</a>
<a href="Nexus 5/Nexus 5 (other).html">Other</a>
</li>
<li><a href="Galaxy S4/GS4 (home).html" class="dropdown">Galaxy S4</a></li>
<li class="sublinks">
<a href="Galaxy S4/GS4 (info).html">Info</a>
<a href="Galaxy S4/GS4 (root & unlock).html">Root & Unlock</a>
<a href="Galaxy S4/GS4 (recovery).html">Recoveries</a>
<a href="Galaxy S4/GS4 (roms).html">ROMs</a>
<a href="Galaxy S4/GS4 (other).html">Other</a>
</li>
<li><a href="../about.html">About</a></li>
</ul>
</div>
</div>
Then, when rolling over while scrolled down, the jQuery dropdown links appear way down the page.
This can be viewed live from androiddevelopmentdepot.com
A:
I couldn't understand why you're calculating its top position. Could fixing the top value solve your problem?
Try this:
submenu.css({
position:'absolute',
top: 75,
left: $(this).offset().left + 'px',
zIndex:1000
});
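The underlying issue is mixing coordinate systems: offset() returns document coordinates, but the header is position: fixed, which lives in viewport coordinates. If you'd rather have the submenu track the button than hard-code top: 75, convert document to viewport coordinates by subtracting the scroll position. The arithmetic, factored out so it's easy to check (the function name is mine):

```javascript
// Viewport-relative top for a position:fixed submenu, given the trigger's
// document offset top, its height, and how far the page has scrolled.
function fixedMenuTop(triggerOffsetTop, triggerHeight, scrollTop) {
  return triggerOffsetTop + triggerHeight - scrollTop;
}
```

i.e. set position: 'fixed' and top: fixedMenuTop($(this).offset().top, $(this).height(), $(window).scrollTop()) + 'px' (and subtract $(window).scrollLeft() from the left value similarly).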
| {
"pile_set_name": "StackExchange"
} |
Q:
Better way to check when variable equals to another value
I have a variable that I need to check frequently when my game is running. To simplify this, the check is: if (score >= achievement1) {do something}.
It seemed overkill to put this check in the Update() function, which is called every frame.
So instead, I call the InvokeRepeating function in Start():
InvokeRepeating ("checkscoreforachievement", 0f, 2f);
Is this smart, or is there a better way of doing this? The end result should be that within a few seconds of reaching a certain score, the achievement is triggered.
The reason I'm asking is that there are a few more things I need to do regularly when my game is running, so I'll end up with quite a few of these processes. Wondering if that isn't too much of a resource drain. Can't find good documentation on this subject.
A:
No, InvokeRepeating is not better in this case.
It's not better because you are calling that function every 2 seconds, which means that when score >= achievement1 evaluates to true, it can take up to 2 seconds to detect that. Also, InvokeRepeating uses reflection, which is slow. There is no advantage to using it here. Using the Update function like you are currently doing is totally fine.
A much better solution than using the Update function would be to check score >= achievement1 only when score is changed, using a property setter:
public int achievement1 = 0;
private float _score;
public float score
{
get
{
return _score;
}
set
{
if (_score != value)
{
_score = value;
if (_score >= achievement1)
{
//Do something
}
}
}
}
This will only do the check when the score variable is set or changed instead of every frame but the Update code is fine.
| {
"pile_set_name": "StackExchange"
} |
Q:
How do i compare php object element of type bool
I have a PHP object with an element of type bool, but if I compare it to == true or == 1 or anything, it doesn't match. If I dump it out it returns an empty string, but if I run is_bool() on it I get true. So PHP knows the element is a boolean, but for some reason I can't compare it?
var_dump($obj);
object(stdClass)#1628 (33) {
["BoolElement"]=>
bool(true)
}
echo is_bool($obj->BoolElement); // true
Any ideas?
A:
Use === instead.
The identical operator === compares both value and type, so the check is stricter.
| {
"pile_set_name": "StackExchange"
} |
Q:
Embed a View into a render array
In Drupal 7, I used the following code implementing views_embed_view in a render array.
$view_output = views_embed_view($view_name, $display_id, $uid);
$page['profile']['bio_mobile'] = array(
'#type' => 'markup',
'#markup' => $view_output,
'#prefix' => '<div class="mobile-display">',
'#suffix' => '</div>',
);
How do I do that using Drupal 8? Although the function is still documented, it returns nothing more than the array I passed in.
A:
Indeed, #markup only allows strings, but you can do it by using a block type:
$view_output = views_embed_view($view_name, $display_id, $uid);
$page['profile']['bio_mobile'] = array(
'#type' => 'block',
'content' => [
'system_main' => $view_output,
],
'#prefix' => '<div class="mobile-display">',
'#suffix' => '</div>',
);
A:
I don't know about using #markup but I can create a variable with a renderable View array like so:
// Staff snippet.
$view = Views::getView('staff');
$view->setDisplay('user_snippet');
$view->preExecute();
$view->execute();
if (count($view->result)) {
$variables['user_snippet'] = $view->buildRenderable('user_snippet');
}
Then in the twig file I am sending the variable to:
{% if user_snippet %}
{{ user_snippet }}
{% endif %}
| {
"pile_set_name": "StackExchange"
} |
Q:
Is the comment/answer reputation reversed?
It appears that a user needs 150 rep to comment. So I see a lot of new users posting their comments as answers because, what other choice do they have? A user with less than 150 rep doesn't have the experience to not comment (as an answer). So that leaves it to the mature community to clean up after these newbies with moderator flags.
Perhaps the reputation system should be reversed..
Make it take 150 rep to answer. Comments could be made at any rep, perhaps without allowing external links in comments by users with less than 150 rep. Comments would earn rep at some rate, say 2 points per upvote per 24-hour period that the comment is available for viewing. Only make comments earn rep for users with less than 150 rep.
That way you allow the community to choose when a user is mature enough to start posting answers.
** Replace all the numeric values with variables and tweak it to your heart's content. I'm just proposing that the incentive be given to let newbies comment instead of posting comments as answers.
A:
The purpose of having a minimum reputation to comment is to discourage people from commenting. And while there are new users who do post their comments as answers, there are plenty more who never attempt to post comments. As exemplified by the fact that users frequently come to MSO asking about how they can post comments on a question without the rep.
This is a Q&A site. We want to encourage questions and answers, not questions and comments. We do this by limiting the size of comments, limiting their editability, and getting new users used to the Q&A paradigm by preventing them from using the comment feature everywhere.
Your suggestion would encourage users to post answers as comments. That is far more detrimental to the purpose of the site than posting comments as answers.
| {
"pile_set_name": "StackExchange"
} |
Q:
Scale wpf Bar Chart manually
I am charting counts of scanned computers, so the axis scale can not use values like 1.5, 2, 2.5; it should use whole numbers like 1, 2, 3, 4.
my current code is
ICollection<KeyValuePair<String, int>> data = new Dictionary<String, int>();
data.Add(new KeyValuePair<string, int>(Protocol, protocolCount));
mycolseries = new ColumnSeries
{
ItemsSource = data,
Title = Protocol,
IndependentValuePath = "Key",
DependentValuePath = "Value",
};
mainChart.Series.Add(mycolseries);
If I change the dependent value to "Key", it gives strange errors like "new should be used with the invocation or element, or has not been initialized".
A:
public Window1()
{
    setChartScale();
}
private void setChartScale()
{
lamainChart.Interval = 1;
lamainChart.Orientation = AxisOrientation.Y;
lamainChart.ShowGridLines = true;
//lamainChart.Maximum = 50;
lamainChart.Minimum = 0;
}
Then call this method repeatedly, once per series you add:
private void addRecursiveLedgendAfterInit(string Protocol, int protocolCount)
{
ICollection<KeyValuePair<String, int>> data = new Dictionary<String, int>();
data.Add(new KeyValuePair<string, int>(Protocol, protocolCount));
mycolseries = new ColumnSeries
{
ItemsSource = data,
Title = Protocol,
IndependentValuePath = "Key",
DependentRangeAxis =lamainChart,
DependentValuePath = "Value"
};
mainChart.Series.Add(mycolseries);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
FileNotFound exception
I am trying to read a file in a servlet. I am using eclipse IDE.
I get a FileNotFoundException if I provide a relative file name.
List<String> ls=new ArrayList<String>();
Scanner input = new Scanner(new File("Input.txt"));
while(input.hasNextLine()) {
ls.add(input.nextLine());
}
The same code works if I put the absolute path like this:
Scanner input = new Scanner(new File("F:/Spring and other stuff/AjaxDemo/src/com/pdd/ajax/Input.txt"));
The Java file and text file are there in the same folder.
Does it search for the text file in some other folder?
A:
Setting the Working Directory
One option you have when working inside of Eclipse is to set the working directory in your launch configuration. To do this:
Navigate to Run | Run Configurations...
Select your configuration in the left hand pane
Select the Arguments tab
Navigate to the Working directory section
Select Other
Enter in your desired base directory
You can validate this in a test by printing:
System.getProperty("user.dir")
This has the benefit of not changing your code for production vs. test.
Recommendation
However, the best approach is to always be explicit about the working directory by way of configuration. This puts the working directory under the direct control of your application and away from tools and servlet containers such as Eclipse and Tomcat. To do this, you would use the following File constructor
new File(parent, file)
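To make the recommendation concrete, here is a small, self-contained sketch; `Input.txt` mirrors the question, while the class name and the choice of base directory are just illustrative:

```java
import java.io.File;

public class WorkingDirDemo {
    public static void main(String[] args) {
        // Relative paths are resolved against the JVM's working directory:
        System.out.println(System.getProperty("user.dir"));

        // Being explicit about the parent directory avoids surprises.
        File base = new File(System.getProperty("user.dir"));
        File input = new File(base, "Input.txt");

        // The resulting path is absolute regardless of how the JVM was launched.
        System.out.println(input.getAbsolutePath());
    }
}
```

With an explicitly configured `base` (taken from `user.dir` here for demonstration; in a servlet you would read it from configuration or `ServletContext.getRealPath`), the file lookup no longer depends on how Eclipse or Tomcat was launched.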
| {
"pile_set_name": "StackExchange"
} |
Q:
Collapse 3 Text Colums into 1 within a wide Pandas DataFrame
I have a dataset with one data type spread across multiple columns. I'd like to reduce these to a single column. I have a function that accomplishes this, but it's a cumbersome process and I'm hoping there's a cleaner way to do it. Here's a toy sample of my data:
UID COMPANY EML MAI TEL
273 7UP nan nan TEL
273 7UP nan MAI nan
906 WSJ nan nan TEL
906 WSJ EML nan nan
736 AIG nan MAI nan
What I'd like to get to:
UID COMPANY CONTACT_INFO
273 7UP MT
906 WSJ ET
736 AIG M
I've solved this by writing a function that converts EML, MAI or TEL to a prime number, aggregates the results then converts the sum into the constituent contact types. This works, and is reasonably quick. Here's a sample:
def columnRedux(df):
newDF = df.copy()
newDF.fillna('-', inplace=True)
newDF['CONTACT_INFO'] = newDF['EML'] + newDF['MAI'] + newDF['TEL']
newDF.replace('EML--', 7, inplace=True)
newDF.replace('-MAI-', 101, inplace=True)
newDF.replace('--TEL', 1009, inplace=True)
small = newDF.groupby(['UID', 'COMPANY'], as_index=False)['CONTACT_INFO'].sum()
small.replace(7, 'E', inplace=True)
small.replace(101, 'M', inplace=True)
small.replace(108, 'EM', inplace=True)
small.replace(1009, 'T', inplace=True)
small.replace(1016, 'ET', inplace=True)
small.replace(1110, 'MT', inplace=True)
small.replace(1117, 'EMT', inplace=True)
return small
df1 = pd.DataFrame(
{'EML' : [np.nan, np.nan, np.nan, 'EML', np.nan, np.nan, 'EML', np.nan, np.nan, 'EML', 'EML', np.nan],
'MAI' : [np.nan, 'MAI', np.nan, np.nan, 'MAI', np.nan, np.nan, np.nan, 'MAI', np.nan, np.nan, 'MAI'],
'COMPANY' : ['7UP', '7UP', 'UPS', 'UPS', 'UPS', 'WSJ', 'WSJ', 'TJX', 'AIG', 'CDW', 'HEB', 'HEB'],
'TEL' : ['TEL', np.nan, 'TEL', np.nan, np.nan, 'TEL', np.nan, 'TEL', np.nan, np.nan, np.nan, np.nan],
'UID' : [273, 273, 865, 865, 865, 906, 906, 736, 316, 458, 531, 531]},
columns=['UID', 'COMPANY', 'EML', 'MAI', 'TEL'])
cleanDF = columnRedux(df1)
My issue is that I have several data sets, each with its own set of "wide" columns. Some have 5+ columns to be reduced. Hard coding the conversions for all of the variations is not trivial. Is there a cleaner way to accomplish this?
A:
Maybe not the nicest solution, but one option would be to use a simple groupby and filter the included elements:
df = df.groupby(['UID','COMPANY'])[['EML','MAI','TEL']]\
.apply(lambda x: ''.join(sorted([i[0] for y in x.values for i in y if pd.notnull(i)])))\
.reset_index()\
.rename(columns={0:'CONTACT_INFO'})
Alternatively, you could convert the grouped dataframes to type str, replace the strings, and sum. Quite readable, I'd say.
m = {
'nan':'',
'EML':'E',
'MAI':'M',
'TEL':'T'
}
df = df.groupby(['UID','COMPANY'])[['EML','MAI','TEL']]\
.apply(lambda x: x.astype(str).replace(m).sum().sum())\
.reset_index()\
.rename(columns={0:'CONTACT_INFO'})
Full example:
import io
import pandas as pd
import numpy as np
data = '''\
UID COMPANY EML MAI TEL
273 7UP nan nan TEL
273 7UP nan MAI nan
906 WSJ nan nan TEL
906 WSJ EML nan nan
736 AIG nan MAI nan'''
fileobj = io.StringIO(data)  # pd.compat.StringIO was removed in pandas 1.0
df = pd.read_csv(fileobj, sep='\s+').replace('NaN',np.nan)
# use a nested list comprehension to flatten the array and remove nans.
df = df.groupby(['UID','COMPANY'])[['EML','MAI','TEL']]\
.apply(lambda x: ''.join(sorted([i[0] for y in x.values for i in y if pd.notnull(i)])))\
.reset_index()\
.rename(columns={0:'CONTACT_INFO'})
print(df)
Returns:
UID COMPANY CONTACT_INFO
273 7UP MT
736 AIG M
906 WSJ ET
dtype: object
A:
Let's try this:
(df1.set_index(['UID','COMPANY']).notnull() * df1.columns[2:].str[0])\
.sum(level=[0,1]).sum(1).reset_index(name='CONTACT_INFO')
Output:
UID COMPANY CONTACT_INFO
0 273 7UP MT
1 865 UPS EMT
2 906 WSJ ET
3 736 TJX T
4 316 AIG M
5 458 CDW E
6 531 HEB EM
Split up for @AntonvBR:
df2 = df1.set_index(['UID','COMPANY'])
df_out = ((df2.notnull() * df2.columns.str[0])
.sum(level=[0,1]) #consolidate rows of contact info to one line
.sum(1) #sum across columns to create one column
.reset_index(name='CONTACT_INFO'))
print(df_out)
Output:
UID COMPANY CONTACT_INFO
0 273 7UP MT
1 865 UPS EMT
2 906 WSJ ET
3 736 TJX T
4 316 AIG M
5 458 CDW E
6 531 HEB EM
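As an aside, the multiplication in `notnull() * df1.columns[2:].str[0]` works because `bool` is a subclass of `int` in Python, so multiplying a string by a boolean repeats it once or zero times. A tiny pandas-free illustration of the same idea (the `row` dict is made up for demonstration):

```python
# bool is an int subclass, so str * bool is string repetition by 1 or 0.
row = {"EML": "EML", "MAI": None, "TEL": "TEL"}

contact = "".join(
    col[0] * (val is not None)  # "E" * True -> "E", "M" * False -> ""
    for col, val in sorted(row.items())
)
print(contact)  # -> ET
```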
| {
"pile_set_name": "StackExchange"
} |
Q:
"You know" in spoken English
Possible Duplicate:
How to use “you know”
Why is "you know" so commonly used in spoken English? Or to phrase it differently, why do native speakers use this expression a lot in spoken English? Is it a good way of speaking? Does it have anything to do with a particular country, culture, etc.?
A:
In most cases it's a space-filler with no meaning at all other than "er..."; or it means "well you do know, just give me a few moments to think of what it's called..."
"Where are you going?"
"I'm going to the, you know, shopping mall."
| {
"pile_set_name": "StackExchange"
} |
Q:
Shared secondary axes in matplotlib
How to set a shared secondary axes using subplots in matplotlib.
Here is the minimal code to display the issue:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
def countour_every(ax, every, x_data, y_data,
color='black', linestyle='-', marker='o', **kwargs):
    """Draw a line with contour marks at every `every` points"""
line, = ax.plot(x_data, y_data, linestyle)
return line
def prettify_axes(ax, data):
"""Makes my plot pretty"""
if 'title' in data:
ax.set_title(data['title'])
if 'y_lim' in data:
ax.set_ylim(data['y_lim'])
if 'x_lim' in data:
ax.set_xlim(data['x_lim'])
# Draw legend only if labels were set (HOW TO DO IT?)
# if ax("has_some_label_set"):
ax.legend(loc='upper right', prop={'size': 6})
ax.title.set_fontsize(7)
ax.xaxis.set_tick_params(labelsize=6)
ax.xaxis.set_tick_params(direction='in')
ax.xaxis.label.set_size(7)
ax.yaxis.set_tick_params(labelsize=6)
ax.yaxis.set_tick_params(direction='in')
ax.yaxis.label.set_size(7)
def prettify_second_axes(ax):
ax.yaxis.set_tick_params(labelsize=7)
ax.yaxis.set_tick_params(labelcolor='red')
ax.yaxis.label.set_size(7)
def compare_plot(ax, data):
line1 = countour_every(ax, 10, **data[0])
if 'label' in data[0]:
line1.set_label(data[0]['label'])
line2 = countour_every(ax, 10, **data[1])
if 'label' in data[1]:
line2.set_label(data[1]['label'])
ax2 = ax.twinx()
line3 = ax.plot(
data[0]['x_data'],
data[0]['y_data']-data[1]['y_data'], '-',
color='red', alpha=.2, zorder=1)
prettify_axes(ax, data[0])
prettify_second_axes(ax2)
d0 = {'x_data': np.arange(0, 10), 'y_data': abs(np.random.random(10)), 'y_lim': [-1, 1], 'color': '.7', 'linestyle': '-', 'label': 'd0'}
d1 = {'x_data': np.arange(0, 10), 'y_data': -abs(np.random.random(10)), 'y_lim': [-1, 1], 'color': '.7', 'linestyle': '--', 'label': 'd1'}
d2 = {'x_data': np.arange(0, 10), 'y_data': np.random.random(10), 'y_lim': [-1, 1], 'color': '.7', 'linestyle': '-.'}
d3 = {'x_data': np.arange(0, 10), 'y_data': -np.ones(10), 'y_lim': [-1, 1], 'color': '.7', 'linestyle': '-.'}
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
fig.set_size_inches(6, 6)
compare_plot(axes[0][0], [d0, d1])
compare_plot(axes[0][1], [d0, d2])
compare_plot(axes[1][0], [d1, d0])
compare_plot(axes[1][1], [d3, d2])
fig.suptitle('A comparison chart')
fig.set_tight_layout({'rect': [0, 0.03, 1, 0.95]})
fig.text(0.5, 0.03, 'Position', ha='center')
fig.text(0.005, 0.5, 'Amplitude', va='center', rotation='vertical')
fig.text(0.975, 0.5, 'Error', color='red', va='center', rotation='vertical')
fig.savefig('demo.png', dpi=300)
That generates the following image
We can see that the X axis and the Y axis are correctly shared, but the secondary twin axis is repeated in all subplots.
Also, the secondary axis isn't scaling correctly to fit the data (and that should occur independently of the principal y axis being limited).
A:
You will need to share the twin axes manually and also remove the tick labels:
def compare_plot(ax, data):
# ...
ax2 = ax.twinx()
# ...
return ax2
sax1 = compare_plot(axes[0][0], [d0, d1])
sax2 = compare_plot(axes[0][1], [d0, d2])
sax3 = compare_plot(axes[1][0], [d1, d0])
sax4 = compare_plot(axes[1][1], [d3, d2])
for sax in [sax2, sax3, sax4]:
sax1.get_shared_y_axes().join(sax1, sax)
sax1.autoscale()
for sax in [sax1,sax3]:
sax.yaxis.set_tick_params(labelright=False)
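As a side note, the `Grouper.join` approach has been deprecated in recent matplotlib releases. A self-contained sketch of the same manual-sharing idea using the `Axes.sharey` method (available since matplotlib 3.3), with made-up data ranges:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
twins = [ax.twinx() for ax in axes]

# Give each twin axis data with a very different range.
twins[0].plot(np.arange(10), np.arange(10) * 0.1, color="red")
twins[1].plot(np.arange(10), np.arange(10) * 100.0, color="red")

# Share the twin y-axes and rescale them to the union of both data sets.
twins[1].sharey(twins[0])
for twin in twins:
    twin.autoscale()

# Hide the duplicate tick labels on the left subplot's twin axis.
twins[0].yaxis.set_tick_params(labelright=False)
```

After this, autoscaling or zooming either twin axis keeps the other in sync, which is exactly what the sharex/sharey flags of plt.subplots already do for the primary axes.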
| {
"pile_set_name": "StackExchange"
} |
Q:
How To Disable Button when editext is empty in android
i want to disable send button if my msgtext(editext) is empty
i have a dialogbox which have edittext and send button
when dialog box is open
and check if edittext is empty then my button is disable
here is my code
public void openDialog(){
dialog = new Dialog(CommentsActivity.this);
dialog.requestWindowFeature(Window.FEATURE_NO_TITLE);
dialog.setContentView(R.layout.dialogcommentlayout);
dialog.getWindow().getAttributes().windowAnimations = R.style.DialogAnimation;
dialog.getWindow().setGravity(Gravity.FILL);
dialog.setCanceledOnTouchOutside(false);
msgtext=(EditText)dialog.findViewById(R.id.et_sent_msg);
msgtext.addTextChangedListener(textWatcher);
checkFieldsForEmptyValues();
//buttton for send comment
send=(Button)dialog.findViewById(R.id.sent_msg);
send.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v)
{
new sendReply().execute();
}
});
mHlvSimpleList= (ListView) dialog.findViewById(R.id.feedlist);
mHlvSimpleList.setOnItemClickListener(new OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> parent, View view,
int position, long id) {
// TODO Auto-generated method stub
Toast.makeText(CommentsActivity.this, "Listview", Toast.LENGTH_SHORT).show();
}
});
dialog.show();
//asynktask to show feed comment in dialog
}
private void checkFieldsForEmptyValues(){
String s1 = msgtext.getText().toString().trim();
if(s1.equals(""))
{
send.setEnabled(false);
}
}
When my debugger reaches
if(s1.equals(""))
{
send.setEnabled(false);
}
it throws a NullPointerException at the line
send.setEnabled(false);
Please tell me where I am going wrong, and what I should change.
Thank you
A:
Your send Button is still null when you execute send.setEnabled(false); inside checkFieldsForEmptyValues(). You have to initialize it first and then call that method, so it isn't null. Like this:
//buttton for send comment
send=(Button)dialog.findViewById(R.id.sent_msg);
checkFieldsForEmptyValues();
This will solve your NullPointerException issue.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I make this massive Ruby if/elsif statement more compact and cleaner?
The following if/elsif statement is clearly a behemoth. The purpose of it is to change the phrasing of some text based on if certain data has been filled in by the user. I feel like there's got to be a better way to do this without taking up 30+ lines of code, but I'm just not sure how since I'm trying to customize the text pretty significantly based on the data available.
if !birthdate.blank? && !location.blank? && !joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} in #{location}. #{sex} passed away on #{death.strftime("%B %e, %Y")} at the age of #{calculate_age(birthdate, death)}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{distance_of_time_in_words(joined,death)}.</p>"
elsif !birthdate.blank? && !location.blank? && !joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} in #{location} and is #{time_ago_in_words(birthdate)} old. #{sex} has been a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{time_ago_in_words(joined)}.</p>"
elsif birthdate.blank? && !location.blank? && !joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born in #{location}. #{sex} passed away on #{death.strftime("%B %e, %Y")}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{distance_of_time_in_words(joined,death)}.</p>"
elsif birthdate.blank? && !location.blank? && !joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born in #{location}. #{sex} has been a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{time_ago_in_words(joined)}.</p>"
elsif birthdate.blank? && location.blank? && !joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{distance_of_time_in_words(joined,death)}. #{sex} passed away on #{death.strftime("%B %e, %Y")}.</p>"
elsif birthdate.blank? && location.blank? && !joined.blank? && death.blank?
"<p class='birthinfo'>#{name} has been a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{time_ago_in_words(joined)}.</p>"
elsif !birthdate.blank? && location.blank? && !joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")}. #{sex} passed away on #{death.strftime("%B %e, %Y")} at the age of #{calculate_age(birthdate, death)}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{distance_of_time_in_words(joined,death)}.</p>"
elsif !birthdate.blank? && location.blank? && !joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} and is #{time_ago_in_words(birthdate)} old. #{sex} has been a member of #{link_to user.login, profile_path(user.permalink)}'s family for #{time_ago_in_words(joined)}.</p>"
elsif !birthdate.blank? && !location.blank? && joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} in #{location}. #{sex} passed away on #{death.strftime("%B %e, %Y")} at the age of #{calculate_age(birthdate, death)}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
elsif !birthdate.blank? && !location.blank? && joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} in #{location} and is #{time_ago_in_words(birthdate)} old. #{sex} is a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
elsif !birthdate.blank? && location.blank? && joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")}. #{sex} passed away on #{death.strftime("%B %e, %Y")} at the age of #{calculate_age(birthdate, death)}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
elsif !birthdate.blank? && location.blank? && joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} and is #{time_ago_in_words(birthdate)} old. #{sex} is a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
elsif birthdate.blank? && !location.blank? && joined.blank? && !death.blank?
"<p class='birthinfo'>#{name} was born in #{location}. #{sex} passed away on #{death.strftime("%B %e, %Y")}. #{sex} was a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
elsif birthdate.blank? && !location.blank? && joined.blank? && death.blank?
"<p class='birthinfo'>#{name} was born in #{location}. #{sex} is a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
else
"<p class='birthinfo'>#{name} is a member of #{link_to user.login, profile_path(user.permalink)}'s family.</p>"
end
A:
I think that you want to make all of these conditions more readable, and eliminate the repetition that exists both in your logic checks and in your string creation.
The first thing that I notice is that you repeat this all over the place:
<p class='birthinfo'>#{name} was born in
I would try to factor these out into either functions that take arguments and return formatted text, or classes that evaluate into expressions.
You're also not taking advantage of nesting at all. Instead of having every condition check the value of four expressions, you should try something like this:
if birthdate.blank?
  # half of your expressions
else
  # other half of your expressions
end
It might not make your code more readable, but it's worth trying out. The last thing that I'd suggest is that you might be able to cleverly rearrange your text so that it is both easy to construct piecewise and still reads well to the end-user. Here's an example:
notice = "#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")} in #{location}."
One code snippet to generate this could be:
notice = "#{name} was born on #{birthdate.strftime("%A, %B %e, %Y")}"
unless location.blank?
  notice += " in #{location}"
end
notice += "."
That's much easier to read than the analogous code that you have, which would check each condition separately and make a totally different string depending on the values of the boolean variables. The worst part about the logic that you have now is that if you add a fifth variable, you have to add a lot more special cases to your code. Building up your code from independent pieces is much easier to read and maintain.
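Here is a minimal, framework-free sketch of that piecewise approach. Note that `blank?` is a Rails extension, so this sketch uses a hand-rolled `present?` helper, and the example values are plain strings rather than formatted dates:

```ruby
# A tiny stand-in for Rails' blank?/present? (assumption: no Rails available).
def present?(value)
  !(value.nil? || value.to_s.strip.empty?)
end

# Build the sentence piece by piece, appending only the parts we know.
def birth_notice(name, birthdate: nil, location: nil)
  notice = "#{name} was born"
  notice += " on #{birthdate}" if present?(birthdate)
  notice += " in #{location}" if present?(location)
  notice + "."
end

puts birth_notice("Rex", birthdate: "May 1, 2010", location: "Austin")
# => Rex was born on May 1, 2010 in Austin.
puts birth_notice("Rex", location: "Austin")
# => Rex was born in Austin.
```

Adding a fifth piece of data then means adding one guarded line, not doubling the number of branches.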
A:
I'd break it down into parts. DRY. Only generate a given segment of text once. Use a StringIO to keep the string generation separable.
sio = StringIO.new("")
know_birthdate, know_location, did_join, has_died = [ birthdate, location, joined, death ].map { |s| !s.blank? }
print_death = lambda do
sio.print ". #{sex} passed away on #{death.strftime("%B %e, %Y")}"
end
show_birth = know_birthdate || know_location  # use ||, not `or`: `or` binds looser than `=`
sio.print "<p class='birthinfo'>#{name} "
if show_birth
sio.print "was born"
sio.print " on #{birthdate.strftime("%A, %B %e, %Y")}" if know_birthdate
sio.print " in #{location}" if know_location
if has_died
print_death[]
sio.print " at the age of #{calculate_age(birthdate, death)}" if know_birthdate
elsif know_birthdate
sio.print " and is #{time_ago_in_words(birthdate)} old"
end
sio.print ". #{sex} "
end
sio.print "#{(has_died ? "was" : did_join ? "has been" : "is")} a member of #{link_to user.login, profile_path(user.permalink)}'s family"
sio.print " for #{distance_of_time_in_words(joined,death)}" if did_join and has_died
print_death[] if has_died and not show_birth
sio.print ".</p>"
sio.to_s
This makes the logic much easier to follow, and makes it much easier to make changes.
| {
"pile_set_name": "StackExchange"
} |
Q:
Git operations occasionally hang in Jenkins on Windows
We are running continuous Jenkins builds of a Git project hosted at Assembla.
Jenkins is running on Tomcat 6 under its own user and generally works fine.
However every once in a while (say once in every 10 builds), the checkout operation at the beginning of the build job simply hangs. At other times the Git tag operation at the end of the build also hangs. I believe this did not ever happen in command-line operation (on the same host with the same user).
When hung, the Windows process tree shows taskhost.exe → tomcat6.exe → git.exe → ssh.exe
When externally killing the Git and ssh processes of a hung tag command, the following stacktrace is seen in the job console output - the error is strange since the directory mentioned already exists and has the private key installed.
hudson.plugins.git.GitException: Command "C:/Program Files (x86)/Git/bin/git.exe push <repository> <tag-name>" returned status code 1:
stdout:
stderr: Could not create directory 'c/Users/<user-name>/.ssh'.
at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:779)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:741)
Here is a full listing of the processes and handles (except the long list of Tomcat threads). The build task is currently hung on pull. Another strange phenomenon here is that there is a scheduled SCM poll every 5 minutes, but it hasn't run for a few days - it is probably also stuck somehow.
Process PID CPU Private Working Description Company Name
Bytes Set
-----------------------------------------------------------------------------------------------------------------------------------------
System Idle Process 0 84.09 0 K 24 K
Interrupts n/a 0.28 0 K 0 K Hardware Interrupts
DPCs n/a 0.85 0 K 0 K Deferred Procedure Calls
System 4 112 K 300 K
smss.exe 240 620 K 1,196 K Windows Session Manager Microsoft Corporation
sppsvc.exe 2664 3,312 K 9,100 K Microsoft Software Protection Platform Service Microsoft Corporation
csrss.exe 344 3,516 K 5,120 K Client Server Runtime Process Microsoft Corporation
conhost.exe 1316 1,184 K 2,804 K Console Window Host Microsoft Corporation
conhost.exe 3148 1,140 K 2,696 K Console Window Host Microsoft Corporation
wininit.exe 396 1,944 K 4,624 K Windows Start-Up Application Microsoft Corporation
services.exe 496 5,096 K 10,444 K Services and Controller app Microsoft Corporation
svchost.exe 616 4,776 K 9,940 K Host Process for Windows Services Microsoft Corporation
WmiPrvSE.exe 2468 2,692 K 6,052 K WMI Provider Host Microsoft Corporation
dllhost.exe 2180 2.27 2,160 K 5,392 K COM Surrogate Microsoft Corporation
svchost.exe 692 0.28 4,512 K 8,568 K Host Process for Windows Services Microsoft Corporation
svchost.exe 776 9,804 K 12,528 K Host Process for Windows Services Microsoft Corporation
svchost.exe 832 22,052 K 34,980 K Host Process for Windows Services Microsoft Corporation
svchost.exe 888 7,988 K 14,528 K Host Process for Windows Services Microsoft Corporation
svchost.exe 944 8,844 K 15,740 K Host Process for Windows Services Microsoft Corporation
dwm.exe 800 1,692 K 4,636 K Desktop Window Manager Microsoft Corporation
dwm.exe 3908 1,800 K 4,748 K Desktop Window Manager Microsoft Corporation
svchost.exe 984 13,036 K 17,004 K Host Process for Windows Services Microsoft Corporation
svchost.exe 284 8,536 K 11,152 K Host Process for Windows Services Microsoft Corporation
spoolsv.exe 1064 9,808 K 16,696 K Spooler SubSystem App Microsoft Corporation
svchost.exe 1168 1,116 K 2,740 K Host Process for Windows Services Microsoft Corporation
Tomcat6.exe 1308 0.28 331,512 K 302,568 K Commons Daemon Service Runner Apache Software Foundation
git.exe 1812 3,308 K 4,880 K
ssh.exe 2996 2,940 K 5,792 K
vmtoolsd.exe 1352 8,260 K 12,892 K VMware Tools Core Service VMware, Inc.
VMUpgradeHelper.exe 1416 2,452 K 6,588 K VMware virtual hardware upgrade helper application VMware, Inc.
svchost.exe 1880 3,796 K 9,224 K Host Process for Windows Services Microsoft Corporation
rdpclip.exe 3020 2,584 K 7,076 K RDP Clip Monitor Microsoft Corporation
rdpclip.exe 4072 1,948 K 6,236 K RDP Clip Monitor Microsoft Corporation
svchost.exe 1932 1,964 K 5,532 K Host Process for Windows Services Microsoft Corporation
dllhost.exe 1996 4,500 K 11,340 K COM Surrogate Microsoft Corporation
msdtc.exe 1284 3,604 K 7,880 K Microsoft Distributed Transaction Coordinator Service Microsoft Corporation
taskhost.exe 2492 3,076 K 6,252 K Host Process for Windows Tasks Microsoft Corporation
taskhost.exe 3548 2,896 K 6,088 K Host Process for Windows Tasks Microsoft Corporation
lsass.exe 504 8,516 K 16,548 K Local Security Authority Process Microsoft Corporation
lsm.exe 512 3,468 K 6,480 K Local Session Manager Service Microsoft Corporation
csrss.exe 408 1,836 K 3,796 K Client Server Runtime Process Microsoft Corporation
winlogon.exe 436 1,780 K 4,392 K Windows Logon Application Microsoft Corporation
LogonUI.exe 784 7,344 K 14,460 K Windows Logon User Interface Host Microsoft Corporation
csrss.exe 2184 2,756 K 7,532 K Client Server Runtime Process Microsoft Corporation
winlogon.exe 2952 1,960 K 5,192 K Windows Logon Application Microsoft Corporation
explorer.exe 1836 23,536 K 45,060 K Windows Explorer Microsoft Corporation
VMwareTray.exe 2168 2,824 K 6,400 K VMware Tools tray application VMware, Inc.
regedit.exe 2772 6,212 K 9,584 K Registry Editor Microsoft Corporation
procexp64.exe 3648 11.93 21,904 K 37,056 K Sysinternals Process Explorer Sysinternals - www.sysinternals.com
csrss.exe 3140 2,732 K 5,612 K Client Server Runtime Process Microsoft Corporation
conhost.exe 2500 1,312 K 3,452 K Console Window Host Microsoft Corporation
winlogon.exe 3172 1,900 K 4,980 K Windows Logon Application Microsoft Corporation
explorer.exe 868 28,840 K 45,200 K Windows Explorer Microsoft Corporation
VMwareTray.exe 3300 2,672 K 6,252 K VMware Tools tray application VMware, Inc.
rundll32.exe 3328 1,828 K 5,584 K Windows host process (Rundll32) Microsoft Corporation
cmd.exe 2832 2,240 K 2,588 K Windows Command Processor Microsoft Corporation
Process: Tomcat6.exe Pid: 1308
Type Name
Desktop \Default
Directory \KnownDlls
Directory \BaseNamedObjects
Event \BaseNamedObjects\TOMCAT6SIGNAL
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\commons-daemon.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stdout.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stdout.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stderr.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stderr.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0
File C:\Users\<username>\AppData\Local\Temp\hsperfdata_<username>\1308
File C:\Program Files\Java\jre7\lib\rt.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\bin\bootstrap.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\bin\tomcat-juli.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\catalina.2012-05-30.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\localhost.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\manager.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\host-manager.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stderr.2012-05-24.log
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\logs\tomcat6-stdout.2012-05-24.log
File C:\Program Files\Java\jre7\lib\ext\dnsns.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\annotations-api.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\catalina-ant.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\catalina-ha.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\catalina-tribes.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\catalina.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\ecj-3.7.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\el-api.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\jasper-el.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\jasper.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\jsp-api.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\servlet-api.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\tomcat-coyote.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\tomcat-dbcp.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\tomcat-i18n-es.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\tomcat-i18n-fr.jar
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\lib\tomcat-i18n-ja.jar
File C:\Program Files\Java\jre7\lib\resources.jar
File C:\Program Files\Java\jre7\lib\ext\localedata.jar
File C:\Program Files\Java\jre7\lib\jsse.jar
File C:\Program Files\Java\jre7\lib\jce.jar
File C:\Program Files\Java\jre7\lib\ext\sunec.jar
File C:\Program Files\Java\jre7\lib\ext\sunmscapi.jar
File C:\Program Files\Java\jre7\lib\ext\sunjce_provider.jar
File \Device\Afd
File \Device\KsecDD
File \Device\Nsi
File C:\Windows\System32\en-US\KernelBase.dll.mui
File E:\jenkins\workspace\<job1>\.git\objects\pack\pack-f36e1122944b1a18c4f6a8dd9d38915125dffa9e.pack
File C:\Windows\Fonts\symbol.ttf
File \Device\Afd
File \Device\Afd
File C:\Program Files\Java\jre7\lib\charsets.jar
File C:\Program Files\Java\jre7\lib\charsets.jar
File \Device\Afd
File \Device\Afd
File E:\jenkins\jobs\<job2>\builds\2012-05-30_17-45-06\log
File \Device\Afd
File \Device\Afd
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aether-api-1.13.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aether-connector-wagon-1.13.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aether-impl-1.13.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aether-spi-1.13.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aether-util-1.13.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\ant-1.8.3.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\ant-launcher-1.8.3.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\aopalliance-1.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\commons-cli-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\commons-httpclient-3.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\commons-io-1.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\commons-net-3.0.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\doxia-sink-api-1.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\guava-11.0.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\jackrabbit-jcr-commons-2.2.5.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\jackrabbit-webdav-2.2.5.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\javax.inject-1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\jsch-0.1.44-1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\jsr305-1.3.9.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\lib-jenkins-maven-artifact-manager-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\lib-jenkins-maven-embedder-3.9.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-aether-provider-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-agent-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-artifact-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-compat-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-core-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-embedder-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-interceptor-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-model-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-model-<username>-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-plugin-api-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-reporting-api-3.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-repository-metadata-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-settings-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven-settings-<username>-3.0.4.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven2.1-interceptor-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven3-agent-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\maven3-interceptor-1.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\nekohtml-1.9.13.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-cipher-1.7.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-classworlds-2.3.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-component-annotations-1.5.5.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-interactivity-api-1.0-alpha-6.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-interpolation-1.14.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-sec-dispatcher-1.3.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\plexus-utils-2.0.6.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\sisu-guava-0.9.9.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\sisu-guice-3.1.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\sisu-inject-bean-2.3.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\sisu-inject-plexus-2.3.0.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\slf4j-api-1.6.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-file-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-ftp-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-http-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-http-shared-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-provider-api-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-ssh-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-ssh-common-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-ssh-external-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\wagon-webdav-jackrabbit-2.2.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\xercesImpl-2.9.1.jar
File E:\jenkins\plugins\maven-plugin\WEB-INF\lib\xml-apis-1.3.04.jar
File E:\jenkins\plugins\active-directory\WEB-INF\lib\active-directory-1.0.jar
File E:\jenkins\plugins\active-directory\WEB-INF\lib\ado20-1.0.jar
File E:\jenkins\plugins\active-directory\WEB-INF\lib\com4j-20080107.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\ant-filesystem-tasks-0.0.2.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\avalon-framework-4.1.3.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\commons-codec-1.4.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\commons-logging-1.1.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\log4j-1.2.9.jar
File E:\jenkins\plugins\artifactdeployer\WEB-INF\lib\logkit-1.0.1.jar
File E:\jenkins\plugins\git\WEB-INF\lib\annotation-indexer-1.2.jar
File E:\jenkins\plugins\git\WEB-INF\lib\bridge-method-annotation-1.4.jar
File E:\jenkins\plugins\git\WEB-INF\lib\joda-time-1.5.1.jar
File E:\jenkins\plugins\git\WEB-INF\lib\jsch-0.1.44-1.jar
File E:\jenkins\plugins\git\WEB-INF\lib\org.eclipse.jgit-0.12.1.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\activation-1.1.1.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\axis-1.4.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\commons-codec-1.4.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\commons-discovery-0.4.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\commons-logging-1.0.4.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\jaxrpc-api-1.1.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\log4j-1.2.9.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\saaj-api-1.3.jar
File E:\jenkins\plugins\jira\WEB-INF\lib\wsdl4j-1.6.2.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\guava-r05.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\junit-4.7.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-archiver-1.0-alpha-11.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-classworlds-1.2-alpha-6.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-component-api-1.0-alpha-15.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-container-default-1.0-alpha-15.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-io-1.0-alpha-3.jar
File E:\jenkins\plugins\periodicbackup\WEB-INF\lib\plexus-utils-1.5.1.jar
File E:\jenkins\plugins\subversion\WEB-INF\lib\svnkit-1.3.4-jenkins-4.jar
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File \Device\Afd
File E:\jenkins\workspace\<job2>\.git\objects\pack\pack-a25789d9a15085fdc370bf63603670b6ef0aa516.pack
File E:\jenkins\workspace\<job3>\.git\objects\pack\pack-3debf683446a7b50138fa83d20b8a176adc40d74.pack
File E:\jenkins\workspace\<job4>\.git\objects\pack\pack-08e86702e225df334aa0281cdc34f6fe04a1a896.pack
File E:\jenkins\workspace\<job5>\.git\objects\pack\pack-691d5f70f196faae3152545cc4e8a0668ee43182.pack
File E:\jenkins\workspace\<job6>\.git\objects\pack\pack-a1f605f6d4ea56878d88c9d85c40884f7c9dc2e9.pack
File E:\jenkins\workspace\<job7>\.git\objects\pack\pack-77c1adab9d7dcd56337a59bf7aa6ab1fc5423f0c.pack
File \Device\NamedPipe\
File C:\Windows\Fonts\arial.ttf
File C:\Windows\Fonts\wingding.ttf
File \Device\Afd
File E:\jenkins\workspace\<job4>\.git\objects\pack\pack-29e3fac2d55ff8f5851726098c1310848ba61982.pack
File E:\jenkins\jobs\<job2>\scm-polling.log
File C:\Windows\winsxs\amd64_microsoft.windows.common-controls_6595b64144ccf1df_5.82.7600.16385_none_a44af8ec57f961cf
File \Device\Afd
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\work\Catalina\localhost\gitblit-0.9.3\wicketFilter-filestore\9946\429\2051F8E78B66AEB8107264447FDEE93E\pm-null
File \Device\Afd
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\jenkins\WEB-INF\lib\jenkins-core-1.461.jar
File E:\jenkins\workspace\<job3>\.git\objects\pack\pack-df631556f14db215a3790cdb2dbdc2d8ec6a4dce.pack
File E:\jenkins\workspace\<job3>\.git\objects\pack\pack-41dc6597dd98bff4284be2be0437bc96b14a660f.pack
File C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\jenkins\WEB-INF\lib\jenkins-core-1.461.jar
Key HKLM\SYSTEM\ControlSet001\Control\Nls\Sorting\Versions
Key HKLM
Key HKLM\SYSTEM\ControlSet001\Control\SESSION MANAGER
Key HKU\S-1-5-21-1089811676-525746212-2675575413-1203\Control Panel\International
Key HKU\S-1-5-21-1089811676-525746212-2675575413-1203
Key HKLM\SYSTEM\ControlSet001\services\WinSock2\Parameters\Protocol_Catalog9
Key HKLM\SYSTEM\ControlSet001\services\WinSock2\Parameters\NameSpace_Catalog5
Key HKU
Key HKU\S-1-5-21-1089811676-525746212-2675575413-1203\Software\Microsoft\Windows\CurrentVersion\Explorer
Key HKU\S-1-5-21-1089811676-525746212-2675575413-1203\Software\Microsoft\Windows NT\CurrentVersion
Key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags
Key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options
Process git.exe(1812)
Section \BaseNamedObjects\hsperfdata_<username>_1308
Section \BaseNamedObjects\windows_shell_global_counters
WindowStation \Windows\WindowStations\Service-0x0-12330$
WindowStation \Windows\WindowStations\Service-0x0-12330$
Here is the SSH search (from a different point in time than the list of processes above). I don't know what to make of it.
A:
I solved it by:
setting the Path to Git executable to C:\Program Files (x86)\Git\cmd\git.exe (i.e. cmd and not bin!) and
by setting the environment variable %HOME% to $USERPROFILE
(otherwise HOME defaults to $HOMEDRIVE$HOMEPATH, which was H:\ in my case, but typically $HOME is set to $USERPROFILE).
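If it helps, one way to set HOME persistently on Windows (an assumption: any mechanism that sets HOME for the account Jenkins runs under works equally well, e.g. the System Properties dialog) is:

```
setx HOME %USERPROFILE%
```

followed by a restart of the Jenkins service so the new value is picked up.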
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I deal with a senior member who's irritated answering my questions even though it is the first time that I'm asking him about those?
I've been in the team for a couple of years and about a year back, our team absorbed the responsibilities from another team which consisted of senior/very technical people.
For this process, there was a very huge amount of information to absorb but an extremely brief transition time was given by the management. This resulted in our team members burning out because there was a lot to absorb suddenly. Our team worked till midnights everyday for 4-5 months by resorting to trial and error mostly and figuring out their own solutions.
Now most of the team members have quit the company. I am taking on the responsibilities from other people and there's still new things for me to learn.
Now I have to ask a member from the original team. For example, when I foresee that a solution will work immediately but will cause unintended consequences after a few months.
When I interact with this person, the one from the original team, he always responds to my questions in a snappy way. He also takes the stance that these are very obvious things and gets angry explaining them. I try to explain that I'm taking on this responsibility for the first time and that I'm going to ask them these questions only once, because it's the first time for me. They respond "that's okay" but still get angry with me and take up the same stance again.
I also find it rude when they behave in a similar manner in team meetings by speaking to me in a condescending manner.
How do I deal with this senior person who's behaving rudely towards me, especially in front of the whole team?
A:
I've been in the team for a couple of years and about a year back, our team absorbed the responsibilities from another team which consisted of senior/very technical people.
To me it seems like the general expectation (at least from the POV of those handing over) was that your team is now enabled to tackle things on their own.
Our team worked till midnights everyday for 4-5 months by resorting to trial and error mostly and figuring out their own solutions. Now most of the team members have quit the company.
This is the fault of your team (manager?) for not having planned for the bus factor. Given that the knowledge transfer happened a year back, no developer would feel comfortable being held responsible for it at a future date. I believe this to be the root cause of the frustration of the senior dev you mention. Why must he be helping you on a project handed over a year back? From his point of view, the expectation was that the team was already enabled. He may also need to dig through a lot of old details which he may not directly remember now, so this "boring work" can add to the frustration.
In any case, it's a coincidence that you still have this senior resource available in your organization; he himself could have switched jobs in the meantime.
I try to explain that I'm taking this responsibility for the first time
So given the general expectation above, it doesn't matter whether you are taking the responsibility for the first time. Because your team already took the knowledge, the right source for you to refer should be your team, and not him.
I also find it rude when they behave in a similar manner in team meetings by speaking to me in a condescending manner.
Is this just directed to you or your entire team? I would say it is directed towards anyone who talks to him on behalf of the team (which currently happens to be you). So don't take it so personally.
How do I deal with this senior person who's behaving rudely towards me, especially in front of the whole team?
You begin by having empathy for this person. He may have his own priorities, commitments, projects, and deadlines to meet. Being dragged into a project from which he moved on a year before does not help him meet them. This person may have also faced a lot of heat during the earlier transfer of the project, given the high expectations from management that you mention (when a bridge breaks, people on both sides get affected).
One way to move forward would be to set a 1:1 meeting/small group meeting with him, and set expectations respectfully around his time and engagement that is needed. You may want to loop in a manager to get alignment from his leadership if that is a blocker.
But at the end of the day, you have to own up to the responsibilities being given to you (or being thrust upon you?) and stop bothering the senior dev for trivial affairs. If you think this is too much for management to expect of you, I would suggest brushing up your resume :)
| {
"pile_set_name": "StackExchange"
} |
Q:
Unexpected Regular expression result using dot symbol
I'm trying to detect the file format: for instance 23.ts
if I use string.match( entities[3], 'ts$' ) -- returns ts
if I use string.match( entities[3], '\.ts$' ) --returns error
if I use string.match( entities[3], '\\.ts$' ) --returns nil
if I use string.match( entities[3], '[\\.]{1}ts$' ) --returns nil
if I use string.match( entities[3], '.ts$' ) --works
for all strings like ats, bts, cts ...
WHY? How can I detect . symbol?
A:
You need to escape . with %., i.e. use the pattern '%.ts$'. Lua patterns use % as the escape character, not \.
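A quick check of the escaped pattern, using plain strings like those in the question:

```lua
-- In Lua patterns, % (not \) escapes magic characters such as '.'
print(string.match("23.ts", "%.ts$"))  --> .ts  (literal dot + "ts" at the end)
print(string.match("bts",   "%.ts$"))  --> nil  (no literal dot, so no match)
```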
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it possible to run multiple update queries in one using Joomla?
I want to run many SQL update queries at once using Joomla 2.5. Below is my code:
require_once '../includes/framework.php';
$query = "UPDATE #__mytable SET myfield='value' where id=1; UPDATE #__mytable SET
myfield='value' where id=2; UPDATE #__mytable SET myfield='value' where id=3;";
$db = JFactory::getDbo();
$db->setQuery($query);
$db->query();
But it shows me a syntax error. I tried testing the query directly in MySQL and it works.
A:
PHP does not allow multiple queries by default. You can force it to do so by adding a parameter to mysql_connect, but I wouldn't recommend it (it opens huge security holes for SQL injections).
I don't know how JFactory handles this, but I would be surprised if it were different.
More info about it: http://de3.php.net/manual/en/function.mysql-query.php#91669
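If you'd rather not enable multi-statement support, the usual workaround is to run each UPDATE as its own query. A sketch using the same Joomla API calls as the question:

```php
<?php
$db = JFactory::getDbo();
foreach (array(1, 2, 3) as $id) {
    // One statement per setQuery()/query() call, so no multi-query support is needed.
    $db->setQuery("UPDATE #__mytable SET myfield = 'value' WHERE id = " . (int) $id);
    $db->query();
}
```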
| {
"pile_set_name": "StackExchange"
} |
Q:
Driving a 7 Segment Display with a register vs wire
A few days ago, when I got my FPGA, I created a module to drive my 7-segment display. I used only continuous assignments to drive the LEDs.
module set_number(input [3:0] x, output [6:0] seg);
assign seg = x == 0 ? 7'b1000000 :
x == 1 ? 7'b1111001 :
x == 2 ? 7'b0100100 :
x == 3 ? 7'b0110000 :
x == 4 ? 7'b0011001 :
x == 5 ? 7'b0010010 :
x == 6 ? 7'b0000010 :
x == 7 ? 7'b1111000 :
x == 8 ? 7'b0000000 :
x == 9 ? 7'b0010000 :
7'b1111111;
endmodule
I was reading a book and it has a 7-segment display driver written like this, using registers and a combinational always block.
module hex_to_sseg
(
input wire [3:0] hex,
input wire dp,
output reg [7:0] sseg // output active low
);
always @*
begin
case(hex)
4'h0: sseg[6:0] = 7'b0000001;
4'h1: sseg[6:0] = 7'b1001111;
4'h2: sseg[6:0] = 7'b0010010;
4'h3: sseg[6:0] = 7'b0000110;
4'h4: sseg[6:0] = 7'b1001100;
4'h5: sseg[6:0] = 7'b0100100;
4'h6: sseg[6:0] = 7'b0100000;
4'h7: sseg[6:0] = 7'b0001111;
4'h8: sseg[6:0] = 7'b0000000;
4'h9: sseg[6:0] = 7'b0000100;
4'ha: sseg[6:0] = 7'b0001000;
4'hb: sseg[6:0] = 7'b1100000;
4'hc: sseg[6:0] = 7'b0110001;
4'hd: sseg[6:0] = 7'b1000010;
4'he: sseg[6:0] = 7'b0110000;
default: sseg[6:0] = 7'b0111000; //4'hf
endcase
sseg[7] = dp;
end
endmodule
Is there any practical difference between my approach with the wire and the book's approach with the register and combinational logic?
A:
As transistor mentions in his answer, the logic represented by the two pieces of code you posted are not the same; it has differences in the bit ordering and the latter sample will display hex characters A-F.
But, you are asking about the difference between using a wire variable and an assign statement vs. a reg variable and an always @* construct.
One confusing thing about Verilog is that using a reg data type in your code does not always imply that a register will be implemented in the synthesized logic.
In the cases you posted, both solutions would be implemented using combinatorial logic with no physical registers. It is very common to use the always @* construct to model combinatorial logic.
Sequential circuits, which will include physical registers (flip-flops), are also modeled using the always construct, but these circuits will have a posedge and/or negedge signal specified in the sensitivity list.
Of the cases you posted, I would personally prefer to use the always @* version of the code as I think that it more clearly shows the intent of the code; it is easy to see that it represents a decoding table.
A:
There are a number of different ways to express the same description in Verilog. The continuous assign statement is good for writing an equation for a single signal, but it's not really RTL. If you wanted to assign multiple signals based on the same set of inputs, then the always block lets you show the flow of decisions made in a procedural block of code. This is a much better way of showing your intent, using a set of more human-readable RTL statements.
A:
Many synthesizers treat the conditional operator (?:) as an explicit 2:1 mux. Nested conditional operators will synthesize just as the design is written. In your case, it will be a chain of ten 2:1 muxes. Here is a diagram of your code synthesized with Yosys 0.3.0 on edaplayground
By converting your code to a case statement it will synthesize like the following (also synthesized with Yosys 0.3.0 on edaplayground). In this case Yosys used a priority mux, but it could just as easily have picked a different structure.
Functionally the two are identical. The case statement version will typically have better and more even timing. With the nested conditional operators, when x==0 the propagation delay is 2 logic gates; when x>=9 it is 11 logic gates. The case statement version also gives the synthesizer more flexibility, allowing the synthesizer to pick the best option for the situation, factoring in available resources, resources needed for other logic, and timing requirements.
In general, it is better to use a case statement and let the synthesizer pick the appropriate muxes.
| {
"pile_set_name": "StackExchange"
} |
Q:
Get the groups of the device in Graph
I need to get the group membership of the device in AAD.
It is possible to get the groups, select one, get members and find a device there in one call
https://graph.microsoft.com/v1.0/groups/xxx/members/yyy
I can get to the same device as by calling
https://graph.microsoft.com/v1.0/me/registeredDevices/yyy
But I haven’t found anything for a reversed approach for the device. It is simple to find the group membership of a user
https://graph.microsoft.com/v1.0/me/memberOf
but this apparently doesn’t seem to be possible for the device. Neither in v1.0, nor in beta.
Am I missing something? Thanks!
A:
If I understand, you are looking for something that will tell you what groups a device is a member of like:
GET https://graph.microsoft.com/v1.0/devices/memberOf
We don't have anything like that today.
However we do have an alternative, that might work for you, and it provides transitive closure too (which memberOf does not). See: https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/directoryobject_getmembergroups
POST https://graph.microsoft.com/v1.0/devices/{deviceId}/getMemberGroups
{
"securityEnabledOnly":false
}
The response will contain a list of group ids for which the device is a member. To get the group details, you will have to make another call. You could use the getByIds action for this - see https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/directoryobject_getbyids. If you want to do this in one call, you could take a look at using batching: https://developer.microsoft.com/en-us/graph/docs/concepts/json_batching.
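For example, resolving the returned group ids in a single follow-up call might look like this (the id values are placeholders, and "types" simply restricts the lookup to groups):

```
POST https://graph.microsoft.com/v1.0/directoryObjects/getByIds
{
  "ids": ["<group-id-1>", "<group-id-2>"],
  "types": ["group"]
}
```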
Hope this helps,
| {
"pile_set_name": "StackExchange"
} |
Q:
DBM::Deep not working with perl hash reference
I am using a DBM::Deep hash object like so:
my $dbm = DBM::Deep->new(
file => "dbm.db",
locking => 1,
autoflush => 1,
type => "DBM::Deep->TYPE_HASH",
);
#code..
$dbm = $hash_reference;
However, this doesn't work. $dbm holds the correct values during the program, but after it exits, dbm.db is empty, and when I start up another program that tries to use dbm.db, there's nothing in it. But whenever I copy the hash reference like this (it's a two-level-deep hash):
for my $id (keys %$hash_reference) {
for(keys %{$hash_reference->{$id}}) {
$todo->{$id}->{$_} = $hash_reference->{$id}->{$_};
}
}
Then it will copy everything over correctly and the values will still be there after program execution. The DBM author seems to stress though that his DBM::Deep objects work just like a regular hash, so does anyone know if there is an easier way to do this? Thanks!
A:
You're throwing away the object: assigning $dbm = $hash_reference rebinds the variable to a plain hash reference, so the tied DBM::Deep object is never written to. Copy into the hash it refers to instead:
%$dbm = %$hash_reference;
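To spell out the difference (a sketch; the variable names follow the question):

```perl
# Wrong: rebinds the variable to a plain in-memory hashref,
# so nothing is ever written to dbm.db:
# $dbm = $hash_reference;

# Right: dereference both sides and copy the entries INTO the
# tied hash; DBM::Deep then persists them to dbm.db:
%$dbm = %$hash_reference;
```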
| {
"pile_set_name": "StackExchange"
} |
Q:
Best free auto updater c#
What is currently the best free auto-update tool for .NET desktop applications? I use ".NET Application Updater Component", but it's made in 2002.
--- NOT ClickOnce (it has many limitations)
A:
Try this one http://autoupdater.codeplex.com/
| {
"pile_set_name": "StackExchange"
} |
Q:
Specifying the decimal precision for a call to std::copy
I have the following function that saves a vector to a CSV file:
#include <math.h>
#include <vector>
#include <string>
#include <fstream>
#include <iostream>
#include <iterator>
using namespace std;
bool save_vector(vector<double>* pdata, size_t length,
const string& file_path)
{
ofstream os(file_path.c_str(), ios::binary | ios::out);
if (!os.is_open())
{
cout << "Failure!" << endl;
return false;
}
copy(pdata->begin(), pdata->end(), ostream_iterator<double>(os, ","));
os.close();
return true;
}
In the resulting CSV file, the numbers in pdata are saved with variable precision, and none are saved with the precision I want (10 decimal places).
I know about the function std::setprecision. However, this function, according to the docs,
should only be used as a stream manipulator.
(I'm actually not sure I'm interpreting "stream manipulator" correctly; I'm assuming this means I can't use it in my function as currently written.)
Is there a way for me to specify the decimal precision using the copy function? If not, how should I get rid of copy so that I can use setprecision in the function above?
A:
You can call
os.precision(10);
before the copy.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unique extension of the absolute value
Let $(K,u)$ be a complete valued field, where $u$ is a discrete absolute value (corresponding to a discrete valuation on $K$). Then:
($\ast$) If $E/K$ is a finite separable field extension, then the absolute value $w$ of $E$ which extends $u$ is unique.
In the book Algebraic Number Theory by Fröhlich, A., and Taylor, M.J., it first proves the above property, then gives an alternate proof as follows:
Let $\mathfrak o$ denotes the valuation ring in $K$ associates with $u$, $\mathfrak o_E$ the integral closure of $\mathfrak o$ in $E$,
then in order to prove ($\ast$), it suffices to show that $\mathfrak o_E$ has a unique prime ideal $\mathfrak B$ above $\mathfrak o$.
($\mathfrak B$ being above $\mathfrak o$ means $\mathfrak B \cap \mathfrak o=\mathfrak p$, the unique nonzero prime ideal in $\mathfrak o$.)
My question is, how does the uniqueness of $\mathfrak B$ imply the uniqueness of the extension $w$? Why does it suffice?
I know there exists an absolute value $w'$ of $E$ extending $u$ which gives $\mathfrak {B} =\{ x; w'(x)<1 \}$, but this only works for such a specific $w'$. For an arbitrary extension $w$, which need not even be discrete, I am not sure how it gives a prime ideal in $\mathfrak o_E$ so as to obtain the uniqueness.
A:
You talk about Theorem 16 on Page 103 and its alternate proof in a special case starting at the bottom of Page 105.
Indeed there seems to be a small gap in the alternate proof as it only deals with discrete extensions $w$ of the given discrete absolute value $u$. Probably what the authors had in mind is this: let us assume that both $u$ and $w$ are discrete absolute values in Theorem 16, then we can give an alternate proof by algebraic methods.
| {
"pile_set_name": "StackExchange"
} |
Q:
What are the irreducible representations of $(\mathbb{Z},+)$?
I'm wondering what are the irreducible representations of the group ($\mathbb{Z}$,+).
Knowing that for $\mathbb{Z}_n$ the 1-dimensional representations are given by the $n$th roots of unity, I considered taking the limit as $n \to \infty$, but I think I'm ending up with the group U(1) instead of $\mathbb{Z}$, and I have no other ideas...
A:
You are probably looking for the complex representations (is that right?). Firstly I claim any finite-dimensional simple rep is one-dimensional. Let $G=\langle g\rangle$ be infinite cyclic, this will stand in place of $\mathbb{Z}$. If $g$ acts on some nonzero finite-dimensional complex vector space $V$ by a linear map $\rho(g)$ then this linear map has an eigenvector, which spans a one-dimensional subrep. If the rep was irreducible, this subrep must be all of $V$, so $\dim V=1$.
Now note that if $\lambda \in \mathbb{C}^*$ (I mean the non-zero complex numbers) then $g \mapsto \lambda$ induces a homomorphism $\rho_\lambda: G \to \mathsf{GL}_1(\mathbb{C})$, i.e. a representation of $G$. Clearly any one-dimensional representation arises this way. Furthermore if $\lambda \neq \mu$ then the representations afforded by $\rho_\lambda$ and $\rho_\mu$ are not isomorphic: not much conjugacy happens in $\mathsf{GL}_1$ !
In conclusion, the simple representations are in natural bijection with $\mathbb{C}^*$: this is called the character group.
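Unwinding the identification $G \cong \mathbb{Z}$ (with the generator $g$ corresponding to $1$), this says the irreducible complex representations are exactly

$$\rho_\lambda : \mathbb{Z} \to \mathbb{C}^*, \qquad \rho_\lambda(n) = \lambda^n, \qquad \lambda \in \mathbb{C}^*,$$

with $\rho_\lambda \cong \rho_\mu$ if and only if $\lambda = \mu$.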
| {
"pile_set_name": "StackExchange"
} |
Q:
How to make hair material different from the plane itself?
In Cycles Render, I added hair to a plane (I was making a grass field), and I don't know how to make the flat plane's material different from the hairs themselves. I'd like to give the plane an image texture and the hairs a yellow-like color, but of course, if I make the material yellowish, the plane AND the hairs become yellow. I'm done with the plane; I just want to control the hairs' material.
A:
You can change the strand's material by adding another material slot in the materials pane, then go to particle settings panel > render rollout and select the new material in the material slot dropdown list.
Add a new material slot:
And select it in the particle settings:
| {
"pile_set_name": "StackExchange"
} |