Q:
How to know if a post has pagination or not
I need a solid solution for checking whether a post has pagination, regardless of the current page number within the pagination, and even if the post is on the first page of the pagination.
The check is inside the loop.
Thanks.
A:
Just check for the global $numpages variable:
<?php
global $numpages;
if ( is_singular() && $numpages > 1 ) {
// This is a single post
// and has more than one page;
// Do something.
}
?>
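If you need a check that does not depend on the paging globals having been populated yet, a hedged alternative is to look for the <!--nextpage--> quicktag directly in the post content; that quicktag is what WordPress uses to split a post into pages:
<?php
global $post;
// Sketch: a paginated post is split on <!--nextpage-->, so its presence
// in post_content implies the post has more than one page.
if ( is_singular() && false !== strpos( $post->post_content, '<!--nextpage-->' ) ) {
// This post is paginated.
}
?>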
Q:
How to implement and fire an event when a change occurs in a property of `T` in `List` within the owning class
I mean, not on the collection itself but in a property of T.
Is there any pattern how to do it?
My current code
public class Section
{
public string Title { get; set; }
public List<Question> Questions { get; set; } = new List<Question>();
public int AnsweredQuestion
{
get
{
return Questions.Count(x => x.State != DeviceNodeTechStateEnum.Undefined);
}
}
public int NonAnsweredQuestion
{
get
{
return Questions.Count(x => x.State == DeviceNodeTechStateEnum.Undefined);
}
}
public string QuestionStats
{
get
{
return string.Format("{0}/{1}", AnsweredQuestion, Questions.Count);
}
}
}
public class Question : INotifyPropertyChanged
{
public Guid ID { get; set; }
private string _note;
public string Note
{
get
{
return this._note;
}
set
{
if (value != this._note)
{
this._note = value;
NotifyPropertyChanged();
}
}
}
private DeviceNodeTechStateEnum _state;
public DeviceNodeTechStateEnum State
{
get
{
return this._state;
}
set
{
if (value != this._state)
{
this._state = value;
NotifyPropertyChanged();
}
}
}
public event PropertyChangedEventHandler PropertyChanged;
private void NotifyPropertyChanged([CallerMemberName] String propertyName = "")
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
Basically, I need to know in the Section class when public DeviceNodeTechStateEnum State changes.
A:
I've used this pattern before, where you basically wrap a List, extend the wrapper to implement INotifyPropertyChanged, and hook into any methods that Add, Insert or Remove items from the list so that you can wire/unwire the items' PropertyChanged event.
public class ItemPropertyChangedNotifyingList<T> : IList<T>, INotifyPropertyChanged where T : INotifyPropertyChanged
{
private List<T> _listImplementation = new List<T>();
public void Add(T item)
{
item.PropertyChanged += ItemOnPropertyChanged;
_listImplementation.Add(item);
}
private void ItemOnPropertyChanged(object sender, PropertyChangedEventArgs e)
{
PropertyChanged?.Invoke(sender, e);
}
public IEnumerator<T> GetEnumerator()
{
return _listImplementation.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return ((IEnumerable) _listImplementation).GetEnumerator();
}
public void Clear()
{
_listImplementation.ForEach(x => x.PropertyChanged -= ItemOnPropertyChanged);
_listImplementation.Clear();
}
public bool Contains(T item)
{
return _listImplementation.Contains(item);
}
public void CopyTo(T[] array, int arrayIndex)
{
_listImplementation.CopyTo(array, arrayIndex);
}
public bool Remove(T item)
{
item.PropertyChanged -= ItemOnPropertyChanged;
return _listImplementation.Remove(item);
}
public int Count => _listImplementation.Count;
public bool IsReadOnly => false;
public int IndexOf(T item)
{
return _listImplementation.IndexOf(item);
}
public void Insert(int index, T item)
{
item.PropertyChanged += ItemOnPropertyChanged;
_listImplementation.Insert(index, item);
}
public void RemoveAt(int index)
{
if (index < 0 || index >= Count) throw new ArgumentOutOfRangeException(nameof(index));
_listImplementation[index].PropertyChanged -= ItemOnPropertyChanged;
_listImplementation.RemoveAt(index);
}
public T this[int index]
{
get => _listImplementation[index];
set
{
// Unwire the replaced item and wire the new one so events keep flowing.
_listImplementation[index].PropertyChanged -= ItemOnPropertyChanged;
value.PropertyChanged += ItemOnPropertyChanged;
_listImplementation[index] = value;
}
}
public event PropertyChangedEventHandler PropertyChanged;
}
When handling the PropertyChanged events of this wrapped list, the sender argument will be the instance of the item that raised the event.
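For example, here is a minimal sketch (the wiring is invented here, not part of the original question) of how Section could consume the wrapped list and react when a Question's State changes:
public class Section
{
public string Title { get; set; }
// Swap List<Question> for the notifying wrapper.
public ItemPropertyChangedNotifyingList<Question> Questions { get; }
= new ItemPropertyChangedNotifyingList<Question>();
public Section()
{
Questions.PropertyChanged += (sender, e) =>
{
if (e.PropertyName == nameof(Question.State))
{
// sender is the Question whose State just changed;
// AnsweredQuestion, NonAnsweredQuestion and QuestionStats
// should be re-read (or change notifications re-raised) here.
}
};
}
}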
Q:
"No Method Declared with Objective-C Selector 'playPause:'" & "Use of Unresolved Identifier UIBarButtonItem"
I'm making a music player and I'm not sure why I'm getting this error in my code. I'm new to coding and would highly appreciate some help. (The code was posted as an image.)
A:
You have made a mistake: replace UIBarButtonItem.play with UIBarButtonSystemItem.play.
Q:
SSRS Chart, is there a way to left-align the y-axis?
I have a bar chart and I would like the text to be left-aligned. I've tried checking the Vertical Axis Properties and there are Label and Label Font settings, but alignment is not one of the settings.
What I've tried:
I have tried to use the alignment icons on the toolbar, but they are grayed out. I've also tried right-clicking on the chart, choosing Chart > Layout, and all of the alignment options are grayed out there as well.
What is the best method to align the labels to the left, while keeping the gradient bars looking the same please?
A:
I do not think this is possible using a chart.
What you can do instead is create a table with two columns, grouped on the same field as your chart Category. In the first column simply put the Category label, and in the second column add a chart; a data bar works well.
This will allow you to format your 'labels' and chart bars as you see fit.
Q:
diameter on a compact metric space
I have troubles showing the following:
Let $(X,\rho)$ be a compact metric space and $F \subset X$ a closed subset. Prove that if diam $F < \infty$, then there exist $x_{0}, y_{0} \in F$ such that diam $F= \rho(x_{0},y_{0})$.
It looks so trivial yet hard to find the trick.
A:
For the sake of having an answer:
Since $F$ is closed in $X$, it is compact. Then $F \times F$ is compact, too, and $\rho: F \times F \to [0,\infty)$ is a continuous function on a compact set, so it attains its maximum. In other words, there is a point $(x_0,y_0) \in F \times F$ such that $\rho(x_0,y_0) = \max{\{\rho(x,y)\,:\,(x,y) \in F \times F\}} = \operatorname{diam}{F}$ which is precisely the statement you ask about.
Remarks.
We didn't use that $F$ has finite diameter, because it follows from our argument.
It would be enough to assume that $F$ is a compact subset of $X$ instead of assuming compactness of $X$ itself.
Compactness is necessary: equip a countable set $X = \{x_n\}_{n \geq 2}$ with the metric given by $d(x_n,x_m) = \max{\{1-1/n,1-1/m\}}$ if $n \neq m$. Then $\operatorname{diam}{X} = 1$ but no two points are at distance $1$ to each other.
Q:
Non linear Differential Equation
Let $\Omega:=\{(x_1,x_2) \in \mathbb{R}^2 \mid x_2>0\}$. I want to solve the differential equation $$\begin{pmatrix} \dot{x_1} \\\dot{x_2} \end{pmatrix}=\begin{pmatrix}x_2^2-x_1^2 \\-2x_1x_2\end{pmatrix}$$ I have no idea how to do that, since I have only seen linear differential equations so far. I'd appreciate any starting point here. I have the hint to look at $z=x_1+ix_2$ but that confuses me even more. Thank you!
A:
If $z(t)=x_1 (t) + \Bbb i x_2 (t)$ then $$- z^2 = (x_2 ^2 - x_1 ^2) - 2 x_1 x_2 \Bbb i$$ Therefore, your equation can be rewritten as $\dot z = - z ^2$ or, equivalently, $$\frac{\dot z}{z^2} = -1$$ Integrate this with respect to $t$, getting $1 /z = t + C$, or $z(t) = 1/(t+C)$, with $C=a + b \Bbb i$ an integration constant. To get back to $x_1$ and $x_2$, write $$x_1 + x_2 \Bbb i = \frac1{t+a + b \Bbb i} = \frac{t+a - b \Bbb i}{(t+a)^2 + b^2}$$ Equating the real and imaginary parts, $$x_1 = \frac{t+a}{(t+a)^2 + b^2}\qquad x_2 = \frac{-b}{(t+a)^2 + b^2}$$ To have $x_2 > 0$ (as in the definition of $\Omega$), you will have to take $b<0$.
A:
Observe how the right hand side looks like the real and the imaginary term of a squared complex number. In fact, if you write
$$z^2=x_1^2-x_2^2+i (2x_1x_2)$$
you can see that if you multiply the second equation by $i$ and add it to the first one, you will get
$$\dot{z}=-z^2$$
which is trivially solvable.
This is a very handy trick that can be used in physics quite frequently when there is some intrinsic rotational motion going on (or something like that).
Q:
AutoMapping objects
I have a weird situation where I have objects and Lists of objects as part of my entities and contracts to interface with a third-party service. I'm going to try to see if I can replace the actual object class with something more specific in the entities and contracts to get around this, but I am curious if there is a way to get AutoMapper to handle this as is.
Here are some dummy classes:
public class From
{
public object Item { get; set; }
}
public class FromObject
{
public string Value { get; set; }
}
public class To
{
public object Item { get; set; }
}
public class ToObject
{
public string Value { get; set; }
}
And the quick replication:
Mapper.CreateMap<From, To>();
Mapper.CreateMap<FromObject, ToObject>();
From from = new From { Item = new FromObject { Value = "Test" } };
To to = Mapper.Map<To>(from);
string type = to.Item.GetType().Name; // FromObject
Basically, the question is this: Is there a way to get AutoMapper to understand that from.Item is a FromObject and apply the mapping to ToObject? I'm thinking there's probably not a way to make it automatic, since there's nothing that would indicate that to.Item has to be a ToObject, but is there a way to specify during the CreateMap or Map calls that this should be taken into account?
A:
I don't think there is an "automatic" way of doing it, since AutoMapper won't be able to figure out that From.Item is FromObject and To.Item is ToObject.
But while creating the mapping, you can specify it explicitly:
Mapper.CreateMap<FromObject, ToObject>();
Mapper.CreateMap<From, To>()
.ForMember(dest => dest.Item, opt => opt.MapFrom(src => Mapper.Map<ToObject>(src.Item)));
From from = new From { Item = new FromObject { Value = "Test" } };
To to = Mapper.Map<To>(from);
string type = to.Item.GetType().Name; // ToObject
Q:
Ruby/Sinatra send_file not working
I'm using send_file on a Sinatra app:
get '/update/dl/:upd' do
filename ="/uploads/#{params[:upd]}"
send_file(filename, :filename => "t.cer", :type => "application/octet-stream")
end
The /uploads/ folder is not public; it's in the app directory. When I try to go to localhost:4567/update/dl/some_file in Chrome it returns a 404, same with Firefox; looking at the headers, it's a 404. But if I try with Safari it downloads the file. So I guess something's wrong with my code (and Safari's, but let's leave that to Apple :P). What could be wrong? Thanks!
A:
I got it to work fine in Chrome if I remove the initial slash in filename, so that it reads "uploads/..." instead of "/uploads/...". The 404 comes from a file-not-found error in send_file.
# foo.rb
require 'sinatra'
get '/update/dl/:upd' do
filename ="uploads/#{params[:upd]}"
# just send the file if it's an accepted file
if filename =~ /^[a-zA-Z0-9]*.cer$/
send_file(filename, :filename => "t.cer", :type => "application/octet-stream")
end
end
However, without a check like the one above there's really a big security hole in this: a user can download anything that the Sinatra process has access to. I named my Sinatra app foo.rb, and this request downloads the Sinatra script itself:
http://localhost:4567/update/dl/..%2Ffoo.rb
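One hedged way to close that traversal hole is to resolve the requested name against the uploads directory and refuse anything that escapes it. This is a sketch, not a complete hardening:
require 'sinatra'

get '/update/dl/:upd' do
  base = File.expand_path('uploads')
  requested = File.expand_path(params[:upd], base)
  # Reject paths that resolve outside the uploads directory.
  halt 404 unless requested.start_with?(base + File::SEPARATOR) && File.file?(requested)
  send_file(requested, :filename => "t.cer", :type => "application/octet-stream")
end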
Q:
"CSRF Failed: CSRF token missing or incorrect." in Django Rest: UpdateModelMixin
I am using UpdateModelMixin from Django rest framework to update the entries from the Test model.
from django.utils.decorators import method_decorator
from django.views.decorators.cache import never_cache
from rest_framework import mixins, filters, viewsets
decorators = [never_cache]
@method_decorator(decorators, name='dispatch')
class TestViewSet(mixins.ListModelMixin,
mixins.RetrieveModelMixin,
mixins.UpdateModelMixin,
viewsets.GenericViewSet):
queryset = Test.objects.all()
serializer_class = TestSerializer
filter_backends = [filters.DjangoFilterBackend]
filter_class = TestFilter
When I try to update an object from the Test model, it gives the following error:
"detail": "CSRF Failed: CSRF token missing or incorrect."
Can anyone please help me to resolve this issue?
A:
This is an old question, but it deserves an answer, since people might come to this page looking for one.
If there is a CSRF token problem, it means that the page you are using is not passing the CSRF token. Since you don't say how you are accessing the data, I am going to assume it is JavaScript, since that is most likely, so I'll give you an example of how to fix it there. The process is simply to read the CSRF token from the existing cookie, then pass it along in a request header to the API.
Here is an example AJAX call from JavaScript that passes a csrf token:
var data={ foo: "bar" };
$.ajax({
url: '/api/schedule/',
type: "PATCH",
data: JSON.stringify(data),
beforeSend: function(xhr) {
xhr.setRequestHeader('X-CSRFToken', Cookies.get('csrftoken'))
},
contentType: "application/json",
});
EDIT:
This uses the JavaScript cookie library.
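If you'd rather not pull in a dependency, a minimal cookie reader (in the spirit of the snippet shown in the Django documentation) works too:
function getCookie(name) {
    var value = null;
    if (document.cookie && document.cookie !== '') {
        document.cookie.split(';').forEach(function (cookie) {
            var trimmed = cookie.trim();
            // cookies look like "csrftoken=abc123"
            if (trimmed.substring(0, name.length + 1) === (name + '=')) {
                value = decodeURIComponent(trimmed.substring(name.length + 1));
            }
        });
    }
    return value;
}
// usage: xhr.setRequestHeader('X-CSRFToken', getCookie('csrftoken'));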
Q:
Confused by the operator definitions of Int in Scala
The Scala tutorial says that Int's add operation is actually a method call: 1+1 means 1.+(1)
But when I look into the source code of Int.scala, it appears that the method simply raises an error. Could anyone explain to me how this works?
def +(x: Int): Int = sys.error("stub")
A:
Int is a value class, which is somewhat different from other classes. There is no way to express primitive addition in Scala without getting into a recursive definition. For example, if the definition of + were,
def +(x: Int): Int = this + x
Then calling + would invoke + which would invoke + which ...
Scala needs to compile the methods on value classes into the Java bytecodes for addition/subtraction/etc.
The compiler does compile + into the Java bytecode for addition, but the Scala library authors wrote Int.scala with stub methods to make it a valid Scala source file. Those stub methods are never actually invoked.
A:
As the implementation says, that method is a stub. Apparently its implementation is provided by the Scala compiler when the code is compiled, because int + int is a primitive operation and the Scala language does not itself have primitives - only the compiler knows about primitives on the JVM.
A:
It is important to realize that operators are methods as a matter of how one interacts with the language. Things like + on Int act like any other method in Scala, instead of being something that plays by their own rules.
However, at the implementation level, they are not methods at all: to the JVM only classes have methods, and the AnyVal subclasses are not classes as far as the JVM is concerned. Unsurprisingly, at the implementation level they act mostly like Java primitives.
Q:
IMAP attachment retrieving command
I am working on a mail client using IMAP and I am looking for the command for receiving the attachments of a message.
A:
All message info is retrieved using the FETCH command. You have two options on how to use it, however.
First, you can retrieve the entire email message, verbatim. In that case, you're going to need to include a MIME parser in your client to figure out the structure of the message. (Each platform has at least one or two popular MIME parsers; since you haven't told us what you're coding in, I can't recommend one for you.) Once you get the message structure from your MIME parser, you'll need some client logic to determine which parts are attachments. It's worth looking at RFC 2183 to get you started. In general, parts with a Content-Disposition starting with "attachment" are going to be attachments, but all mail client authors go through a phase of trial and error getting it right. In order to download the entire email message, you'd issue the IMAP command
$ UID FETCH <uid> BODY.PEEK[]
Second, you can have the IMAP server parse the message structure for you by issuing a FETCH BODYSTRUCTURE (note: no square brackets). You'll have to parse the returned BODYSTRUCTURE data yourself; the IMAP RFC explains the format and gives a few examples.
# message, no attachments:
("TEXT" "PLAIN" ("CHARSET" "ISO-8859-1" "FORMAT" "flowed") NIL NIL "7BIT" 1469 50 NIL NIL NIL NIL)
# message, one attachment
(("TEXT" "PLAIN" ("CHARSET" "US-ASCII") NIL NIL "QUOTED-PRINTABLE" 56 1 NIL NIL NIL NIL)("AUDIO" "X-WAV" ("NAME" "voicemail.wav") NIL NIL "BASE64" 152364 NIL ("attachment" ("FILENAME" "voicemail.wav")) NIL NIL) "MIXED" ("BOUNDARY" "----_=_NextPart_001_01C4ACB3.5AA7B8E2") NIL NIL NIL)
Once you've determined which parts you're interested in, you can issue a FETCH for the displayable message body. Your client can then just list the message attachments (parsed out of the BODY response) and can then go back and FETCH them if the user clicks on them. So the IMAP commands you'd be issuing would be along the lines of:
$ UID FETCH <uid> (BODY ENVELOPE) # get structure and header info
$ UID FETCH <uid> (BODY[1]) # retrieving displayable body
$ UID FETCH <uid> (BODY[2]) # retrieving attachment on demand
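For a concrete illustration, here is a hedged sketch using Python's imaplib; the host, credentials, UID and part numbers are placeholders:
import imaplib

imap = imaplib.IMAP4_SSL('imap.example.com')  # hypothetical server
imap.login('user', 'password')                # hypothetical credentials
imap.select('INBOX')

uid = '42'  # hypothetical message UID
typ, data = imap.uid('FETCH', uid, '(BODYSTRUCTURE)')  # structure only, parsed server-side
typ, data = imap.uid('FETCH', uid, '(BODY.PEEK[2])')   # pull attachment part 2 on demand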
Q:
Dynamically prevent certain fields from being serialized with Jackson's PropertyFilter
I need to be able to prevent certain fields of objects from being serialized, primarily based on their type. For example, consider the following Object:
class MyPojo {
private int myInt;
private boolean myBoolean;
// getters, setters, etc.
}
I would like to be able to, when serializing, not serialize the boolean field if it is false, or not serialize the int field if it is zero. Basically: not serialize any particular field based on some property of either its type or its particular value.
I'm aware of JsonSerializers, which I used to partially solve my problem, but it is impossible to choose not to serialize a field in a JsonSerializer.
The closest I've come is implementing my own PropertyFilter, and applying it to my Object via @JsonFilter:
public class XmlPropertyFilter implements PropertyFilter {
@Override
public void serializeAsField(Object pojo, JsonGenerator gen, SerializerProvider prov, PropertyWriter writer) throws Exception {
JavaType type = writer.getType();
if (writer instanceof BeanPropertyWriter) {
BeanPropertyWriter bWriter = (BeanPropertyWriter) writer;
String fieldName = bWriter.getSerializedName().getValue();
Field f = pojo.getClass().getDeclaredField(fieldName);
f.setAccessible(true);
Object value = f.get(pojo);
if (!type.isTypeOrSubTypeOf(int.class) && value != null) {
// Serialize everything that isn't an int and doesn't have a null value
prov.defaultSerializeField(fieldName, value, gen);
} else if (type.isTypeOrSubTypeOf(int.class)) {
// Only serialize ints if the value isn't 0
if ((int) value != 0) prov.defaultSerializeField(fieldName, value, gen);
}
}
}
// ...
}
This does exactly what I want, except it has the nasty side effect of breaking wrapping (e.g. serializing a list). According to the @JsonFilter documentation, it is valid to apply a filter to a field rather than the entire class, which would be wonderful, but I've tried that and I can't seem to get it to work.
A:
I found the solution, and it's exactly what I was looking for. The secret is the method BeanPropertyWriter#serializeAsOmittedField(Object, JsonGenerator, SerializerProvider). This does exactly what is impossible to do inside of a JsonSerializer: it completely removes the field from the output.
Here's an example of this DynamicPropertyFilter:
public class DynamicPropertyFilter implements PropertyFilter {
public void serializeAsField(Object pojo, JsonGenerator jgen, SerializerProvider prov, PropertyWriter writer) throws Exception {
if (writer instanceof BeanPropertyWriter) {
BeanPropertyWriter bWriter = (BeanPropertyWriter) writer;
String fieldName = bWriter.getFullName().getSimpleName();
Field field = pojo.getClass().getDeclaredField(fieldName);
field.setAccessible(true);
Object object = field.get(pojo);
if (Double.class.isInstance(object) && (double) object == 0.0) {
// Remove all double fields that are equal to 0.0
bWriter.serializeAsOmittedField(pojo, jgen, prov);
return;
} else if (Boolean.class.isInstance(object)) {
// Change all boolean fields to 1 and 0 instead of true and false
prov.defaultSerializeField(fieldName, (boolean) object ? 1 : 0, jgen);
return;
}
}
// Serialize field as normal if property is not filtered
writer.serializeAsField(pojo, jgen, prov);
}
public void serializeAsElement(Object elementValue, JsonGenerator jgen, SerializerProvider prov, PropertyWriter writer) throws Exception {
writer.serializeAsField(elementValue, jgen, prov);
}
public void depositSchemaProperty(PropertyWriter writer, JsonObjectFormatVisitor objectVisitor, SerializerProvider provider) throws JsonMappingException {
writer.depositSchemaProperty(objectVisitor, provider);
}
@Deprecated
public void depositSchemaProperty(PropertyWriter writer, ObjectNode propertiesNode, SerializerProvider provider) throws JsonMappingException {
writer.depositSchemaProperty(propertiesNode, provider);
}
}
Not only can I filter fields, which is primarily what I wanted, but I can also change them (as seen in the boolean example). This eliminates the need for both a PropertyFilter and a JsonSerializer.
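For completeness, here is a hedged sketch of how such a filter gets attached. The filter id "dynamicFilter" is arbitrary but must match on both sides, and FilteredPojo is an invented example class:
import com.fasterxml.jackson.annotation.JsonFilter;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ser.FilterProvider;
import com.fasterxml.jackson.databind.ser.impl.SimpleFilterProvider;

@JsonFilter("dynamicFilter") // must match the id registered below
class FilteredPojo {
    public Double price = 0.0;    // omitted by the filter (equals 0.0)
    public Boolean active = true; // serialized as 1 instead of true
}

public class FilterDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        FilterProvider filters = new SimpleFilterProvider()
                .addFilter("dynamicFilter", new DynamicPropertyFilter());
        System.out.println(mapper.writer(filters).writeValueAsString(new FilteredPojo()));
    }
}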
Q:
Lambda expression for operator overloading
Is it possible to write lambda expression for overloading operators?
For example, I have the following structure:
struct X{
int value;
//(I can't modify this structure)
};
X needs == operator
int main()
{
X a = { 123 };
X b = { 123 };
//[define equality operator for X inside main function]
//if(a == b) {}
return 0;
}
== operator can be defined as bool operator==(const X& lhs, const X& rhs){...}, but this requires adding a separate function, and my comparison is valid only within a specific function.
auto compare = [](const X& lhs, const X& rhs){...} will solve the problem. I was wondering if I can write this lambda as an operator.
A:
Is it possible to write lambda expression for overloading operators?
No.
Operator overload functions must be functions or function templates. They can be member functions, member function templates, non-member functions, or non-member function templates. However, they cannot be lambda expressions.
From the C++11 Standard/13.5 Overloaded operators, para 6:
An operator function shall either be a non-static member function or be a non-member function and have at least one parameter whose type is a class, a reference to a class, an enumeration, or a reference to an enumeration.
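For illustration, here is a minimal sketch of the two routes that are allowed: a free (non-member, non-lambda) operator== defined outside main, or a named lambda that you call explicitly instead of using ==.
#include <iostream>

struct X {
    int value;
    // (can't be modified)
};

// Allowed: a non-member operator==, which is an ordinary free function.
bool operator==(const X& lhs, const X& rhs) { return lhs.value == rhs.value; }

int main() {
    X a = { 123 };
    X b = { 123 };
    // Allowed: a named lambda, invoked explicitly rather than via ==.
    auto compare = [](const X& l, const X& r) { return l.value == r.value; };
    if (a == b) { std::cout << "equal via operator==\n"; }
    if (compare(a, b)) { std::cout << "equal via lambda\n"; }
    return 0;
}
Note that the free operator== must live at namespace scope, which is exactly the "separate function" the question hoped to avoid; the lambda keeps the definition local to main but cannot be spelled ==.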
Q:
Proving $\operatorname {rank} A + \operatorname {rank} B \le n $
Let $A$ and $B$ be two $n\times n$ matrices such that $AB = 0$. Prove that
$\operatorname {rank} A + \operatorname {rank} B \le n $.
What I did is to show that if the multiplication of the two matrices is zero then
$\displaystyle\sum_{i=1}^n a_{1,i}b_{i,1}=0\\
\displaystyle\sum_{i=1}^n a_{2,i}b_{i,2}=0\\
\dots\\
\displaystyle\sum_{i=1}^n a_{n,i}b_{i,n}=0$
So there has to be at least $n$ zeros in one or two of the matrices in order for $AB = 0$, so there are at least $n$ linearly dependent rows so the rank of the product is at most $n$.
I'm pretty sure this isn't rigorous enough nor on the right track.
A:
From $AB=0$ we have $\operatorname{im}(B)\subseteq \ker(A)$. This implies $\operatorname{rank}(B)\leq \dim(\ker(A))$. Consequently, by the rank-nullity theorem,
\begin{equation}
n=\operatorname{rank}(A)+\dim(\ker(A))\geq \operatorname{rank}(A)+\operatorname{rank}(B).
\end{equation}
Q:
Python FileNotFoundError how to handle long filenames
I have a weird problem. I can neither rename specific files, nor remove them. I get the FileNotFoundError.
Similar questions have been asked before. The solution to this problem was using a full path and not just the filename.
My script worked before using only the filenames, but using different files I get this error, even using the full path.
It seems that the filename is causing the error, but I cannot resolve it.
import os
cwd = os.getcwd()
file = "003de5664668f009cbaa7944fe188ee1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3."
change = "student_1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3."
oldname = os.path.join(cwd,file)
newname = os.path.join(cwd,change)
print(file in os.listdir())
print(os.path.isfile(file))
os.rename(oldname, newname)
I get the following output:
True
False
Traceback (most recent call last):
File "C:\Users\X\Desktop\code\sub\test.py", line 13, in <module>
os.rename(oldname, newname)
FileNotFoundError: [WinError 2] Das System kann die angegebene Datei nicht finden: 'C:\\Users\\X\\Desktop\\code\\sub\\003de5664668f009cbaa7944fe188ee1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3.' -> 'C:\\Users\\X\\Desktop\\code\\sub\\student_1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3.'
[Finished in 0.4s with exit code 1]
The file does show up if I use Windows search in the folder.
If I try to use the full path I also get a Windows error saying the file cannot be found.
I have also tried appending a unicode string (u'' + filename) to the strings, because it was suggested by a user.
The path length is < 260, so what is causing the problem?
A:
This is a Windows thing rather than a Python one: during path normalization, Win32 silently strips trailing periods (and spaces) from file names, so the name Python asks for no longer matches the name stored on disk.
If this is a once-off task, you can use two trailing periods as a workaround.
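As a hedged sketch of another workaround (reusing the names from the question): the Win32 extended-length prefix \\?\ disables path normalization, so the literal name, trailing period included, is passed through:
import os

cwd = os.getcwd()
file = "003de5664668f009cbaa7944fe188ee1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3."
change = "student_1_recursion1.c_2016-04-21-21-06-11_9bacb48fecd32b8cb99238721e7e27a3."

# The \\?\ prefix tells Win32 to skip normalization (which would strip
# the trailing period), so the names are used exactly as written.
oldname = "\\\\?\\" + os.path.join(cwd, file)
newname = "\\\\?\\" + os.path.join(cwd, change)
os.rename(oldname, newname)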
Q:
Castle - ActiveRecord - Inheritance
I'm trying to avoid creating the same properties in all ActiveRecord classes, so I am coding this:
Have a base class where I have my common properties: Id, Version, LastUpdate, etc...
public class IdentityBase<T> : ActiveRecordValidationBase<T> where T : class
Then my "child" class would have his own properties and should inherit from my IdentityBase.
[ActiveRecord("Users")]
public class User : IdentityBase<User>
Now I create an object user:
User user = new User()
and I can call user.Save() but I can't call user.FindAll() and many others public methods....
How can I solve this?
A:
I have ActiveRecord 2.0, and all methods like Find and FindAll are static, so try to use
User.FindAll()
instead of
user.FindAll()
Q:
Specify a single address for a self-hosted WCF service
I have a self-hosted WCF service written that needs to run on one specific address on a machine with multiple addresses. To that end, I have written the config so that the address to use is specified in the endpoint:
<endpoint address="http://A.B.C.D:8000/MyService" binding="webHttpBinding" name="MyServiceEndpoint" behaviorConfiguration="MyServiceBehavior" contract="IMyServiceInterface" />
When I run this app and start the service, it is running on ALL addresses rather than the one specified. I tried moving the address into the baseAddress field and leaving the endpoint address blank, but got the exact same result. What am I missing?
A:
OK, for anyone else who happens to run into this problem, it isn't with the service configuration, it's with the binding configuration.
The webHttpBinding binding has a hostNameComparisonMode property that defaults to StrongWildcard.
This means that an http service ignores the host name and responds to any hostname. As a side effect, it runs on all open addresses on the machine.
If this value is changed to Exact, then it uses the host name or IP address specified either in the endpoint or the base address.
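For illustration, a hedged configuration sketch (the binding name is invented here) that applies Exact comparison to the endpoint from the question:
<bindings>
  <webHttpBinding>
    <binding name="ExactHostBinding" hostNameComparisonMode="Exact" />
  </webHttpBinding>
</bindings>
<endpoint address="http://A.B.C.D:8000/MyService"
          binding="webHttpBinding"
          bindingConfiguration="ExactHostBinding"
          name="MyServiceEndpoint"
          behaviorConfiguration="MyServiceBehavior"
          contract="IMyServiceInterface" />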
Q:
Java : Creating a String class : ArrayIndexOutOfBoundsException
Note: Please don't say to use String class methods, as here I am creating all the String methods on my own.
Objective: Consider two strings, prince and soni. I want the computeConcatenation method to ask for a position (say 4 is entered), take the substring of the first name from the beginning up to the 4th position, and concatenate it with the last name, viz. soni. Hence I got prinsoni.
Error: ArrayIndexOutOfBoundsException on the line labeled Error within the computeConcatenation() method.
Reason for the error: the string so obtained (viz. the concatenation of the first and last names) can be at most the combined length of the first and last names.
So I created the string
char []firstSubString;
firstSubString = new char[so.computeLength(firstName) + so.computeLength(lastName)];
Its length right now is, as I thought, the sum of the first and last name lengths, but after the computeSubstring() call it changes to the length of the first name.
What I want:
Can you provide me a way so that computeSubstring doesn't end up
changing the length of firstSubString?
/**
* Take two strings
* Run a loop pause when counter is encountered
* Now use + to concatenate them
**/
@Override
public char[] computeConcatenation(char[] firstName, char[] lastName, int pos ) {
StringClass so = new StringClass();
Boolean flag = false;
char []firstSubString;
firstSubString = new char[so.computeLength(firstName) + so.computeLength(lastName)];
System.out.println( firstSubString.length ); // O/p is 10 (length of first name + last name)
firstSubString = so.computeSubstring(firstName, pos);
System.out.println( firstSubString.length ); // O/p is 6 (length of first name)
int len = so.computeLength(firstSubString);
// To find pos
for(int i = 0; i < so.computeLength(lastName); i++){
// ArrayIndexOutOfBoundsException on this line
Error : firstSubString[len + i] = lastName[i];
}
return firstSubString;
}
Here is the code for substring method
/**
* Traverse the string till pos. Store this string into a new string
*/
@Override
public char[] computeSubstring(char[] name, int pos) {
StringClass so = new StringClass();
char []newName;
newName = new char[so.computeLength(name)];
for(int i = 0; i < so.computeLength(name); i++){
if( i == pos) break;
newName[i] = name[i];
}
return newName;
}
A:
Well, it changes because you are overwriting firstSubString here:
firstSubString = so.computeSubstring(firstName, pos);
What you want to do, however, is copy the result of computeSubstring into the first part of firstSubString. You can do this rather simply with System.arraycopy():
char[] result = so.computeSubstring(firstName, pos);
System.arraycopy(result, 0, firstSubString, 0, result.length);
This will copy the result into the front of firstSubString. It will not alter its length at all.
Q:
Do I need to use a guitar capo for this song?
I'm singing Bella ciao (an Italian song) with 4 chords:
Am E A7 Dm
The problem is that I find it too low for me, so I tried barring the 5th fret with my index finger and playing the usual chord shapes with my other fingers, and it sounds good.
My question is:
Do I really need to replace my index finger with a capo, or do I just need more practice on the 5th fret? (I find playing on the 5th fret hard, and the sound is not really clear because of my index-finger barre.)
More generally:
What are the cases when the capo becomes indispensable?
A:
There is nothing wrong with using a capo, if it makes the sound you want.
Having said that, the barred shapes you are using are pretty simple ones, and it will be worthwhile learning to play them without a capo.
In your example you are raising the pitch of the whole song by 5 semitones.
So:
The Am becomes a Dm
The E becomes an A
The A7 becomes a D7
The Dm becomes a Gm
Most people choosing to barre the shapes for this set of chords would play the A/D7/Dm on the fifth fret. But rather than play a Dm shape barred on the 5th fret (tricky!) to get the Gm, they'd play a Em shape on the 3rd fret. Not using a capo gives you the freedom to move your barre around.
But you could also play the song using the standard shapes for those chords; it will sound different from the song with the capo, since these are different voicings of the same chords, but it will harmonise properly. It will be up to you which version you prefer.
Because it contains both an A and an Am this isn't a very common set of chords. However, A, D and E are the 1st, 4th and 5th in the key of A, and I, IV, V is a very common set of chords, so it's worth remembering what they are in various keys:
C, F, G
D, G, A
E, A, B
F, Bb, C
G, C, D
A, D, E
So you can try your song in at least 6 keys without using a capo, for example using Em, E7, Am and B. Some keys will need a barre for certain chords; others won't.
A capo becomes indispensable when you want to transpose chords that can't be played without open strings.
A:
When you use a capo you basically just change the key the song is in; in this case you go up 5 frets. This means you are not playing Am E A7 Dm anymore, but Dm A D7 Gm. You can now play these chords wherever you're comfortable, and sing as you would with a capo on the 5th fret. This process is called transposition (you find a brief explanation here; there's also a useful chart), and basically just means moving a song up or down x notes. If anything is not clear, just ask :-)
Q:
Want to add color to a rectangle drawn with an OpenGL function
I want to know how to add color to the rectangle drawn with the method below (which I took from a sample here). It should be a matter of setting the OpenGL color, but I don't know how to do it. Some help would be appreciated.
-(void) ccDrawFilledRect
{
HelloWorld *gs = [(swipeAppDelegate*)[[UIApplication sharedApplication] delegate] gameScene];
CGPoint poli[]= {gs.StartPoint,CGPointMake(gs.StartPoint.x,gs.EndPoint.y),gs.EndPoint,CGPointMake(gs.EndPoint.x,gs.StartPoint.y)};
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, poli);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
NSLog(@"openGL rectangles drawn !!");
// restore default state
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
}
A:
NeHe has a lot of easy OpenGL tutorials. This one is on adding color: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=03 . Make sure you have colors and color buffers enabled.
Have you tried putting glColor3d(1, 0, 0) before glVertexPointer?
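As a hedged sketch based on the method above (OpenGL ES 1.x, where glColor4f sets the current color): with GL_COLOR_ARRAY disabled, the current color is applied to every vertex that glDrawArrays emits.
glDisableClientState(GL_COLOR_ARRAY);
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);   // opaque red for the filled rect
glVertexPointer(2, GL_FLOAT, 0, poli);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);   // restore white so textures aren't tinted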
Q:
KPSS test returns NA
This KPSS Test
urkpssTest(Total_TimeSeries, type = c("tau"), lags = c("short"),use.lag = NULL, doplot = TRUE)
returns :
Title:
KPSS Unit Root Test
Test Results:
NA
Description:
Mon Jan 27 19:07:48 2020 by user: Dorian
And Total_TimeSeries is a time series from 2010 to 2020:
Edit :
dput(Total_TimeSeries)
structure(c(54.565998, 51.842636, 51.931438, 50.598824, 48.574871,
50.345829, 53.552467, 49.434525, 53.437279, 52.761509, 53.956108,
51.889263, 48.825638, 52.035145, 54.832932, 59.555943, 58.029816,
60.351376, 55.363148, 55.445427, 57.237236, 52.047546, 51.145355,
52.381363, 49.218082, 50.348812, 49.609833, 47.011604, 45.711575,
44.508175, 43.559517, 45.339302, 44.359692, 43.133011, 42.748051,
43.252781, 43.034451, 40.239799, 40.307343, 39.701355, 39.742962,
40.034271, 39.463467, 39.808048, 41.637642, 36.707565, 36.133751,
35.818573, 35.807354, 39.392067, 38.420208, 35.062462, 36.387794,
38.654194, 38.053482, 39.075058, 41.868896, 37.897301, 40.926952,
39.309097, 38.524506, 41.857777, 45.063141, 47.914803, 49.037395,
47.951973, 53.676476, 51.022629, 52.337696, 47.599995, 47.092079,
41.483109, 43.845032, 43.165207, 43.780632, 40.871666, 39.0299,
37.435116, 33.850685, 34.650036, 34.921112, 32.826305, 34.222004,
37.143398, 35.045361, 33.79879, 33.960514, 33.297249, 33.137741,
30.539099, 29.384655, 28.155655, 31.450399, 32.970413, 36.162968,
34.163601, 32.473633, 32.873924, 33.229721, 27.397377, 30.626112,
33.767406, 36.121822, 34.969234, 39.001106, 37.021599, 37.221977,
35.685745, 32.473591, 28.909643, 32.294392, 30.587198, 27.652954,
30.012209, 26.461485, 26.95772, 31.438154, 33.542507, 32.178139,
33.293926), .Tsp = c(2010, 2019.91666666667, 12), class = "ts")
So my question is the following: how should I interpret this NA result?
A:
Here is a solution to your NA problem.
First, install the urca package using install.packages("urca").
Then use the following code:
library(fUnitRoots)
my_urkpssTest <- function (x, type, lags, use.lag, doplot) {
x <- as.vector(x)
urca <- urca::ur.kpss(x, type = type[1], lags = lags[1], use.lag = use.lag)
output = capture.output(urca::summary(urca))[-(1:4)]
output = output[-length(output)]
for (i in 1:length(output)) output[i] = paste(" ", output[i])
ans = list(name = "ur.kpss", test = urca, output = output)
if (doplot)
plot(urca)
new("fHTEST", call = match.call(), data = list(x = x),
test = ans, title = "KPSS Unit Root Test", description = description())
}
my_urkpssTest(Total_TimeSeries, type = c("tau"), lags = c("long"),
use.lag = NULL, doplot = TRUE)
The output is:
Title: KPSS Unit Root Test
Test Results:
Test is of type: tau with 12 lags.
Value of test-statistic is: 0.0614
Critical value for a significance level of:
10pct 5pct 2.5pct 1pct
critical values 0.119 0.146 0.176 0.216
The problem is with the summary function inside the urkpssTest command.
I modified the code of urkpssTest to call urca::summary explicitly.
Q:
Why is my web not redirecting after Google Login
I have a web application that requires the user to log in using their Google account. However, after I log in, it doesn't redirect me to another page. It stays at the login page.
Before Login:
After Login:
I have tried using PHP's header() function but it did not redirect me. Am I missing something?
<?php
ob_start();
session_start();
require_once 'google-api-php-client-2.2.2/vendor/autoload.php';
$conn = new mysqli("localhost","root","","labelimagetool");
if($conn->connect_error){
die("Connection failed: ".$conn->connect_error);
echo 'Unable to connect to db!';
}
else{
echo 'Connected to db!';
}
$redirect_uri = 'http://localhost/labelimagetool/tool.php';
$client = new Google_Client();
$client->setClientId($client_id);
$client->setClientSecret($client_secret);
$client->setDeveloperKey($api_key);
$client->setRedirectUri($redirect_uri);
//$service = new Google_Service_Oauth2($client);
$client->addScope(Google_Service_Oauth2::USERINFO_EMAIL);
$client->setAccessType('offline'); // offline access
$client->setIncludeGrantedScopes(true);
//$client->authenticate(isset($_GET['code']));
//$client->authenticate($code);
$plus = new Google_Service_Plus($client);
if(isset($_REQUEST['logout'])){
session_unset();
}
if(isset($_GET['code'])){
$client->authenticate($_GET['code']);
$_SESSION['access_token'] = $client->getAccessToken();
$redirect = 'http://'.$_SERVER['HTTP_HOST'].'/tool.php';
header('Location:'.filter_var($redirect,FILTER_SANITIZE_URL));
exit();
}
else{
echo "no value";
}
if(isset($_SESSION['access_token']) && $_SESSION['access_token']){
$client->setAccessToken($_SESSION['access_token']);
$me = $plus->people->get('me');
$email = $me['emails'][0]['value'];
$name = $me['displayName'];
$sqlEmail = "SELECT * FROM users WHERE email='".$email."'";
$checkEmail = mysqli_query($conn,$sqlEmail);
if(mysqli_num_rows($checkEmail) >0){
}
else{
$insertStmt = "INSERT INTO users ('email','name','status') VALUES('".$email."','".$name."','user')";
mysqli_query($conn,$insertStmt);
}
}
else{
$authUrl = $client->createAuthUrl();
}
ob_end_flush();
?>
Please help me..thank you.
A:
Your problem is that you are setting a redirect header but then not following through.
This section of code is executes:
if(isset($_GET['code'])){
$client->authenticate($_GET['code']);
$_SESSION['access_token'] = $client->getAccessToken();
$redirect = 'http://'.$_SERVER['HTTP_HOST'].'/tool.php';
header('Location:'.filter_var($redirect,FILTER_SANITIZE_URL));
}
but instead of exiting the PHP after you set the Location header you continue on:
if(isset($_SESSION['access_token']) && $_SESSION['access_token']){
.....
}
and then finally this code executes:
else{
$authUrl = $client->createAuthUrl();
}
SOLUTION:
Change to this block of code: notice the addition of exit()
if(isset($_GET['code'])){
$client->authenticate($_GET['code']);
$_SESSION['access_token'] = $client->getAccessToken();
$redirect = 'http://'.$_SERVER['HTTP_HOST'].'/tool.php';
header('Location:'.filter_var($redirect,FILTER_SANITIZE_URL));
exit();
}
Suggestion:
If you plan to send HTTP headers to the client, always add ob_start(); at the beginning of your PHP code. This turns on output buffering. This prevents output from being sent before your headers. Follow up with ob_end_flush(); before your code exits.
<?php
ob_start();
Enable error reporting. At the top of your PHP file add (plus my other suggestion):
<?php
// enable output buffering
ob_start();
// Enable error reporting
error_reporting(E_ALL);
ini_set('display_errors', 1);
Also, if you have PHP error logging setup, you will be able to see errors and warning. I will bet there are a few that need to be fixed.
Q:
Can I use Azure Active Directory to hold my application's user store?
I'm designing a solution for an ERP requirement. The Client insists on using AAD for one point management of users for different applications.
Since AAD has the capability of rendering Oauth service, I'm intending to use it as my OAUTH server and utilize its tokens inside my WebAPI services. But was wondering how I can capture the failed user login attempts as I need to apply locking mechanism.
When I found that AAD can handle this locking mechanism also through some configurations, I'm now left out with a question whether I can just use AAD for my user store, meaning I will have the users, their credentials and their roles stored in AAD, while I will have the permissions for each role and other data stored in my application's database.
Is this a feasible solution? or is there a different way of handling this?
Note: We are using noSQL database.
A:
Yes, this is a feasible solution. You can use application roles to assign roles to users.
You can define the application roles by adding them to the application manifest. Then you can assign these roles to a user.
"appRoles": [
{
"allowedMemberTypes": [
"User"
],
"description": "Creators can create Surveys",
"displayName": "SurveyCreator",
"id": "1b4f816e-5eaf-48b9-8613-7923830595ad",
"isEnabled": true,
"value": "SurveyCreator"
},
{
"allowedMemberTypes": [
"User"
],
"description": "Administrators can manage the Surveys in their tenant",
"displayName": "SurveyAdmin",
"id": "c20e145e-5459-4a6c-a074-b942bbd4cfe1",
"isEnabled": true,
"value": "SurveyAdmin"
}
],
The user list with roles listed.
Q:
Error when calling a groupby object inside a Pandas DataFrame
I've got this dataframe:
person_code #CNAE growth size
0 231 32 0.54 32
1 233 43 0.12 333
2 432 32 0.44 21
3 431 56 0.32 23
4 654 89 0.12 89
5 764 32 0.20 211
6 434 32 0.82 90
I need to create a new column called "top3growth". For that I will need to check df's #CNAE for each row and add an extra column pointing out the 3 persons with the highest growth for that CNAE (it will nest a dataframe inside the df dataframe). To create these top-3 dataframes I'm using this groupby:
a = df.groupby('#CNAE',group_keys=False).apply(pd.DataFrame.nlargest,n=3,columns='growth')
(This solution came out of this question.)
It should look like this:
person_code #CNAE growth size top3growth ...
0 231 32 0.54 32 [df_top3_type_32]
1 233 43 0.12 333 [df_top3_type_43]
2 432 32 0.44 21 [df_top3_type_32]
3 431 56 0.32 23 [df_top3_type_56]
4 654 89 0.12 89 [df_top3_type_89]
5 764 32 0.20 211 [df_top3_type_32]
6 434 32 0.82 90 [df_top3_type_32]
...
df_top3_type_32 should look like this (for example):
person_code #CNAE growth size
6 434 32 0.82 90
0 231 32 0.54 32
2 432 32 0.44 21
I'm trying to solve my problem by using:
df['top3growth']=np.nan
for i in df.index:
df['top3growth'].loc[i]=a[a['#CNAE'] == df['#CNAE'].loc[i]]
But I'm getting:
ValueError: Incompatible indexer with DataFrame
Does anyone know what's going on?
Is there a more efficient way of doing this (not using a for loop)?
A:
There is one way: convert a to a dict, then map it back.
#a=df.groupby('#CNAE',group_keys=False).apply(pd.DataFrame.nlargest,n=3,columns='growth')
df['top3growth']=df['#CNAE'].map(a.groupby('#CNAE').apply(lambda x : x.to_dict()))
df
Out[195]:
person_code #CNAE growth size \
0 231 32 0.54 32
1 233 43 0.12 333
2 432 32 0.44 21
3 431 56 0.32 23
4 654 89 0.12 89
5 764 32 0.20 211
6 434 32 0.82 90
top3growth
0 {'person_code': {0: 231, 2: 432, 6: 434}, 'gro...
1 {'person_code': {1: 233}, 'growth': {1: 0.12},...
2 {'person_code': {0: 231, 2: 432, 6: 434}, 'gro...
3 {'person_code': {3: 431}, 'growth': {3: 0.32},...
4 {'person_code': {4: 654}, 'growth': {4: 0.12},...
5 {'person_code': {0: 231, 2: 432, 6: 434}, 'gro...
6 {'person_code': {0: 231, 2: 432, 6: 434}, 'gro...
After create your new column , if you want to convert the single cell back to dataframe
pd.DataFrame(df.top3growth[0])
Out[197]:
#CNAE growth person_code size
0 32 0.54 231 32
2 32 0.44 432 21
6 32 0.82 434 90
Q:
Proof of big-O propositions
I don't understand how to prove the statements below.
$O(f(n))=O(g(n)) \iff \Omega(f(n))=\Omega(g(n)) \iff \Theta(f(n))=\Theta(g(n))$
$f(n)=\Theta(g(n)) \iff g(n)=\Theta(f(n))$
How can I prove these statements?
A:
We will be using the following definitions:
Let $f\colon \mathbb{Z}_+ \to \mathbb{Z}_+$ be a function mapping positive integers to positive integers.
The class $O(f)$ consists of all functions $g\colon \mathbb{Z}_+ \to \mathbb{Z}_+$ such that for some real $C>0$, for all $n \in \mathbb{Z}_+$ we have $g(n) \leq Cf(n)$.
The class $\Omega(f)$ consists of all functions $g\colon \mathbb{Z}_+ \to \mathbb{Z}_+$ such that for some real $c>0$, for all $n \in \mathbb{Z}_+$ we have $g(n) \geq cf(n)$.
The class $\Theta(f)$ consists of all functions $g\colon \mathbb{Z}_+ \to \mathbb{Z}_+$ such that for some real $C,c>0$, for all $n \in \mathbb{Z}_+$ we have $cf(n) \leq g(n) \leq Cf(n)$.
We also write $g(n) = O(f(n))$ instead of $g(n) \in O(f(n))$, and similarly for the other two.
These are equivalent to the more standard definition in which the above only has to hold for large enough $n$. You can also carry out the proofs below using the standard definitions — this entails only small changes.
Let's start with the second statement: $f(n) = \Theta(g(n))$ iff $g(n) = \Theta(f(n))$.
Suppose that $f(n) = \Theta(g(n))$. Then there exist $C,c>0$ such that for all $n$ we have $cf(n) \leq g(n) \leq Cf(n)$. Therefore for all $n$ we have $C^{-1} g(n) \leq f(n) \leq c^{-1} g(n)$, and so $g(n) = \Theta(f(n))$. Similarly, if $g(n) = \Theta(f(n))$ then $f(n) = \Theta(g(n))$.
Now let us go back to the first statement: $O(f(n)) = O(g(n))$ iff $\Omega(f(n)) = \Omega(g(n))$ iff $\Theta(f(n)) = \Theta(g(n))$.
Suppose that $O(f(n)) = O(g(n))$. It is easy to check that $g(n) = O(g(n))$ (take $C = 1$), and so $O(f(n)) = O(g(n))$ implies that $g(n) = O(f(n))$. That is, there exists $D>0$ such that $g(n) \leq Df(n)$ for all $n$. Similarly, $f(n) = O(g(n))$, and so there exists $E>0$ such that $f(n) \leq Eg(n)$ for all $n$. Suppose now that $h(n) = \Theta(f(n))$. Then there exist $C,c>0$ such that $cf(n) \leq h(n) \leq Cf(n)$ for all $n$. This implies that $cD^{-1} g(n) \leq h(n) \leq CEg(n)$, and so $h(n) = \Theta(g(n))$. Similarly, if $h(n) = \Theta(g(n))$ then $h(n) = \Theta(f(n))$, and so $\Theta(f(n)) = \Theta(g(n))$.
Similarly, if $\Omega(f(n)) = \Omega(g(n))$ then $\Theta(f(n)) = \Theta(g(n))$.
Conversely, if $\Theta(f(n)) = \Theta(g(n))$ then, as before, $f(n) = \Theta(g(n))$. This means that there exist $C,c>0$ such that $cg(n) \leq f(n) \leq Cg(n)$ for all $n$. Now suppose that $h(n) = O(f(n))$. Then there exists $D>0$ such that $h(n) \leq Df(n)$ for all $n$. Since $h(n) \leq Df(n) \leq DCg(n)$, we see that $h(n) = O(g(n))$. Similarly, if $h(n) = O(g(n))$ then there exists $E>0$ such that $h(n) \leq Eg(n)$. This implies that $h(n) \leq Ec^{-1} f(n)$, and so $h(n) = O(f(n))$. Thus $O(g(n)) = O(f(n))$. Similarly, $\Omega(g(n)) = \Omega(f(n))$.
Q:
Sentence case for titles in biblatex
I'd like to have the titles of articles in biblatex in sentence case (first word capitalized, the rest lowercase, override with {}), as is the default in BibTeX.
I tried:
\DeclareFieldFormat{titlecase}{\MakeSentenceCase{#1}}
However, it also converts other fields into sentence case, such as the booktitle:
[Pai99] P Paillier. “ Public-key cryptosystems based on composite degree residuosity classes”. Eurocrypt. 1999
I'd like:
[Pai99] P Paillier. “ Public-key cryptosystems based on composite degree residuosity classes”. EUROCRYPT. 1999
(This assumes that booktitle is EUROCRYPT in the .bib file)
Is there any way to do this, short of adding {} to each booktitle entry?
A:
The format definition
\DeclareFieldFormat{titlecase}{\MakeSentenceCase{#1}}
makes all titles in sentence case, which isn't what you want. Titles need to be printed according to both the entry and field types. For example, with the title field we need to handle @article and @book entries differently. With @inproceedings entries we need to handle the title and booktitle fields differently.
To do this we can redefine the title bibmacro to print the title field of @article and any @in* entry type in sentence case. Taking the original definition found in biblatex.def:
\DeclareFieldFormat{sentencecase}{\MakeSentenceCase{#1}}
\renewbibmacro*{title}{%
\ifthenelse{\iffieldundef{title}\AND\iffieldundef{subtitle}}
{}
{\ifthenelse{\ifentrytype{article}\OR\ifentrytype{inbook}%
\OR\ifentrytype{incollection}\OR\ifentrytype{inproceedings}%
\OR\ifentrytype{inreference}}
{\printtext[title]{%
\printfield[sentencecase]{title}%
\setunit{\subtitlepunct}%
\printfield[sentencecase]{subtitle}}}%
{\printtext[title]{%
\printfield[titlecase]{title}%
\setunit{\subtitlepunct}%
\printfield[titlecase]{subtitle}}}%
\newunit}%
\printfield{titleaddon}}
Alternatively we can identify book-like entries directly and apply sentence casing to everything else. This is trickier because many more types qualify as book-like references and titles for these sources are printed by more than just one macro. In biblatex.def these include: title, booktitle, maintitle, journal, periodical and issue. To avoid redefining all of these, you can redefine the titlecase format instead.
\DeclareFieldFormat{titlecase}{\MakeTitleCase{#1}}
\newrobustcmd{\MakeTitleCase}[1]{%
\ifthenelse{\ifcurrentfield{booktitle}\OR\ifcurrentfield{booksubtitle}%
\OR\ifcurrentfield{maintitle}\OR\ifcurrentfield{mainsubtitle}%
\OR\ifcurrentfield{journaltitle}\OR\ifcurrentfield{journalsubtitle}%
\OR\ifcurrentfield{issuetitle}\OR\ifcurrentfield{issuesubtitle}%
\OR\ifentrytype{book}\OR\ifentrytype{mvbook}\OR\ifentrytype{bookinbook}%
\OR\ifentrytype{booklet}\OR\ifentrytype{suppbook}%
\OR\ifentrytype{collection}\OR\ifentrytype{mvcollection}%
\OR\ifentrytype{suppcollection}\OR\ifentrytype{manual}%
\OR\ifentrytype{periodical}\OR\ifentrytype{suppperiodical}%
\OR\ifentrytype{proceedings}\OR\ifentrytype{mvproceedings}%
\OR\ifentrytype{reference}\OR\ifentrytype{mvreference}%
\OR\ifentrytype{report}\OR\ifentrytype{thesis}}
{#1}
{\MakeSentenceCase{#1}}}
edit by @moewe: Note that the biblatex documentation recommends to use the starred form \MakeSentenceCase* instead of \MakeSentenceCase. The starred macro considers the language of the entry (as given in the langid field, or failing that assuming the current language) and only applies sentence case where it is appropriate (by default only for English-language publications).
A:
biblatex-ext introduces new field formats for a finer control over title casing. Where standard biblatex has the format titlecase that applies to all title-like fields alike, biblatex-ext has
titlecase:title
titlecase:booktitle
titlecase:maintitle
titlecase:maintitle
titlecase:journaltitle
titlecase:issuetitle
which apply only to the specific field in their name. As usual, these field formats can be defined per type. See pp. 21-22 of the biblatex-ext documentation. (Of course biblatex-ext still supports the standard titlecase format.)
If you only want the titles of entries whose titles are in quotation marks in sentence case, then use
\documentclass[british]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{babel}
\usepackage{csquotes}
\usepackage[style=ext-authoryear, backend=biber]{biblatex}
\DeclareFieldFormat
[article,inbook,incollection,inproceedings,patent,thesis,unpublished]
{titlecase:title}{\MakeSentenceCase*{#1}}
\addbibresource{biblatex-examples.bib}
\begin{document}
\nocite{springer,weinberg,doody,maron,pines}
\printbibliography
\end{document}
Q:
What happened to reputation when Meta Stack Exchange split from Stack Overflow?
Recently I came to know that MSE was split from MSO. I am wondering if the reputation of Meta users was duplicated in this new MSE.
Was the reputation split or duplicated?
Is that why some users have extremely high reputation? (because the Rep in MSE represents the Rep gotten from SO before split)
A:
This site is the old Meta Stack Overflow, renamed to Meta Stack Exchange, with a fresh graphical design and much of the Stack Overflow specific content cleaned up. An entirely new site was created to become Meta Stack Overflow, and some content was moved over to it.
The new Meta Stack Overflow site is a regular child Meta, where you do not have an independent reputation. Instead your reputation is copied over from the main site, once every hour.
Meta Stack Exchange is an exception in that it is not a child Meta of a main site, and you earn reputation like on a regular Stack Exchange site here. All older accounts with a lot of reputation here earned it from before the rename.
TL;DR: MSE is the old site, renamed, accounts here simply didn't lose reputation. The current MSO is the new site, and you don't earn rep there as it is a regular child Meta.
Note that at no point did this site ever share reputation with Stack Overflow! Reputation here has always been separate.
Q:
Intersection of irreducible sets in $\mathbb A_{\mathbb C}^3$ is not irreducible
I am looking for a counterexample in order to answer to the following:
Is the intersection of two closed irreducible sets in $\mathbb A_{\mathbb C}^3$ still irreducible?
The topology on $\mathbb A_{\mathbb C}^3$ is clearly the Zariski one; by irreducible set, I mean a set which cannot be written as a union of two proper closed subsets (equivalently, every open subset is dense).
I think the answer to the question is "No", but I do not manage to find a counterexample. I think I would be happy if I found two prime ideals (in $\mathbb C[x,y,z]$) s.t. their sum is not prime. Am I right? Is there an easier way?
Thanks.
A:
Choose any two distinct irreducible plane curves that meet in more than one point; they intersect in a finite set of at least two points, which is reducible. For instance, in $\mathbb A_{\mathbb C}^3$ the line $V(y,z)$ and the circle $V(x^2+y^2-1,\,z)$ are both closed and irreducible, but their intersection is the two-point set $\{(\pm 1,0,0)\}$, a union of two proper closed subsets.
Q:
Load data into ListFragments
I would like to load data from the network into fragments. Inside the Activity I have a ViewPager, and I am using a FragmentPagerAdapter to provide fragments to it. The problem is:
How can I detect from the activity that ALL fragments in the ViewPager are created and ready to show data? How can I inform the fragments that there is data inside the activity that should be shown?
A:
How can I detect from the activity that ALL fragments in the ViewPager are
created and ready to show data?
This sounds a bit strange to ask. Your Activity doesn't need to know when all of the ViewPager fragments are initialized: only one will actually be seen by the user (plus one kept available on each side, unless you change setOffscreenPageLimit()), so it doesn't make sense to update the rest.
How can I inform the fragments that there is data inside the activity
that should be shown?
Your fragments could register themselves as listeners for an Activity data-load event. The Activity will do its job of getting the data and then call the update method on all of the registered fragments (which should be stored in WeakReferences to avoid holding on to them when we shouldn't). The main problem is that you'll risk trying to update fragments whose views haven't been created yet. A sketch of this wiring follows.
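A minimal sketch of that registration pattern; the interface and method names here are invented for illustration, not Android APIs:
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener contract between the Activity and its fragments.
interface DataListener {
    void onDataLoaded(Object data);
}

class DataHoldingActivity /* extends FragmentActivity */ {
    private final List<WeakReference<DataListener>> listeners = new ArrayList<>();

    // Fragments call this when they are created (e.g. from onActivityCreated()).
    public void registerListener(DataListener listener) {
        listeners.add(new WeakReference<>(listener));
    }

    // Called once the Activity's network load finishes.
    private void notifyDataLoaded(Object data) {
        for (WeakReference<DataListener> ref : listeners) {
            DataListener l = ref.get(); // may be null if the fragment was collected
            if (l != null) {
                l.onDataLoaded(data);
            }
        }
    }
}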
Another hacky approach is to use something like this from the Activity:
// get the currently visible fragment (add/subtract one for the
// off-screen fragments available on each side of the current item)
Fragment f = getSupportFragmentManager().findFragmentByTag("android:switcher:" + R.id.ViewPagerId + ":" + mViewPager.getCurrentItem());
f.update();
The rest of the fragments will check for new data being available when they get recreated.
Q:
Excel & VBA - Changing tab colour if first 3 letters of tab name = "xxx"
New to VBA in excel, but hoping to get some help with a macro while I find my feet. Any help would be greatly appreciated.
I have a workbook where I would like to automatically colour tabs based on the tab names. My tab/sheet names are often codes. Some of my existing sheet names (for example) are:
CIS22ABC
CIS22CBA
NAS22XYZ
NAS22ZXY
MY DATA
ADMIN, etc.
I am trying to implement a script that runs across the entire Workbook (i.e. under "ThisWorkbook"), checks the first 3 letters of every tab name and sets the tab colour based on those letters. There are lots of sheets being added and removed all the time, so a hard-coded array of names won't work.
In short, I am hoping to do the following:
If the first 3 letters of the sheet name = "CIS" then Tab.Color = RGB(0, 255, 255)
If the first 3 letters of the sheet name = "NAS" then Tab.Color = RGB(66, 134, 244)
Otherwise do nothing!
Again, any help would be great. Thank you.
A:
This will automatically execute every time you add a new sheet.
There are a number of events you can tie this to in order to have the macro fire automatically without user intervention. A few notable ones that may suit your needs better than the NewSheet event that I used below are SheetChange, SheetBeforeDelete, SheetActivate, etc.
This code will need to be placed in the coding space under ThisWorkbook, rather than in a sheet or module, in the VBE.
Option Explicit
Private Sub Workbook_NewSheet(ByVal Sh As Object)
Dim ws As Worksheet
For Each ws In Worksheets
Select Case Left(ws.Name, 3)
Case "CIS"
ws.Tab.Color = RGB(0, 255, 255)
Case "NAS"
ws.Tab.Color = RGB(66, 134, 244)
'Case "ABC"
'Add as many of these as you need inbetween _
Select Case and End Select
End Select
Next ws
End Sub
| {
"pile_set_name": "StackExchange"
} |
Q:
composer refuses to install a package even though dependency version number is within range
My package relies on illuminate/support ~4.1.
I am trying to install this package in a Laravel project whose composer.json file demands "laravel/framework": "4.1.*".
But when I run composer require and try to install this package, I ultimately run into this error each time:
Your requirements could not be resolved to an installable set of packages.
Problem 1
- Conclusion: remove laravel/framework v4.1.29
- adityamenon/postcodes-io-laravel 1.0.0 requires illuminate/support 4.2.* -> satisfiable by illuminate/support[v4.2.1, v4.2.2, v4.2.3, v4.2.4, v4.2.5, v4.2.6, v4.2.7, v4.2.8, v4.2.9].
- adityamenon/postcodes-io-laravel 1.0.1 requires illuminate/support 4.2.* -> satisfiable by illuminate/support[v4.2.1, v4.2.2, v4.2.3, v4.2.4, v4.2.5, v4.2.6, v4.2.7, v4.2.8, v4.2.9].
- don't install illuminate/support v4.2.1|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.2|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.3|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.4|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.5|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.6|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.7|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.8|don't install laravel/framework v4.1.29
- don't install illuminate/support v4.2.9|don't install laravel/framework v4.1.29
- Installation request for laravel/framework == 4.1.29.0 -> satisfiable by laravel/framework[v4.1.29].
- Installation request for adityamenon/postcodes-io-laravel ~1.0 -> satisfiable by adityamenon/postcodes-io-laravel[1.0.0, 1.0.1].
Installation failed, reverting ./composer.json to its original content.
What am I doing wrong?
A:
You install adityamenon/postcodes-io-laravel ~1.0, as we read in one of the last items in the error message:
Installation request for adityamenon/postcodes-io-laravel ~1.0 -> satisfiable by adityamenon/postcodes-io-laravel[1.0.0, 1.0.1].
This means 1.0.0 or 1.0.1 (also shown in that item). Looking at Packagist, both 1.0.0 and 1.0.1 require illuminate/support 4.2.* (shown in the second and third items of the error message). You install laravel/framework 4.1.*. Since 4.1.* is not within the range of 4.2.* (the version the package requires), it'll fail.
You probably want to install the dev version instead, which you can do by requiring the dev-master branch:
{
...
"require": {
"adityamenon/postcodes-io-laravel": "dev-master"
}
}
Btw, it's not a good idea to really rely on dev-master. You should always try to alias the master branch to a specific dev version.
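For example, an inline alias (a sketch; the 1.0.x-dev alias is arbitrary) lets other constraints keep resolving against a concrete version while you track the branch:
{
    "require": {
        "adityamenon/postcodes-io-laravel": "dev-master as 1.0.x-dev"
    }
}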
| {
"pile_set_name": "StackExchange"
} |
Q:
how to create a .desktop file to launch a python script
I made my little Python app, but now I can't launch it from the .desktop file that I made:
[Desktop Entry]
Name=MyApp
Version=0.1
Exec=/usr/share/MyApp/MyApp.py
Icon=/usr/share/MyApp/img/MyApp.png
Comment=Descriton......
Type=Application
Terminal=false
StartupNotify=false
Categories=Video;GTK;GNOME
I created the /usr/share/MyApp folder as root, and MyApp.py has its executable bit set.
Every time I double-click MyApp.desktop, MyApp.py launches and displays a systray icon, but then it closes and Ubuntu displays an error message.
If I run MyApp.py from a terminal or double-click the .py file itself, it runs normally with no crashes.
A:
May I offer a few notes and improvements over the accepted answer?
Manually installing software to /usr/share is strongly discouraged!!! That tree should be reserved for software installed by your package manager (Ubuntu Software Center, apt, etc). There is /usr/local/share for that. Or, if you don't want to use sudo, you may install for your user only at ~/.local/share. See https://askubuntu.com/a/135679/11015 for more information about software install directories.
If you want to execute a .desktop file by double-clicking on it, in Nautilus or your Desktop, just make the .desktop file executable! Note that this is not required if you copy it to /usr/share/applications and launch it via Dash/Menu. Or, better: copy to ~/.local/share/applications or at least /usr/local/share/applications, as suggested above.
Icon=/usr/share/MyApp/MyApp.py makes no sense: a .py file is not a valid image, so it cannot be used as an icon. If you want to use the default python icon, use /usr/share/pixmaps/python.xpm
The main difference between your .desktop file and the accepted answer is the Path=/usr/share/MyApp/ statement. If that made your app work, it means your software requires the current directory to be the application directory. And that is a bad thing: your software should be able to run fine and find its data files regardless of current directory. (hint: use python's __file__)
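For instance, a sketch of the __file__ idiom (the file names follow the question; the variable names are mine):
import os

# directory containing MyApp.py, regardless of the current working directory
APP_DIR = os.path.dirname(os.path.abspath(__file__))
icon_path = os.path.join(APP_DIR, "img", "MyApp.png")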
If your application is a GUI (ie, it has a window), then add StartupNotify=true. It will help the launcher identify its window when it is running.
A:
Try following text in your .desktop file.
[Desktop Entry]
Version=1.0
Name=ProgramName
Comment=This is my comment
Exec=/usr/share/MyApp/MyApp.py
Icon=/usr/share/MyApp/MyApp.py
Path=/usr/share/MyApp/
Terminal=false
Type=Application
Categories=Utility;Application;
| {
"pile_set_name": "StackExchange"
} |
Q:
Caulk or Grout the base of a toilet?
I recently re-tiled my bathroom floor and removed the toilet to do so. I noticed that the builders had used grout around the base of the toilet when I was removing it and demoing the tile. Should I use grout or caulk around the base of the toilet? I've read some things that would suggest caulk is a better option.
The toilet is installed with the appropriate wax ring and is working fine. I do notice the slightest rocking if I push on the toilet. I'm guessing this is due to the floor not being completely level. The flange bolts are as tight as I'm willing to go with them w/o risk of cracking the porcelain.
Will the grout/caulk help to completely stabilize the toilet as well?
A:
The grout may help to keep the toilet stable for a while, but to ensure it remains level and secure you should install shims. I've used plastic building shims that can be snapped off at 2 inch increments. Any flat material that is water-proof will do. Loosen the bolts at the base of the toilet first. Place a level on the rim of the bowl and shim up the low side slightly past level. Snug the shim side and then the other side. You can now either grout or caulk. Caulk is quick and easy to clean up. Grout will support the toilet better.
A:
Code requires toilets to be caulked at the floor; that, IMO, is a mistake. If the toilet does develop a leak, the water will be trapped between the toilet and the subfloor, and it may leak for a while before it is detected. So much for that.
The toilet can be shimmed to keep it from rocking. Because of the rocking, what is not leaking now eventually will leak.
The grout that the other installer used will act as shims to keep it from rocking as well, but that brings us back to the "if it leaks" issue. If you feel you need to seal it in some way, do not seal it 100%. Leave an inch or so unsealed at the back, where it is not noticeable; that way it will look good and water can get out if need be.
A:
NEITHER !!!
I don't care what code says. I have had 40+ houses inspected the past 15 years, never caulked one toilet, never had one inspector ask me to. I have had a few point it out and say they don't either but that is it.
Points:
caulk or grout does not stabilize your toilet; a flat surface does that. If your surface is not flat, there are things you can buy to put under the toilet to help. Using grout for this is ludicrous: it will eventually fail and make a mess in your bathroom. This is really half-assed.
my first issue with caulk is that it would hide small leaks. I want to know right away if human waste is leaking into my house - I don't care if it is a few drops a month. To me this is more than enough reason to never do it.
but then point number two is: how do you expect your caulk to look after 6 months in your bathroom? Better buy yellow caulk. There is nothing worse than having guests over using the guest bathroom that your kids use and seeing pee-pee caulk.
There are some dumb codes in the book, some dumber than this one. No decent inspector would ever enforce crap like this as he wouldn't enforce the 50 other dumb codes in the book. I remember asking the last inspector that pointed it out, "Would I caulk it if the bathroom was carpet?" He says, "Who would carpet a bathroom?" I say, "Who would caulk a toilet?"
| {
"pile_set_name": "StackExchange"
} |
Q:
Grab href with jQuery on click
I was wondering, is there any way to grab an href link with jQuery?
For example, I have these:
<li><a href="head.php">Head</a></li>
<li><a href="body.php">Body</a></li>
<li><a href="settings.php">Settings</a></li>
So I have made some animation with jQuery, but I had to call preventDefault();
So now, on click of any of these <a> tags, I get my fancy animation and that's it.
For the index I was able to make it redirect, because it is basically index.php, so I wrote the following:
$('a#logo').on('click', function() {
setTimeout(function() {
window.location.replace('index.php');
}, 1100);
});
Now, making IDs for every link and writing all of that out with jQuery would just be way too much, right?
So how can I tell jQuery: hey, on click of a given <a>, delay the redirect for a second and then continue where you were going before (redirect the page to the specified href)?
I hope you guys can understand me, and I truly hope that this is possible!
A:
$('a').click(function(e){
e.preventDefault();
var href = $(this).attr('href');
setTimeout(function(){
window.location.href = href;
}, 1100);
});
| {
"pile_set_name": "StackExchange"
} |
Q:
How to delete lines where certain pattern appears on the specific position
I am having a file that looks like this:
PEBP1_HUMAN Homo sapiens P30086 PDB; 1BD9; X-ray; 2.05 A; A/B=1-187.
PDB; 1BEH; X-ray; 1.75 A; A/B=1-187.
PDB; 2L7W; NMR; -; A=1-187.
PDB; 2QYQ; X-ray; 1.95 A; A=1-187.
PECA1_HUMAN Homo sapiens P16284 PDB; 2KY5; NMR; -; A=686-738.
PDB; 5C14; X-ray; 2.80 A; A/B=28-229.
PDB; 5GEM; X-ray; 3.01 A; A/B=28-232.
PELO_HUMAN Homo sapiens Q9BRX2 PDB; 1X52; NMR; -; A=261-371.
PDB; 5EO3; X-ray; 2.60 A; A/B=265-385.
PDB; 5LZW; EM; 3.53 A; ii=1-385.
PDB; 5LZX; EM; 3.67 A; ii=1-385.
PDB; 5LZY; EM; 3.99 A; ii=1-385.
PDB; 5LZZ; EM; 3.47 A; ii=1-385.
From this file I want to match all EM; fields that appear right after the pattern PDB; (four-letter code);, i.e. in the third column, where either X-ray;, NMR; or EM; can be found. Lines that have EM; there should be removed. Is there some shell command that I can use to match these fields and remove those lines?
Importantly, when matching, include the space before EM, i.e. match " EM;" with the leading space.
Expected result is:
PEBP1_HUMAN Homo sapiens P30086 PDB; 1BD9; X-ray; 2.05 A; A/B=1-187.
PDB; 1BEH; X-ray; 1.75 A; A/B=1-187.
PDB; 2L7W; NMR; -; A=1-187.
PDB; 2QYQ; X-ray; 1.95 A; A=1-187.
PECA1_HUMAN Homo sapiens P16284 PDB; 2KY5; NMR; -; A=686-738.
PDB; 5C14; X-ray; 2.80 A; A/B=28-229.
PDB; 5GEM; X-ray; 3.01 A; A/B=28-232.
PELO_HUMAN Homo sapiens Q9BRX2 PDB; 1X52; NMR; -; A=261-371.
PDB; 5EO3; X-ray; 2.60 A; A/B=265-385.
A:
awk can do that:
awk '{if(!($1=="PDB;"&&$3=="EM;")){print}}' <yourfile
This tests whether the first column (by default, whitespace is taken as the delimiter) of the current line is PDB; and the third column is EM;, and prints the line only if the two conditions are not both true.
Output
$ awk '{if(!($1=="PDB;"&&$3=="EM;")){print}}' <test
PEBP1_HUMAN Homo sapiens P30086 PDB; 1BD9; X-ray; 2.05 A; A/B=1-187.
PDB; 1BEH; X-ray; 1.75 A; A/B=1-187.
PDB; 2L7W; NMR; -; A=1-187.
PDB; 2QYQ; X-ray; 1.95 A; A=1-187.
PECA1_HUMAN Homo sapiens P16284 PDB; 2KY5; NMR; -; A=686-738.
PDB; 5C14; X-ray; 2.80 A; A/B=28-229.
PDB; 5GEM; X-ray; 3.01 A; A/B=28-232.
PELO_HUMAN Homo sapiens Q9BRX2 PDB; 1X52; NMR; -; A=261-371.
PDB; 5EO3; X-ray; 2.60 A; A/B=265-385.
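As an aside, the same test can be written more compactly by relying on awk's default action, which prints every line for which the pattern is true:
awk '!($1=="PDB;" && $3=="EM;")' <test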
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does my code say that 90 was repeated 6 times, instead of 5?
The problem I am working on is called "Birthday Cake Candles" from HackerRank.
Basically, the program takes in inputs and then outputs the number of times the largest input occurs.
For example, the input could be:
4
3 2 1 3
4 would be the number of elements in the array.
3 2 1 3 would be the elements in the array.
The output would be 2, since 3 (the largest) appears 2 times.
This is my code:
#include <iostream>
using namespace std;
int main() {
long long num;
cin >> num;
long long arr[num];
for (int i = 0; i < num; i++) {
cin >> arr[i];
}
long long qwer{};
//find the largest value
for (int i = 1; i<num; i++) {
if (arr[0] < arr[i]) {
arr[0] = arr[i];
if (arr[0] == arr[i]) {
arr[0] = arr[i];
}
}
qwer = arr[0];
}
long long howmany = 0;
//find how many times the largest value is repeated
for (int i = 0; i<num; i++) {
if (arr[i] == qwer) {
howmany += 1;
}
}
cout << howmany;
return 0;
}
My code gives the correct output for the test case mentioned above. However, for the following test cases, the output is incorrect:
10
18 90 90 13 90 75 90 8 90 43
For this test case, my program outputted 6, instead of 5.
10
44 53 31 27 77 60 66 77 26 36
For this test case, my program outputted 3, instead of 2.
What is wrong with my code?
Thanks in advance.
A:
The problem with your code is that your "find largest value" loop replaces the first element of your array with the highest value found. When you loop back through the array in your "find how many times" loop, you have an additional instance of the high value, and thus get back 6 instead of 5.
Your array when you input:
18 90 90 13 90 75 90 8 90 43
Your array after the "find largest value" loop has completed:
90 90 90 13 90 75 90 8 90 43
The reason this worked on your first example is that the highest value is the first element. Therefore, you are not replacing the first element with anything and so not creating an additional instance in your array. If you were to try an array of 10 with the highest value first, you would get the correct number output.
Also, you could change the loop to this:
qwer = arr[0];
//find the largest value
for (int i = 1; i<num; i++) {
if (qwer < arr[i]) {
qwer = arr[i];
}
}
This would remove the problem that you are seeing with the inconsistency between your examples.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to install Backtrack 5 tool?
I have a problem with a BackTrack 5 installation. I just want to install the BackTrack 5 tools on my Ubuntu 11.10. Can it run on Ubuntu 11.10?
A:
You don't install BackTrack on another OS.
You either run it as a LiveCD, or you install it on a disk as the operating system. I assume you want to keep your existing operating system (your Ubuntu install) so don't try and install BackTrack over it; run it as a LiveCD or install it on another disk or partition.
BackTrack on a USB stick works perfectly for penetration testing. I have one USB stick just for this purpose. You will need to ensure your machine can boot from USB (set this in the BIOS) and that the install you have on the stick is a Live install (i.e. bootable); it should then boot to BackTrack without any problem.
Update: the BackTrack FAQ says this about using BackTrack tools in Ubuntu and vice versa:
Why cant I just add the Backtrack repositories to my Ubuntu install or
the Ubuntu repositories to my Backtrack install ?
We highly recommend against this action because Backtrack tools are
built with many custom features, libraries and kernel. We have no way
of knowing how they will perform on a non Backtrack distribution, plus
you will very quickly break your install. Also if you chose to add the
ubuntu repositories to your Backtrack install, you will most certainly
break your entire Backtrack install very quickly. We do a lot of
testing to ensure that all packages in our repo will work together
without causing problems. If you decide on this course of action you
do so entirely at your own risk and the backtrack team will not offer
any support in any way.
| {
"pile_set_name": "StackExchange"
} |
Q:
Where is the core dump file written for a Spring Boot application
I am starting a Spring Boot application in Windows using winsw and after it crashes I can't find the Java core dump file anywhere.
Where will the Java core dump file be located?
Thx.
A:
The question of where Java core dumps are located on Windows gets a little complicated. The commonest answers are:
$JAVA_HOME/bin
The current working directory of the process
The crash log file is typically named hs_err_pid<PID>.log.
However, rather than guessing you can tell Java where to write a dump file.
In Oracle/HotSpot JVMs the relevant argument for heap dumps is -XX:HeapDumpPath, used together with -XX:+HeapDumpOnOutOfMemoryError. For example:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps/directory/java_pid<pid>.hprof
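For the hs_err crash log itself (as opposed to heap dumps), HotSpot JVMs also accept -XX:ErrorFile, where %p expands to the process id; this flag is not in the original answer but is standard HotSpot:
-XX:ErrorFile=/path/to/dumps/directory/hs_err_pid%p.log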
IBM provides its own flavour: -Xdump. For example
-Xdump:heap:label=/path/to/dumps/directory/heapdump.%Y%m%d.%H%M%S.%pid.%seq.dump
-Xdump:java:label=/path/to/dumps/directory/core.%Y%m%d.%H%M%S.%pid.%seq.dump
And if you run with -Xdump:what then a log event will be written to STDOUT on startup showing you the various dump parameters you have chosen.
| {
"pile_set_name": "StackExchange"
} |
Q:
Exp:resso Store + UPS Shipping
We're using Store 2.0.1 on EE 2.7.2 and trying to figure out how to get the UPS shipping extension working. I've read and re-read the docs but I'm clearly missing something. :/
Docs say:
The UPS Shipping extension will automatically add shipping method
options to your checkout. To display the shipping options inside your
checkout tag, use the {field:shipping_method} variable.
So, we have {field:shipping_method} in our {exp:store:checkout} tag but all I see is a drop-down with no options. We don't have anything set up in the Shipping Methods of Store's settings. Have I misunderstood the docs?
A:
I had the exact same issue and if it was not for Angie, I would have never figured it out.
"The solution for us ended up being ridiculously simple. Under the UPS Settings (in Extensions), make sure that the Source Country is correct. I had something like "United States" but it should be US."
I had exactly that, "United States", and changing it fixed the problem.
| {
"pile_set_name": "StackExchange"
} |
Q:
Reference different schema using "alter session set current_schema" inside a package
Is it possible to do
alter session set current_schema=MySchema;
inside a package?
Our ASP.NET web application calls Oracle packages. We'd like to connect to the database with an Oracle user that is not the owner of MySchema. For that, we grant execute permission on MyPackage to Other_User.
Example:
grant execute on MySchema.MyPackage to Other_User
But when the web app connects to Oracle and tries to execute the stored procedures of MyPackage, it gets errors because the tables don't belong to Other_User.
One way to avoid the errors is creating synonyms, but we would prefer to use
alter session set current_schema=MySchema;
if possible, inside the package.
EDIT: When trying to put "alter session" in the package, compilation fails.
A:
You cannot use DDL statements (which ALTER SESSION is) directly in PL/SQL.
You need to use an EXECUTE IMMEDIATE:
execute immediate 'alter session set current_schema=MySchema';
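Placed inside the package, a minimal sketch would look like this (the procedure name is illustrative):
CREATE OR REPLACE PACKAGE BODY MyPackage AS
  PROCEDURE MyProcedure IS
  BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = MySchema';
    -- unqualified table names in subsequent SQL now resolve against MySchema
  END MyProcedure;
END MyPackage;
/
Note that the setting applies to the whole session, not just the procedure, so it stays in effect after the call returns.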
| {
"pile_set_name": "StackExchange"
} |
Q:
Biholomorphic maps from unit disc
Let $f$ be a biholomorphic map from the unit disc onto some $D \subset \overline{\mathbb{C}}$ (considered as the Riemann sphere, so the map is holomorphic despite the pole at $0$) with
$$f(z)=\frac{1}{z}+c_1z+c_2z^2+\cdots$$
What does the inequality
$$\sum n |c_n|^2 \leq 1$$
mean geometrically?
A:
The complement of $D$ is a compact subset of the plane. Its area can be computed in terms of $f$, using Green's formula. This computation yields
$$\text{area of $\mathbb C\setminus D$} = \pi-\pi\sum_{n=1}^\infty n|c_n|^2$$
Therefore, the inequality $\sum_{n=1}^\infty n|c_n|^2 \le 1$ expresses the fact that area cannot be negative. This is why the result is known as the Area Theorem; the wiki has the detailed computation to which I alluded above.
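In outline (my sketch of that computation): the area enclosed by the image of the circle $|z|=r$ is given by the contour-integral form of Green's formula, and expanding the series and letting $r \to 1$ yields
$$\frac{1}{2i}\oint_{|z|=r}\overline{f(z)}\,f'(z)\,{\rm d}z \;\longrightarrow\; \pi\Bigl(1-\sum_{n=1}^\infty n|c_n|^2\Bigr),$$
the cross terms dying off by orthogonality of the $e^{in\theta}$.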
| {
"pile_set_name": "StackExchange"
} |
Q:
Outputting modified textContent
I have an SVG as a string and I'm doing some modifications to it. I'm outputting the SVG as text into targetDiv.
<html lang="en">
<head>
<title>Output Modified</title>
</head>
<body>
<div id="targetDiv"></div>
<script>
var viewText = '<svg width="400" height="100"><rect width="400" height="100" style="fill:rgb(0,0,255);stroke-width:10;stroke:rgb(0,0,0)" /></svg>';
var rect = document.getElementsByTagName("rect");
for (var i = 0; i < rect.length; i++) {
//do modifications to rect (e.g. height="200")
}
document.getElementById('targetDiv').textContent = viewText;
</script>
</body>
</html>
At the moment the output is the same as viewText. My question is: how can I output the modified SVG markup?
A:
You can use a temp element to modify the contents
var viewText = '<svg width="400" height="100"><rect width="400" height="100" style="fill:rgb(0,0,255);stroke-width:10;stroke:rgb(0,0,0)" /></svg>';
var tmp = document.createElement('div');
tmp.innerHTML = viewText;
var rect = tmp.getElementsByTagName("rect");
for (var i = 0; i < rect.length; i++) {
rect[i].style.height = '200px'
}
document.getElementById('targetDiv').innerText = tmp.innerHTML;
<div id="targetDiv"></div>
| {
"pile_set_name": "StackExchange"
} |
Q:
Passing attributes of one class to another
When the program runs, I get the error AttributeError: 'BD' object has no attribute 'entry_description'. How can this be fixed?
class Child(tk.Toplevel):
def __init__(self):
super().__init__(root)
self.init_child()
def init_child(self):
BD().create()
self.entry_description = ttk.Entry(self)
self.entry_description.place(x=200,y=50)
self.entry_money = ttk.Entry(self)
self.entry_money.place(x=200,y=110)
self.combobox = ttk.Combobox(self, values=(u'Доходы',u'Расходы'),state='readonly')
self.combobox.current(0)
self.combobox.place(x=200,y=80)
btn_add = tk.Button(self, text='Добавить', command = BD().add_item())
btn_add.place(x=220, y=170)
btn_add.bind('<Button-1>')
btn_cancel = tk.Button(self, text='Отменить', command=lambda: self.destroy())
btn_cancel.place(x=300, y=170)
btn_cancel.bind('<Button-1>')
self.grab_set()
self.focus_set()
class BD:
def create(self):
conn = sqlite3.connect('finance.db')
cursor = conn.cursor()
cursor.execute("""CREATE TABLE IF NOT EXISTS albums (ID integer primary key,
description text,
costs text,
total real)""")
conn.commit()
def add_item(self):
cursor = sqlite3.connect('finance.db').cursor()
cursor.execute("""INSERT INTO albums(description, costs, total) VALUES (?, ?, ?)""",
(self.entry_description.get(), self.combobox.get(), self.entry_money.get()))
A:
It's quite simple: in order to get the data to insert into the database, you have to pass it into the BD object. The add function was referring to attributes that do not exist on that class.
Here is the code with the errors fixed:
from tkinter import StringVar
class Child(tk.Toplevel):
def __init__(self):
super().__init__(root)
self.init_child()
def init_child(self):
BD().create()
self.description = StringVar()
self.entry_description = ttk.Entry(self, textvariable=self.description)
self.entry_description.place(x=200,y=50)
self.money = StringVar()
self.entry_money = ttk.Entry(self, textvariable=self.money)
self.entry_money.place(x=200,y=110)
self.combobox = ttk.Combobox(self, values=(u'Доходы',u'Расходы'),state='readonly')
self.combobox.current(0)
self.combobox.place(x=200,y=80)
btn_add = tk.Button(
self,
text='Добавить',
command=lambda: BD().add_item(
self.description.get(),
self.combobox.get(),
self.money.get()
)
)
btn_add.place(x=220, y=170)
btn_add.bind('<Button-1>')
btn_cancel = tk.Button(self, text='Отменить', command=lambda: self.destroy())
btn_cancel.place(x=300, y=170)
btn_cancel.bind('<Button-1>')
self.grab_set()
self.focus_set()
class BD:
def create(self):
conn = sqlite3.connect('finance.db')
cursor = conn.cursor()
cursor.execute("""CREATE TABLE IF NOT EXISTS albums (ID integer primary key,
description text,
costs text,
total real)""")
conn.commit()
def add_item(self, entry_description, combobox, entry_money):
conn = sqlite3.connect('finance.db')
cursor= conn.cursor()
cursor.execute("""INSERT INTO albums(description, costs, total) VALUES (?, ?, ?)""",
(entry_description, combobox, entry_money))
conn.commit()
| {
"pile_set_name": "StackExchange"
} |
Q:
Fourier Transform Of $h(x,y)=A\cdot e^{-2\pi^2(x^2+y^2)}$
Prove: $$h(x,y)=A\cdot e^{-2\pi^2(x^2+y^2)}$$ is $$H(u,v)=\frac{A}{\sqrt{2}}\cdot e^{-\frac{u^2+v^2}{2}}$$
Using the Fourier transform definition
$$F(u,v)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)e^{-i2\pi (ux+vy)}dxdy$$
$$H(u,v)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}A\cdot e^{-2\pi^2(x^2+y^2)}e^{-i2\pi (ux+vy)}dxdy$$
$$H(u,v)=A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-2\pi^2(x^2+y^2)}e^{-i2\pi (ux+vy)}dxdy$$
$$H(u,v)=A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-2\pi^2x^2-2\pi^2y^2-i2\pi ux -i2\pi vy}dxdy$$
How should I group $-2\pi^2x^2-2\pi^2y^2-i2\pi ux -i2\pi vy$ in order to integrate?
A:
Complete squares
\begin{align}
-2\pi^2 x^2 - 2\pi^2y^2 - i2\pi u x - i 2\pi v y &=
-2\pi^2\left(x + \frac{i u}{2\pi}\right)^2 - \frac{u^2}{2}
-2\pi^2\left(y + \frac{i v}{2\pi}\right)^2 - \frac{v^2}{2}
\end{align}
So the integral becomes
$$
H(u, v) = A e^{-u^2/2 - v^2/2}\left(\int_{-\infty}^{+\infty} e^{-2\pi^2(x + iu/(2\pi))^2}{\rm d}x\right)\left(\int_{-\infty}^{+\infty} e^{-2\pi^2(y + iv/(2\pi))^2}{\rm d}y\right)
$$
The integrals are fairly easy to calculate (shift the contour back to the real axis):
$$
\int_{-\infty}^{+\infty} e^{-2\pi^2(x + iu/(2\pi))^2}{\rm d}x = \frac{1}{\sqrt{2\pi}}
$$
Putting the pieces together gives $H(u,v) = \frac{A}{2\pi}\,e^{-(u^2+v^2)/2}$; with the transform convention stated in the question, the $\frac{A}{\sqrt{2}}$ prefactor in the problem statement appears to be a typo.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to use c program directly in any web server?
I have a small program written in C and I am planning to convert it to a web service module. Is there any web server (Linux or Windows platform) where I can use the C program directly? I googled and found that some of the web servers which support C code are Apache, Mongoose, etc., but I don't know how to implement it.
A:
You are looking for CGI. See the docs for more info:
http://httpd.apache.org/docs/2.0/howto/cgi.html
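For reference, a minimal CGI program in C (a sketch: compile it and place the binary in Apache's configured cgi-bin directory) only needs to print a header line, a blank line, and then the body:
#include <stdio.h>

int main(void) {
    /* CGI protocol: headers first, then a blank line, then the response body */
    printf("Content-Type: text/plain\r\n\r\n");
    printf("Hello from C!\n");
    return 0;
}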
| {
"pile_set_name": "StackExchange"
} |
Q:
How to append NSData to NSData in Objective-C?
I am going to append one NSData object to another in Objective-C.
How can I do it?
A:
NSMutableData *first_data = [NSMutableData dataWithContentsOfURL:self.firstURL];
NSMutableData *second_data = [NSMutableData dataWithContentsOfURL:self.secondURL];
[first_data appendData:second_data];
[first_data writeToURL:url atomically:YES];
| {
"pile_set_name": "StackExchange"
} |
Q:
Generics adding subclasses to array
I have a class B which extends A. A stores a list of AItem and B stores a list of BItem.
In A I have an ArrayList which uses <? extends AItem>.
I assume this means that I can use this ArrayList for objects of any type that extends AItem.
So in my B class I have a method add(..) which adds a BItem to the items.
I assumed this would work because the items ArrayList can hold a list of any object that extends from AItem.
import java.util.ArrayList;
class A {
public ArrayList<? extends AItem> items;
}
public class B extends A{
public void add(BItem i) {
this.items.add(i); //compile error here
}
}
//items :
class AItem {}
class BItem extends AItem{}
How would I get this to work?
My compile error looks like this :
The method add(capture#2-of ? extends AItem) in the type
ArrayList is not applicable for the
arguments (BItem)
A:
You probably don't need to use generics here. Having the list of type AItem in class A should be fine (I would also declare it of type List instead of ArrayList for best practices):
class A {
public List<AItem> items;
}
class B extends A {
public void add(BItem i) {
this.items.add(i);
}
}
The problem with ? extends AItem is that the compiler cannot guarantee that the actual element type is AItem or BItem. It could be another subclass, say CItem, in which case type safety would be broken.
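To see why, consider this sketch using the classes from the question (CItem being the extra subclass mentioned above): the assignment below is perfectly legal, so the compiler has to assume the worst and reject the add.
class CItem extends AItem {}

ArrayList<? extends AItem> items = new ArrayList<CItem>(); // legal
// items.add(new BItem()); // compile error: a BItem is not a CItem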
There is another approach to designing the class hierarchy with generics, in which you set the type parameter in the parent class and set it to the appropriate subclass (BItem) in the extended class B:
class A<T extends AItem> {
public ArrayList<T> items;
}
class B extends A<BItem>{
public void add(BItem i) {
this.items.add(i);
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
In mathematics, does a point have a shape?
In mathematics, is a point a sphere or some other shape? Does a point have a shape?
In physics is there any difference?
A:
Define a point as an address in space. A point then has zero width, because any continuous region of size greater than zero is, by the definition of continuity, a set of multiple points. This allows a point to distinguish one location from any other regardless of how closely it is inspected. A quoted definition reads "...[points] do not have volume, area, length..."
| {
"pile_set_name": "StackExchange"
} |
Q:
Error logic at reading xml document with xquery?
I have a lot of XML documents that contain <h1>text</h1>.
for example :
<p>
<h1>
text-1
</h1>
a lot text
<h1>
text-2
</h1>
</p>
I run this code:
for $p in (1 to 351)
return <a href="{$p}">{data(doc(concat("/db/INDEX/",$p,".html"))//h1)}</a>
The result is this (the href value is the page number):
<a href="2">
    text-1
    text-2
</a>
<a href="3"/>
Notice: when one page has two or more <h1> tags, all the texts show up in a single <a> tag, but I need this:
<a href="2">
text-1
</a>
<a href="2">
text-2
</a>
And when a page has no <h1> tag, an empty <a> should still be shown.
A:
How about:
for $h in collection("/db/INDEX")//h1
let $i := replace(document-uri(root($h)), ".*/(.*)\.html", "$1")
return
<a href="{$i}">{string($h)}</a>
| {
"pile_set_name": "StackExchange"
} |
Q:
How to compare Enums in TypeScript
In TypeScript, I want to compare two variables containing enum values. Here's my minimal code example:
enum E {
A,
B
}
let e1: E = E.A
let e2: E = E.B
if (e1 === e2) {
console.log("equal")
}
When compiling with tsc (v 2.0.3) I get the following error:
TS2365: Operator '===' cannot be applied to types 'E.A' and 'E.B'.
Same with ==, !== and !=.
I tried adding the const keyword but that seems to have no effect.
The TypeScript spec says the following:
4.19.3 The <, >, <=, >=, ==, !=, ===, and !== operators
These operators require one or both of the operand types to be assignable to the other. The result is always of the Boolean primitive type.
Which (I think) explains the error. But how can I get round it?
Side note
I'm using the Atom editor with atom-typescript, and I don't get any errors/warnings in my editor. But when I run tsc in the same directory I get the error above. I thought they were supposed to use the same tsconfig.json file, but apparently that's not the case.
A:
Well I think I found something that works:
if (e1.valueOf() === e2.valueOf()) {
console.log("equal")
}
But I'm a bit surprised that this isn't mentioned anywhere in the documentation.
A:
There is another way: if you don't want the generated JavaScript code to be affected in any way, you can use a type cast:
let e1: E = E.A
let e2: E = E.B
if (e1 as E === e2 as E) {
console.log("equal")
}
In general, this is caused by control-flow based type inference. With the current TypeScript implementation, it's turned off whenever a function call is involved, so you can also do this:
let id = a => a
let e1: E = id(E.A)
let e2: E = id(E.B)
if (e1 === e2) {
console.log('equal');
}
The weird thing is, there is still no error if the id function is declared to return precisely the same type as its argument:
function id<T>(t: T): T { return t; }
A:
I was able to compare two enums with this:
if (product.ProductType &&
(product.ProductType.toString() == ProductTypes[ProductTypes.Merchandises])) {
// yes this item is of merchandises
}
with ProductTypes declared as export enum ProductTypes { Merchandises, Goods, ... }
| {
"pile_set_name": "StackExchange"
} |
Q:
Set splashscreen delay on Meteor Cordova
How can I set the splashscreen delay on a meteor project?
I tried
App.setPreference('SplashScreenDelay', '1000');
but this is not working. The default delay is now approx 25 seconds.
A:
The correct fix in Meteor is to add the following code to the mobile-config.js file to set the splash screen delay:
App.setPreference('AutoHideSplashScreen', 'true');
App.setPreference('SplashScreenDelay', '5000'); //5 seconds delay
| {
"pile_set_name": "StackExchange"
} |
Q:
GTK - multiple objects signal connect
I have a lot of buttons in my GTK program and they all have the same callback function. How can I avoid the duplication? For example:
g_signal_connect(G_OBJECT(button1), "clicked", G_CALLBACK(button_clicked), data);
g_signal_connect(G_OBJECT(button2), "clicked", G_CALLBACK(button_clicked), data);
g_signal_connect(G_OBJECT(button3), "clicked", G_CALLBACK(button_clicked), data);
g_signal_connect(G_OBJECT(button4), "clicked", G_CALLBACK(button_clicked), data);
and do something like this
g_signal_connect(G_OBJECT(four_buttons), "clicked", G_CALLBACK(button_clicked), data);
How can I do it? Thanks in advance
A:
Use a loop:
GtkButton *buttons[] = { button1, button2, button3, button4 };
for (int index = 0; index < 4; index++)
g_signal_connect(G_OBJECT(buttons[index]), "clicked", G_CALLBACK(button_clicked), data);
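If the number of buttons may change, GLib's G_N_ELEMENTS macro avoids hard-coding the count (a small variation on the loop above):
for (guint i = 0; i < G_N_ELEMENTS(buttons); i++)
    g_signal_connect(G_OBJECT(buttons[i]), "clicked", G_CALLBACK(button_clicked), data);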
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I find the term of a recursive sequence?
I have the following sequence $\{a_n\}$:
$a_1 =-1$
$a_k a_{k+1} = - a_k - \frac{1}{4}$
How can I find $a_1 a_2 \cdots a_n$?
A:
$a_{k+1} = -\frac{4a_k+1}{4a_{k}}$
$a_2 = -3/4$
$a_3 = -2/3=-4/6$
$a_4 = -5/8$
$a_5 = -3/5 = -6/10$
$a_6 = -7/12$
So, we prove by induction that $a_n = -(n+1)/(2n)$.
Base case: $a_1 = -1$
IH: $a_{k} = -(k+1)/(2k)$
Then, $a_{k+1} = -\frac{4a_k+1}{4a_{k}} = -\frac{-2(k+1)/k + 1}{-2(k+1)/k} = -\frac{-2k-2+k}{-2(k+1)} = -((k+1)+1)/(2(k+1))$
Take it from here.
A:
$$a_{k+1} = \frac{-4a_k - 1}{4a_k + 0}$$
Let $p_k / q_k = a_k$ :
$$\frac{p_{k+1}}{q_{k+1}} = \frac{-4 \frac{p_k}{q_k} - 1}{4\frac{p_k}{q_k} + 0} = \frac{-4 p_k - 1q_k}{4p_k + 0q_k}$$
$$\begin{align}
%
\begin{bmatrix} p_{k+1} \\ q_{k+1} \end{bmatrix}
%
& = \begin{bmatrix} -4 & -1 \\ 4 & 0 \end{bmatrix} \begin{bmatrix} p_{k} \\ q_{k} \end{bmatrix}
\\ %
& = \begin{bmatrix} -4 & -1 \\ 4 & 0 \end{bmatrix}^k \begin{bmatrix} p_{1} \\ q_{1} \end{bmatrix}
\\ %
& \text{ Jordan Decomposition...}
\\ %
& = \left(
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}^{-1}
\right)^k
\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\\ %
& =
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 0 & -2 \end{bmatrix}^k
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}^{-1}
\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\\ %
& =
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}
\begin{bmatrix} (-2)^k & k~(-2)^{k - 1} \\ 0 & (-2)^k \end{bmatrix}
\begin{bmatrix} -2 & 1 \\ 4 & 0 \end{bmatrix}^{-1}
\begin{bmatrix} -1 \\ 1 \end{bmatrix}
\\ %
& =
(-2)^k
\begin{bmatrix} -k/2 - 1 \\ k + 1\end{bmatrix}
\\ %
\end{align}$$
So
$$a_{k+1} = \frac{p_{k+1}}{q_{k+1}} = \frac{-k/2 - 1}{k + 1}$$
$$a_k = \frac{-k - 1}{2k}$$
| {
"pile_set_name": "StackExchange"
} |
Q:
How to restructure a dataframe in pandas?
I have a dataframe that shows grouped information: the columns are a MultiIndex holding the months of the year, sub-grouped by year, and the rows are the different centers. I want to unpack the MultiIndex, so to speak, so that the result is a flat table with several index columns.
This is the table I currently have, and what I want is for the final dataframe to show, for example:
1/1/2017/2082949.0
1/1/2018/2204553.0
1/1/2019/2726634.0
1/1/2020/2744176.0
1/1/%/1.0
1/2/2017/1673355.0
...
...
In a format similar to that, i.e. without grouping when displaying the table, so that it is easier to work with from Excel. I have used unstack() but I cannot get the desired format. I don't know how I could share the dataframe; if anyone needs it and tells me how, I will send it.
A:
To remove a level you can use reset_index:
df.reset_index(inplace=True)
df
(assuming the DataFrame is called df)
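Since the question describes a MultiIndex on the columns (months sub-grouped by year) rather than on the rows, stack() may be the missing step: it moves column levels down into the row index, after which reset_index() flattens everything. A sketch, assuming two column levels:
flat = df.stack(level=[0, 1]).reset_index()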
If that doesn't solve it, please share a minimal, complete and verifiable example.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to include a Qt Core c++ class for use in QtScript user function in SQLiteStudio 3.1.1?
I'm trying to write a user-defined function within SQLiteStudio (v3.1.1). This function needs to decode a field stored in base64.
I think I can achieve what I want to using the QByteArray class from Qt Core like so:
QByteArray text = QByteArray::fromBase64(base64EncodedString);
return text.data();
But how can I include/import the QByteArray class, so I can access its methods inside my user-defined function? Is this even possible within SQLiteStudio?
A:
I'm not sure if it is possible in SQLiteStudio, but you should be able to use SQLiteStudio's own base64_decode(arg) function.
| {
"pile_set_name": "StackExchange"
} |
Q:
Is it OK to use motor oil as bar oil in a chainsaw?
Someone told me that you can use motor oil in your chainsaw instead of bar oil. Is this a good idea? Will it cause problems over time?
A:
It might be OK for a bit but I probably wouldn't do it. You could look up your owner's manual to see if they say anything. If you must use the chainsaw and have nothing else on hand, surely motor oil is better than nothing. I think bar oil is stickier than regular motor oil to prevent splattering as much.
FWIW I like to buy the "biodegradable" chain oil, since it invariably ends up all over the place.
A:
You shouldn't run anything but bar oil on a chainsaw bar, but not because of the saw (even though bar oil sticks to the bar and lubricates the chain better) - it's the environment and your lungs. Engine oils usually have one or two zinc thiophosphate compounds added to them. It's not good for you to breathe these in in aerosol form (like when grinding galvanized metal), but then again, it's also not good to breathe them in after combustion in an older engine that burns oil. The two-stroke oil your saw uses in the fuel mix doesn't have this additive.
A:
You will spray motor oil everywhere, and you will run out of oil quickly, and then burn up the bar and chain.
That being said, I sometimes use a mix of bar and gear oil or motor oil in winter, when it's so cold outside that the bar oil won't flow quickly enough. On such days, I would only mix about 1 part motor oil with 10 parts bar oil (or 1 part gear oil with 5 parts bar oil). The saw will warm up eventually and the oil will flow faster, so I only do that for the first tank of oil.
| {
"pile_set_name": "StackExchange"
} |
Q:
Java - FlowPanel - Use a parent variable from a child
public class PageIndex extends FlowPanel {
private PageHeader header;
private PageCenter center;
private PageFooter footer;
public PageIndex() {
this.header=new PageHeader();
this.add(header);
this.center=new PageCenter();
this.add(center);
this.footer=new PageFooter();
this.add(footer);
}
}
public class PageCenter extends FlowPanel {
private PageMenu menu;
private PageContent content;
public PageCenter() {
this.setStyle("center");
this.menu=new PageMenu(content);
this.add(menu);
this.content=new PageContent();
this.add(content);
}
}
public class PageMenu extends FlowPanel {
private PageContent content;
private PageMenuLogin login;
private PageMenuSearch search;
private PageMenuOffers offers;
private PageMenuStudents students;
private PageMenuShopping shopping;
private PageMenuEvents events;
public PageMenu (PageContent content) {
this.content=content;
this.login=new PageMenuLogin();
this.add(login);
this.search=new PageMenuSearch();
this.add(search);
this.offers=new PageMenuOffers();
this.add(offers);
this.students=new PageMenuStudents();
this.add(students);
this.shopping=new PageMenuShopping();
this.add(shopping);
this.events=new PageMenuEvents();
this.add(events);
// IF I WRITE THIS I GET AN org.apache.jasper.JasperException: java.lang.NullPointerException
this.content.daghine();
}
}
I need to manage the content instance, but it is not visible. Sorry, I'm a bit rusty :)
A:
How do you need to manage the variable? You're inside the object; that should allow you to do anything you need. (Incidentally, the NullPointerException happens because PageCenter passes content to the PageMenu constructor before content has been assigned, so the menu receives null.) If you need access to the parent object, do this:
public class PageIndex extends FlowPanel {
protected final PageContent content;
...
public PageIndex() {
content = new PageContent(this);
this.add(content);
...
}
public PageContent getContent() {
return this.content;
}
}
public class PageContent extends FlowPanel {
protected final PageIndex index;
public PageContent(PageIndex index) {
this.index = index;
}
public PageIndex getIndex() {
return this.index;
}
public void exampleFunction() {
createLabel().setLabel("HOLAAA").setStyle("content_title");
}
}
public class PageMenuLogin extends FlowPanel {
protected final PageIndex index;
public PageMenuLogin(PageIndex index) {
this.index = index;
index.getContent().exampleFunction();
}
public PageIndex getIndex() {
return this.index;
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
const_cast with two levels pointers
I want to do this conversion using a C++-style cast; it works the C way, but it fails when I try the C++ way.
It works!
void req_password(const void *data, size_t datalen)
{
char *password_old = ((char **) data)[0];
char *password_new = ((char **) data)[1];
...
}
It fails
void req_password(const void *data, size_t datalen)
{
char *password_old = (const_cast<char **>(data))[0];
char *password_old = (const_cast<char **>(data))[1];
...
}
error:
error: invalid const_cast from type 'const void*' to type 'char**'
So my question is: how can I do this conversion the C++ way?
PS: This code is part of an API; I can't control the input data.
A:
Don't.
If you are being given immutable data, then you are being given immutable data and that is the end of it!
First, here's what I suggest for maximum safety. Coercing data into its real type is a little tricky, alas:
void req_password(const void* data, size_t datalen)
{
const char* password_old = (reinterpret_cast<const char* const*>(data)[0]);
const char* password_new = (reinterpret_cast<const char* const*>(data)[1]);
// ...
}
(I've actually added some constness in the above, as it seems to be the intent of having const void* in the first place.)
But, if you really want the strings to be mutable, then this is fine too:
void req_password(const void* data, size_t datalen)
{
char* password_old = (reinterpret_cast<char* const*>(data)[0]);
char* password_new = (reinterpret_cast<char* const*>(data)[1]);
// ...
// More obvious when you recall that `const void*` is actually `void const*`;
// So:
// void const*
// becomes:
// char* const*
}
Notice how you don't even need const_cast here, because you're not modifying the thing that data points to: you are dereferencing it and getting its pointee.
Of course, ideally, data would point to a const std::string instance.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to implement a look up table for a range of values of two columns
Edit: updated the two tables to sync with each other. These are the only tables involved. Simply put, I need to take a count of all the unique records and display it in a 2D format.
I have a table of optical lenses, a sample of which follows:
Spherical|Cylindrical
---------------------
0 | 0.5
0.25 | 0.75
0.25 | 0.5
0 | 0
0 | 0.25
0 | 0.5
0.25 | 0.75
0.25 | 0.5
0.5 | 0
0.75 | 0
0.75 | 0
0.5 | 0.25
0.5 | 0.75
0.75 | 0.25
0.5 | 0.75
0.75 | 0.75
0.75 | 0.5
0.75 | 0.5
and so on...
I want to display a bird's eye view of the count of lenses in each combination in a 2d format this way:
Spherical/Cylindrical|0|0.25|0.5|0.75|... upto 8 in steps of 0.25
-----------------------------------------
0 |1| 1 | 2 | 1 |
0.25 |0| 0 | 2 | 2 |
0.5 |1| 2 | 0 | 2 |
0.75 |2| 1 | 2 | 1 |
...
upto 30 in steps of 0.25
How do I implement this in C#.NET with SQL Server 2008? Which would be the best approach?
I have a couple of thoughts:
Generate a view at run time using some special query and format it in 2D
Create a 2D table of sorts (in the above format) and update the count each time the lens table is updated.
Please give me your thoughts and advice on this. Thanks!
A:
here is a sample query for how to make a view:
--build table variable and sample data
DECLARE @Optical table (Spherical numeric(4,2),Cylindrical numeric(4,2))
INSERT INTO @Optical VALUES ( 0, 0.5)
INSERT INTO @Optical VALUES (0.25,0.75)
INSERT INTO @Optical VALUES (1.25, 0.5)
INSERT INTO @Optical VALUES (1.25, 0.5)
INSERT INTO @Optical VALUES ( 0, 0)
--query to use as a basis for the view
;with AllSpherical AS --this recursive CTE builds the 121 rows for: 0.00 to 30.0
(
SELECT convert(numeric(4,2),0.0) AS Spherical
UNION ALL
SELECT convert(numeric(4,2),Spherical+0.25)
FROM AllSpherical
WHERE Spherical<=29.75
)
SELECT
s.Spherical
,SUM(CASE WHEN o.Cylindrical=0.00 THEN 1 ELSE 0 END) AS c_000
,SUM(CASE WHEN o.Cylindrical=0.25 THEN 1 ELSE 0 END) AS c_025
,SUM(CASE WHEN o.Cylindrical=0.50 THEN 1 ELSE 0 END) AS c_050
,SUM(CASE WHEN o.Cylindrical=0.75 THEN 1 ELSE 0 END) AS c_075
,SUM(CASE WHEN o.Cylindrical=1.00 THEN 1 ELSE 0 END) AS c_100
,SUM(CASE WHEN o.Cylindrical=1.25 THEN 1 ELSE 0 END) AS c_125
,SUM(CASE WHEN o.Cylindrical=1.50 THEN 1 ELSE 0 END) AS c_150
,SUM(CASE WHEN o.Cylindrical=1.75 THEN 1 ELSE 0 END) AS c_175
--... add a case for all columns
FROM AllSpherical s
LEFT OUTER JOIN @Optical o ON s.Spherical=o.Spherical
GROUP BY s.Spherical
OPTION (MAXRECURSION 120)
output:
Spherical c_000 c_025 c_050 c_075 c_100 c_125 c_150 c_175
---------- ----- ----- ----- ----- ----- ----- ----- -----
0.00 1 0 1 0 0 0 0 0
0.25 0 0 0 1 0 0 0 0
0.50 0 0 0 0 0 0 0 0
0.75 0 0 0 0 0 0 0 0
1.00 0 0 0 0 0 0 0 0
1.25 0 0 2 0 0 0 0 0
1.50 0 0 0 0 0 0 0 0
1.75 0 0 0 0 0 0 0 0
2.00 0 0 0 0 0 0 0 0
2.25 0 0 0 0 0 0 0 0
...
(121 row(s) affected)
you can build a traditional view using this query if you update the raw data much more often than you would read this view; this would be your option 1.
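as an aside, SQL Server 2008 also has a PIVOT operator that can replace the long list of CASE expressions, though you still have to type out the column list and you lose the guaranteed 121 rows the CTE provides. a sketch against a permanent Optical table, showing only the first few columns:
SELECT Spherical, [0.00], [0.25], [0.50], [0.75]
FROM (SELECT Spherical, Cylindrical FROM Optical) AS src
PIVOT (COUNT(Cylindrical) FOR Cylindrical IN ([0.00],[0.25],[0.50],[0.75])) AS pvt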
if you plan on reading this view much more than you update the raw data, consider persisting the view: Improving Performance with SQL Server 2005 Indexed Views and Creating Indexed Views. This basically materializes the view: when you insert/update/delete in the underlying table, the view's stored data is updated, much like an automatic system-level trigger would, to keep them in sync. this would be your option 2, but the system would do all the "hard" work of keeping everything in sync.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to see badge list on child metas?
On the sites I visit I can see a list of all badges by: click on the question mark, visit help center, click on View a full list of badges you can earn. However if I go to the corresponding child meta and follow the same steps I see the list for the main site, not the child meta. Is this a bug, a feature or a user error?
In case you are wondering this is driven by curiosity about the last column shown which gives the number of times each badge has been awarded so if there is another way of accessing that then that would solve my particular issue. I do know that I can see which badges I have on my profile and also see the progress towards a proper subset of them by clicking on the little cogwheel.
A:
This happens because child metas don't have their own Help Center; they share it with the main site. Once you click on the question mark, you're effectively on the main site.
Alternatives are to just type /help/badges/ behind the root URL of the meta site (e.g. https://meta.stackoverflow.com/help/badges/), or use the link in the achievements dialog.
| {
"pile_set_name": "StackExchange"
} |
Q:
Why is printf with a single argument (without conversion specifiers) deprecated?
In a book that I'm reading, it's written that printf with a single argument (without conversion specifiers) is deprecated. It recommends substituting
printf("Hello World!");
with
puts("Hello World!");
or
printf("%s", "Hello World!");
Can someone tell me why printf("Hello World!"); is wrong? It is written in the book that it contains vulnerabilities. What are these vulnerabilities?
A:
printf("Hello World!"); is IMHO not vulnerable but consider this:
const char *str;
...
printf(str);
If str happens to point to a string containing %s format specifiers, your program will exhibit undefined behaviour (mostly a crash), whereas puts(str) will just display the string as is.
Example:
printf("%s"); //undefined behaviour (mostly crash)
puts("%s"); // displays "%s\n"
A:
printf("Hello world");
is fine and has no security vulnerability.
The problem lies with:
printf(p);
where p is a pointer to an input that is controlled by the user. It is prone to format strings attacks: user can insert conversion specifications to take control of the program, e.g., %x to dump memory or %n to overwrite memory.
Note that puts("Hello world") is not equivalent in behavior to printf("Hello world") but to printf("Hello world\n"). Compilers usually are smart enough to optimize the latter call to replace it with puts.
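To make the danger concrete, a short sketch (user_input stands for any untrusted data):
#include <stdio.h>

int main(void) {
    char user_input[64];
    if (fgets(user_input, sizeof user_input, stdin)) {
        printf(user_input);       /* unsafe: typing "%x %x %x" dumps stack words */
        printf("%s", user_input); /* safe: the input is only ever treated as data */
    }
    return 0;
}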
A:
Further to the other answers, printf("Hello world! I am 50% happy today") is an easy bug to make, potentially causing all manner of nasty memory problems (it's UB!).
It's just simpler, easier and more robust to "require" programmers to be absolutely clear when they want a verbatim string and nothing else.
And that's what printf("%s", "Hello world! I am 50% happy today") gets you. It's entirely foolproof.
(Steve, of course printf("He has %d cherries\n", ncherries) is absolutely not the same thing; in this case, the programmer is not in "verbatim string" mindset; she is in "format string" mindset.)
| {
"pile_set_name": "StackExchange"
} |
Q:
Mongo Template : Modifying Match Operation dynamically
I have defined my match operation in mongo template as below.
MatchOperation match = Aggregation.match(new Criteria("workflow_stage_current_assignee").ne(null)
.andOperator(new Criteria("CreatedDate").gte(new Date(fromDate.getTimeInMillis()))
.andOperator(new Criteria("CreatedDate").lte(new Date(toDate.getTimeInMillis())))));
Everything is fine up to this point. However, I cannot modify this match operation using the match reference I have created. I was looking for List-like functionality wherein I could add multiple criteria clauses to an already created reference as and when they are needed - something along the lines of match.add(new Criteria).
However MatchOperation currently does not support any methods which would provide this functionality. Any help in this regard would be appreciated.
A:
Criteria is where you add new criteria; it is backed by a list.
Use the static Criteria where(String key) method to create and initialize a Criteria object.
Something like
Criteria criteria = where("key1").is("value1");
Add more criteria's
criteria.and("key2").is("value2");
To create an implicit $and and chain a nested criteria onto the existing chain, use andOperator:
criteria.andOperator(where("key3").gt(value3).lte(value4));
When you are done, just pass it to the match operation.
MatchOperation match = Aggregation.match(criteria);
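Applied to the original match operation, the whole thing can be built incrementally like this (a sketch; fromDate and toDate are assumed to be the calendars from the question):
import static org.springframework.data.mongodb.core.query.Criteria.where;

Criteria criteria = where("workflow_stage_current_assignee").ne(null);
criteria.and("CreatedDate")
        .gte(new Date(fromDate.getTimeInMillis()))
        .lte(new Date(toDate.getTimeInMillis()));
// further clauses can keep being added to the same reference:
// criteria.and("someOtherField").is(someValue);
MatchOperation match = Aggregation.match(criteria);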
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add !important to every element in a CSS file
Possible Duplicate:
Is there a shortcut for setting !important on every property in a CSS file?
I have a unique problem. In CSS, !important overrides any style already declared - but let's say I want all styles in one stylesheet to override any other stylesheet; well, normally I would put the override second in the HTML file, like this:
<link rel="stylesheet" type="text/css" href="mystyle.css">
<link rel="stylesheet" type="text/css" href="override.css">
But for some reason this isn't working. Specifically, what I am doing is adding classes that apply basic CSS styles, like this:
<div class="no-border-right float-left no-margin-top"></div>
And in my CSS:
.no-border-right {
border-right: 0px solid transparent;
}
.float-left {
float: left;
}
.no-margin-top {
margin-top: 0;
}
But it's still not overriding. I would just add !important to every declaration, but the problem is, I've already structured the 14GB library. (I work on this only when I'm bored... hehe :P) So I was wondering how to add !important to every element. Like:
* {
...!important;
}
Maybe not even that... Any ideas?
A:
Probably your new selectors don't have enough priority (specificity) over the ones you have in mystyle.css?
CSS/Training/Priority level of selector
CSS: Understanding the selector's priority / specificity
!important overrides these calculations, but it's a hack that should be avoided if possible in favor of the standard way.
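If you'd rather fix it the standard way, raising the specificity of the override rules is usually enough. One common trick is repeating the class in the selector, shown here for one of your classes:
/* .no-margin-top.no-margin-top beats a plain .no-margin-top from any other stylesheet */
.no-margin-top.no-margin-top {
    margin-top: 0;
}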
| {
"pile_set_name": "StackExchange"
} |
Q:
Ajax does not process the form that was generated by Javascript
I am having a problem where, when I create an HTML form with JavaScript, the Ajax does not process it at all and will just send me to the blank action PHP page. This page is blank because I added a check to see if the form fields are empty or not.
Form created with Javascript:
function createForm() {
var f = document.createElement('form');
f.setAttribute('method','post');
f.setAttribute('action','action.php');
f.setAttribute('class', 'ajax');
f.innerHTML = '<span style="color: black; display: inline;">Name: <input class="popup-form-fields" id="inputName" onkeyup="validateName(); theCheck();" onkeydown="validateEmail(); theCheck();" onclick="validateName();" type="text" name="name" autocomplete="off"></span><label id="commentName" class="label1"></label>';
f.innerHTML += '<div class="space"></div>';
f.innerHTML += '<span style="color: black; display: inline;">Email address: <input class="popup-form-fields" id="inputEmail" onkeyup="validateEmail(); theCheck();" onkeydown="validateEmail(); theCheck();" onclick="validateEmail();" type="email" name="email" autocomplete="off"></span><label id="commentEmail" class="label1"></label>';
f.innerHTML += '<div class="space"></div>';
f.innerHTML += '<span style="color: black; display: inline;">Phone number: <input class="popup-form-fields" id="inputPhone" onkeyup="validatePhone(); theCheck();" onkeydown="validatePhone(); theCheck();" onclick="validatePhone();" type="tel" name="phone" autocomplete="off"></span><label id="commentPhone" class="label1"></label>';
f.innerHTML += '<div class="space"></div>';
f.innerHTML += '<span style="vertical-align: top; color: black; display: inline;">Details: <textarea class="popup-form-fields" id="inputDetail" onkeyup="validateDetail(); theCheck();" onkeydown="validateDetail(); theCheck();" onclick="validateDetail();" name="details" autocomplete="off"></textarea></span><label id="commentDetail" class="label1"></label>';
f.innerHTML += '<div class="space"></div>';
f.innerHTML += '<input id="sub" type="submit" value="Send">';
document.body.appendChild(f); // attach the created form to the document, otherwise it is discarded
}
Form created without Javascript:
<form method="POST" action="book-priv-party-complete.php" class="ajax">
<span style="color: black; display: inline;">Name: <input class="popup-form-fields" id="inputName" onkeyup="
validateName(); theCheck();" onkeydown="validateName(); theCheck();" onclick="validateName();" type="text"
name="name" autocomplete="off"></span><label id="commentName" class="label1"></label>
<div class="space"></div>
<span style="color: black; display: inline;">Email address: <input class="popup-form-fields" id="inputEmail"
onkeyup="validateEmail(); theCheck();" onkeydown="validateEmail(); theCheck();" onclick="validateEmail();" type=
"email" name="email" autocomplete="off"></span><label id="commentEmail" class="label1"></label>
<div class="space"></div>
<span style="color: black; display: inline;">Phone number: <input class="popup-form-fields" id="inputPhone"
onkeyup="validatePhone(); theCheck();" onkeydown="validatePhone(); theCheck();" onclick="validatePhone();" type=
"tel" name="phone" autocomplete="off"></span><label id="commentPhone" class="label1"></label>
<div class="space"></div>
<span style="vertical-align: top; color: black; display: inline;">Details: <textarea class="popup-form-fields"
id="inputDetail" onkeyup="validateDetail(); theCheck();" onkeydown="validateDetail(); theCheck();" onclick="
validateDetail();" name="details" autocomplete="off"></textarea></span><label id="commentDetail" class="label1
"></label>
<div class="space"></div>
<input id="sub" type="submit" value="Send">
</form>
The ajax:
$('form.ajax').on('submit', function () {
var that = $(this),
url = 'action.php',
type = 'post',
data = {};
that.find('[name]').each(function (index, value) {
var that = $(this),
name = that.attr('name'),
value = that.val();
data[name] = value;
});
$.ajax({
url: url,
type: type,
data: data,
success: function () {
removeElement();
document.getElementById('errorCode').innerHTML = "Your message was sent";
document.getElementById('errorCode').style.color = "green";
setTimeout(function () {document.getElementById('errorCode').style.display = "none";}, 2000);
alert("It worked");
},
error: function () {
removeElement();
document.getElementById('errorCode').innerHTML = "Your message was not sent";
document.getElementById('errorCode').style.color = "red";
setTimeout(function () {document.getElementById('errorCode').style.display = "none";}, 2000);
}
});
return false;
});
Is it possible that the cause of this issue is when and where the Javascript is being run?
A:
A possible reason for the submit handler not getting triggered is that you create this form element dynamically, that is, only after the initial page render is complete. When the browser initially renders your page, it creates the DOM tree, parses the JavaScript and binds all the event handlers.
So the $('form.ajax').on('submit', ...) call looks for the form element and binds the submit callback at a time when the form element might not have been created yet. That could be the reason why you don't get any response.
I would suggest reading about event delegation.
Delegated events have the advantage that they can process events from descendant elements that are added to the document at a later time. By picking an element that is guaranteed to be present at the time the delegated event handler is attached, you can use delegated events to avoid the need to frequently attach and remove event handlers.
So you basically bind listeners to a parent container that is guaranteed to be in the DOM and then just find the target of that event.
For instance suppose I want to listen to a click event of a button with id foo that I create dynamically inside a div with id container. I can listen to that click by doing this:
$( "div#container" ).on("click", function( event ) {
console.log( "clicked: " + event.target.nodeName ); //Will output button when this element is clicked.
});
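Applied to the form in your question, a minimal sketch of a delegated binding (document works as the ancestor here, though any container that already exists at load time is fine):
$(document).on('submit', 'form.ajax', function () {
    // same handler body as above: collect the [name] fields and call $.ajax()
    return false;
});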
Another really good article on event order in browsers is referenced here.
Hope it gets you started in the right direction.
| {
"pile_set_name": "StackExchange"
} |
Q:
Rename and merge [.netstandard] tags to [.net-standard]
Right now there are many different tags for .NET Standard e.g. netstandard, .net-standard and sub-tags netstandard1.5, .net-standard-1.4, netstandard1.6, .net-standard-1.5, .net-standard2.0, netstandard2
These should be renamed and consolidated accordingly e.g. netstandard and .net-standard should be consolidated under one tag (I would suggest .net-standard).
All subtags should follow a consistent naming convention. IMO .net-standard-versionnumber where version number is majorversion.minorversion i.e .net-standard-2.0
A:
Aaaand it's gone.
Disappeared:
netstandard ➡ .net-standard
netstandard1.5 ➡ .net-standard-1.5
netstandard1.6 ➡ .net-standard-1.6
netstandard2 ➡ .net-standard2.0
Still alive:
.net-standard2.0 ➡ .net-standard-2.0
That's another 132 questions.
| {
"pile_set_name": "StackExchange"
} |
Q:
Node-rsa SHA1 message signature length
I'm trying to get an sha-1 signature of a message with a private key.
var client_private = new NodeRSA(require('fs').readFileSync('../providers/sirena/keys/client_private.txt'), {signingAlgorithm: 'sha1'});
var message_signature = client_private.sign(message);
And “message_signature” appears to have 256 bytes.
In the other hand simple bash command
openssl dgst -sha1 -binary -out message.signature -sign providers/sirena/keys/client_private.txt message
returns 128 which is required by my provider.
Am I doing something wrong? What should I do to get 128-bytes signature?
Thank you!
A:
That was as simple as I thought.
In fact I had a 2048-bit private key instead of a 1024-bit one. An RSA signature is as long as the key's modulus, so a 2048-bit key yields 256-byte signatures, while a 1024-bit key yields the required 128 bytes.
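For reference, assuming the provider expects a 1024-bit key, one way to generate a matching key (this overwrites the file, so back up the old one first):
openssl genrsa -out client_private.txt 1024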
| {
"pile_set_name": "StackExchange"
} |
Q:
Tensor-flow training and Testing on different OS
Will the performance of the model be affected if I train the data on LINUX system and then use that model in the WINDOWS application or a python script?
A:
No, the model will be exactly the same. You'll only have to make sure that your TF versions on Linux and Windows are compatible ones; this is not made more difficult by the different OS, it's only a matter of which version you install on which device.
| {
"pile_set_name": "StackExchange"
} |
Q:
What are the advantages of using Apache Solr over the core search module?
The Apache Solr Search Integration module project page states:
Solr search can be used as a replacement for core content search and
boasts both extra features and better performance.
What exactly are these extra features? (Or if there are too many, in what kind of use cases would I want to use Apache Solr?)
My site is hosted on Pantheon and they provide Apache Solr at no additional charge. Would this be something I definitely want to set up? Only set up in certain cases?
A:
This article gives a thorough and straight-forward answer to your question.
Performance improvements with Apache Solr
High traffic sites running search queries against the database can start to degrade the site’s overall performance if the database becomes the bottleneck. This is also true for lower traffic sites with lots of content. Complex search queries can be slow to run.
Requirements for features such as faceted search (see below) are becoming increasingly common. This can be delivered in conjunction with Drupal core search using the Faceted Search module but the inherent scalability and performance implications are well documented by the module’s maintainers.
Other useful Apache Solr features
Apache Solr also supports indexing and searching multiple sites (imagine internal intranet site and external corporate site), indexing attachments (eg PDFs, Excel documents) and recommended content blocks driven by a node’s taxonomy. The module page and the Acquia Search overview both have a good overview of the Apache Solr features that Drupal supports.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I make a list of directories from two nested lists (parent and child)
I have a parent-list of two elements
P= ["E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin" , "E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BCAS_BD_Infrastructure"]
but first element has another list:
Cld1 = ['BGD_4_new_district', 'BGD_3_old_district', 'BGD_2_division', 'BGD_1_all', 'BGD_5_Upazilla', 'BGD_4_old_district', 'BGD_6_Union_court', 'BGD_6_Union', 'BD_exposed_coastal_area','BD_drought', 'BGD_1_River', 'BGD_1_River_detail', 'BD_international_bnd', 'BGD_1_River_1', 'BGD_7_Mauza', 'test', 'BGD_5_UpazillaAnno', 'BGD_4_new_districtAnno', 'BGD_4_new_districtAnno2']
and second element has another list:
Cld2 = ['BD_Health_Infrastructures_1', 'BD_Railway_Establishments_1', 'BGD_roads_1']
Now I want to join P with Cld1 and Cld2 (i.e. P+Cld1 and P+Cld2) to make OS paths, and save the results in a list "My_Full_Path", e.g. "E:\GIS_DOCUMENT\BCAS_Map\BCAS_All.gdb\BD_Admin\BGD_4_new_district" is an element of "My_Full_Path".
How can I do this?
Thanks in advance
A:
If I understood your problem correctly then I guess you need this:
import os
import pprint
P= ["E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin" , "E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BCAS_BD_Infrastructure"]
Cld1 = ['BGD_4_new_district', 'BGD_3_old_district', 'BGD_2_division', 'BGD_1_all', 'BGD_5_Upazilla', 'BGD_4_old_district', 'BGD_6_Union_court', 'BGD_6_Union', 'BD_exposed_coastal_area','BD_drought', 'BGD_1_River', 'BGD_1_River_detail', 'BD_international_bnd', 'BGD_1_River_1', 'BGD_7_Mauza', 'test', 'BGD_5_UpazillaAnno', 'BGD_4_new_districtAnno', 'BGD_4_new_districtAnno2']
Cld2 = ['BD_Health_Infrastructures_1', 'BD_Railway_Establishments_1', 'BGD_roads_1']
lis1 = [os.path.join(x, y) for x in P for y in Cld1]
lis2 = [os.path.join(x, y) for x in P for y in Cld2]
My_Full_Path = lis1 + lis2
pprint.pprint(My_Full_Path)
output:
['E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_4_new_district',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_3_old_district',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_2_division',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_1_all',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_5_Upazilla',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_4_old_district',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_6_Union_court',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_6_Union',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BD_exposed_coastal_area',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BD_drought',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_1_River',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_1_River_detail',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BD_international_bnd',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_1_River_1',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_7_Mauza',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\test',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_5_UpazillaAnno',
'E:\\GIS_DOCUMENT\\BCAS_Map\\BCAS_All.gdb\\BD_Admin\\BGD_4_new_districtAn
...
...
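Note that this joins every parent with every child list. If instead Cld1 belongs only to the first parent and Cld2 only to the second, a sketch under that reading would pair them with zip:
pairs = zip(P, [Cld1, Cld2])
My_Full_Path = [os.path.join(parent, child) for parent, children in pairs for child in children]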
| {
"pile_set_name": "StackExchange"
} |
Q:
Japan work visa and tickets for 5 family members?
I've got a job offer from Japan, I'll be relocating with family, 2 adults, 3 kids.
As a visa requirement, I have to mention entry and departure dates on the visa form. What do I put in the case of a work visa (stay for more than 1 year)?
For the visa application, will a ticket booking (not a purchase) be OK, or do I need to buy tickets for all 5 family members? Surely I'll be buying them, but only after we get the visas.
Can I buy one-way tickets or do I need to buy return tickets? I want to save money here.
A:
Talk to your employer. Have they gotten a certificate of eligibility for you yet?
Here's what the Japanese Embassy in the UK states are required to apply for a work visa:
Once a Certificate of Eligibility has been obtained, the applicant should bring in:
1. Valid passport
2. One visa application form (sample), completed and signed
3. One passport-sized photograph approx. 45mm x 45mm (taken within the last 6 months)
4. Original Certificate of Eligibility
5. One photocopy of Certificate of Eligibility
Your local embassy may have different requirements.
| {
"pile_set_name": "StackExchange"
} |
Q:
Seeking a property about Lebesgue-Stieltjes outer measure
I am a graduate student and this is not something related to my work but I was just wondering and did not find an answer on the Internet. I asked this on the other math site two weeks ago and no one responded. Question:
Let $g$ and $h: R \rightarrow R$, where $R$ is the set of all real numbers, be two increasing functions and $\theta_g$, $\theta_h$ be the associated Lebesgue--Stieltjes outer measures on $R$. We can also associate to $g+h$ the Lebesgue--Stieltjes outer measure $\theta_{g+h}$.
Is it possible to show that $\theta_g + \theta_h = \theta_{g+h}$?
Thank you so much
A:
A more general question is as follows. Let $\mu$ and $\nu$ be two measures on a measurable space $(S,\Sigma)$. Does it then always follow that $(\mu+\nu)^*=\mu^*+\nu^*$, where ${}^*$ denotes the corresponding outer measure?
The answer to this question is yes. Indeed, for any $E\subseteq S$, let $\mathcal{A}_E$ denote the set of all sequences $(A_j)$ of pairwise disjoint members of $\Sigma$ such that $\bigcup_j A_j\supseteq E$. Then
\begin{equation}
\mu^*(E)+\nu^*(E)=\inf_{(A_j)\in\mathcal{A}_E}\sum_j\mu(A_j)
+\inf_{(A_j)\in\mathcal{A}_E}\sum_j\nu(A_j)
\end{equation}
\begin{equation}
\le\inf_{(A_j)\in\mathcal{A}_E}\Big(\sum_j\mu(A_j)+\sum_j\nu(A_j)\Big)
=\inf_{(A_j)\in\mathcal{A}_E}\sum_j(\mu+\nu)(A_j)
= (\mu+\nu)^*(E).
\end{equation}
To prove the reverse inequality, $(\mu+\nu)^*(E)\le\mu^*(E)+\nu^*(E)$, take any real numbers $a$ and $b$ (if they exist) such that $\mu^*(E)<a$ and $\nu^*(E)<b$. Then for some sequences $(A_i)$ and $(B_j)$ in $\mathcal{A}_E$ one has $\sum_i\mu(A_i)<a$ and $\sum_j\nu(B_j)<b$. Let $C_{i,j}:=A_i\cap B_j$. Somehow enumerating the pairs $(i,j)$, we may assume that the double-sequence $(C_{i,j})$ is in $\mathcal{A}_E$. So,
\begin{equation}
(\mu+\nu)^*(E)\le\sum_{i,j}(\mu+\nu)(C_{i,j})
=\sum_i\sum_j\mu(C_{i,j})+\sum_j\sum_i\nu(C_{i,j})
\end{equation}
\begin{equation}
\le\sum_i\mu(A_i)+\sum_j\nu(B_j)<a+b,
\end{equation}
for any real $a$ and $b$ such that $\mu^*(E)<a$ and $\nu^*(E)<b$.
Hence, $(\mu+\nu)^*(E)\le\mu^*(E)+\nu^*(E)$. QED
| {
"pile_set_name": "StackExchange"
} |
Q:
''to master the Japanese Language'' or ''to master Japanese language'' which way is the correct way?
I want to say that I want to master the Japanese language while studying in Japan. But sometimes it feels odd to write "the Japanese" instead of just "Japanese".
I want to master Japanese Language by going Japan and learn it there.
Is this correct?
A:
The Japanese language, the French language, the German language, etc. It always takes the definite article the.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to speed up learning without changing the learning rate in linear regression
Whenever I made a linear regression model, it just diverged, every time. I really couldn't find any solution for that. But when I changed the learning rate to 0.0000252, it worked! The remaining problem is that it learns so slowly that I have to wait more than 10 minutes for the model to train.
How can I speed up learning without changing the learning rate?
A:
The first question is: why use SGD (which I assume here)? There are more specialized learning procedures for linear regression, some of which do not need that kind of hyperparameter tuning at all. Maybe you are in a very large-scale setting, where SGD is a valid approach.
Assuming SGD-based learning is the way to go:
You should use some kind of learning-schedule
Add at least a learning-rate decay, which reduces the learning-rate for example after each epoch by a factor of something like 0.9 (yes, one more hyperparameter)
Try to use some kind of momentum, e.g. Nesterov-momentum which was developed for convex-optimization (your case is convex) and holds strong guarantees
This kind of momentum is even popular in the non-convex setting
Most DeepLearning libs should provide this out-of-the-box
You can try adaptive learning-rate based algorithms like:
Adam, AdaDelta, AdaGrad, ...
These try to remove the burden from selecting those LR-hyperparameters while still trying to convergence as quickly as possible
Of course they are heuristics (strictly spoken), but they seem to work well for most people (although an optimized SGD is most of the time the best)
Most DeepLearning libs should provide this out-of-the-box
Use specialized software for linear-models like liblinear or others
And one more thing, because I'm surprised that it's that easy to observe divergence on this simple problem: normalize your input!
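As a rough illustration of the decay and normalization points above, here is a minimal numpy sketch (all names and constants are illustrative, not taken from any library):
import numpy as np
def sgd_linreg(X, y, lr=0.01, decay=0.9, epochs=20):
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize inputs first
    w = np.zeros(X.shape[1])
    b = 0.0
    for epoch in range(epochs):
        for i in np.random.permutation(len(X)):
            err = X[i] @ w + b - y[i]
            w -= lr * err * X[i]  # gradient of the squared error
            b -= lr * err
        lr *= decay  # reduce the learning rate after each epoch
    return w, b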
| {
"pile_set_name": "StackExchange"
} |
Q:
C++ standard lib: Achieving constant time remove from linked list
I would like to use standard library data structures (because one can easily override the Allocator) to get a (probably doubly) linked list that allows constant time removal operations given a pointer to an element.
Is this functionality built into any standard library data structures? Imagine the following list:
myList = [ elementA, elementB, elementC ]
Removing elementB is constant time in the size of the list if one can say something to this effect:
elementB.previous.next = pointer_to(elementC)
... or must I build my own linked list to achieve this?
A:
The built-in std::list supports constant-time removal of elements.
Quoting from documentation at http://www.cplusplus.com/reference/list/list/:
Lists are sequence containers that allow constant time insert and erase operations anywhere within the sequence, and iteration in both directions.
The implementation is indeed a doubly-linked list:
List containers are implemented as doubly-linked lists; Doubly linked lists can store each of the elements they contain in different and unrelated storage locations. The ordering is kept by the association to each element of a link to the element preceding it and a link to the element following it.
To remove an element from a list, simply use the erase() member function:
std::list<int> mylist;
std::list<int>::iterator it = /* find an element */;
// remove the element in constant time
mylist.erase(it);
A:
For a doubly-linked list, there's std::list, which supports constant-time removal of an element given an iterator (but not a pointer) to that element.
C++11 added a singly-linked list, std::forward_list. That can't quite do what you describe: you can only remove an element in constant time if you have an iterator for the preceding element.
If you really want to manipulate containers using pointers to the elements, rather than iterators, then you'll have to abandon the non-intrusive STL model and use something like Boost's intrusive containers.
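For illustration, a minimal self-contained sketch of the constant-time pattern both answers describe: keep the iterator you get at insertion time and erase through it later, with no traversal:
#include <iostream>
#include <list>
int main() {
    std::list<int> values{1, 2, 3};
    // remember the iterator when the element is inserted
    auto it = values.insert(values.end(), 42);
    values.erase(it);  // constant time, no traversal
    for (int v : values) std::cout << v << ' ';  // prints: 1 2 3
}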
| {
"pile_set_name": "StackExchange"
} |
Q:
Fast manipulation of Dates in R
I have around 34000 vectors of dates for which I have to change the day and shift the month. I have tried this with a loop and with the mapply function, but it is extremely slow.
This is an example of what I have:
library(lubridate)
list_dates = replicate(34000,seq(as.Date("2019-03-14"),length.out = 208,by = "months"),simplify = F)
new_day = round(runif(34000,1,30))
new_day[sample(1:34000,10000)] = NA
new_dates = mapply(FUN = function(dates,day_change){
day(dates) = ifelse(is.na(rep(day_change,length(dates))),day(dates),rep(day_change,length(dates)))
dates = as.Date(ifelse(is.na(rep(day_change,length(dates))),dates,dates%m-%months(1)),origin = "1970-01-01")
return(dates)
},dates = list_dates,day_change = as.list(new_day),SIMPLIFY = F)
The variable new_dates should contain a list of the original dates, moved according to the variable new_day. The function inside works like this:
if new_day is different from NA it will change the day of the dates to the new one
if new_day is different from NA it will move the months of the dates one behind.
I'm open to any solution that will increase the speed regardless of the packages use (if they are in CRAN).
EDIT
So based on the comments I reduced the example to a list of 2 vectors of dates, each containing 2 dates, and created a manual vector of new days:
list_dates = replicate(2,seq(as.Date("2019-03-14"),length.out = 2,by = "months"),simplify = F)
new_day = c(9,NA)
This is the original input (variable list_dates):
[[1]]
[1] "2019-03-14" "2019-04-14"
[[2]]
[1] "2019-03-14" "2019-04-14"
and the expected output of the mapply function is:
[[1]]
[1] "2019-02-09" "2019-03-09"
[[2]]
[1] "2019-03-14" "2019-04-14"
As you can see, the first vector of dates was changed to day 9 and each date was lagged one month. The second vector of dates did not change because new_day is NA for that value.
A:
Here is a lubridate solution
library(lubridate)
mapply(
function(x, y) { if (!is.na(y)) {
day(x) <- y;
month(x) <- month(x) - 1
}
return(x) },
list_dates, new_day, SIMPLIFY = F)
#[[1]]
#[1] "2019-02-09" "2019-03-09"
#
#[[2]]
#[1] "2019-03-14" "2019-04-14"
Or using purrr
library(purrr)
library(lubridate)
map2(list_dates, new_day, function(x, y) {
if (!is.na(y)) {
day(x) <- y
month(x) <- month(x) - 1
}
x })
| {
"pile_set_name": "StackExchange"
} |
Q:
angular2 pipe for multiple arguments
I have an array of thread objects, each thread object with the properties
unit:number
task:number
subtask:number
I want to create a pipe to filter these threads. So far I have a working pipe like the one below. I'm not really satisfied with it yet and wanted to ask you guys if there is a more elegant solution.
HTML:
<div class="thread-item" *ngFor="#thread of threadlist | threadPipe:unitPipe:taskPipe:subtaskPipe"></div>
Pipe.ts
export class ThreadPipe implements PipeTransform{
threadlistCopy:Thread[]=[];
transform(array:Thread[], [unit,task,subtask]):any{
//See all Threads
if(unit == 0 && task == 0 && subtask == 0){
return array
}
//See selected Units only
if(unit != 0 && task == 0 && subtask == 0){
this.threadlistCopy=[];
for (var i = 0; i<array.length;i++){
if(array[i].unit == unit){
this.threadlistCopy.push(array[i])
}
}
return this.threadlistCopy
}
//See selected Units and Tasks
if (unit != 0 && task != 0 && subtask == 0){
this.threadlistCopy=[];
for (var i = 0; i<array.length;i++){
if(array[i].unit == unit && array[i].task == task){
this.threadlistCopy.push(array[i])
}
}
return this.threadlistCopy
}
// See selected units, tasks, subtask
if (unit != 0 && task != 0 && subtask != 0){
this.threadlistCopy=[];
for (var i = 0; i<array.length;i++){
if(array[i].unit == unit && array[i].task == task && array[i].subtask == subtask){
this.threadlistCopy.push(array[i])
}
}
return this.threadlistCopy
}
}
}
A:
You are implementing your pipe the right way, but you are basically re-inventing the Array.prototype.filter mechanism in your code. A simpler way will be:
export class ThreadPipe implements PipeTransform{
transform(array:Thread[], [unit,task,subtask]):any{
//See all Threads
if(unit == 0 && task == 0 && subtask == 0){
return array
}
//See selected Units only
if(unit != 0 && task == 0 && subtask == 0){
return array.filter(thread => {
return thread.unit === unit;
});
}
//See selected Units and Tasks
if (unit != 0 && task != 0 && subtask == 0){
return array.filter(thread => {
return thread.unit === unit && thread.task === task;
});
}
// See selected units, tasks, subtask
if (unit != 0 && task != 0 && subtask != 0){
return array.filter(thread => {
return thread.unit === unit && thread.task === task && thread.subtask === subtask;
});
}
}
}
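As a side note, the four branches can be collapsed into a single filter by treating 0 as "not selected", as your code already does. A sketch only, untested against the rest of your app:
transform(array: Thread[], [unit, task, subtask]): any {
    return array.filter(thread =>
        (unit == 0 || thread.unit === unit) &&
        (task == 0 || thread.task === task) &&
        (subtask == 0 || thread.subtask === subtask));
}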
| {
"pile_set_name": "StackExchange"
} |
Q:
In Neo4j 2.0, can numbers be labels?
I read in the documentation that labels can be strings or numbers. However, using only numbers gives an error:
start u=node(5) set u:1234 return labels(u);
The error is:
Invalid input '1': expected whitespace or an identifier (line 1, column 23)
A:
Any non-empty unicode string can be used as a label name. In Cypher, you may need to use the backtick (`) syntax to avoid clashes with Cypher identifier rules.
Here is the source of that: source
I think you are running into a Cypher conflict. If you do this it should work:
start u=node(5)
set u:`1234`
return labels(u);
| {
"pile_set_name": "StackExchange"
} |
Q:
How might one prove two functions asymptotic?
Let's say that functions $f$ and $g$ look to be converging on one another. What tests might you use to prove that they are or are not asymptotic?
If you want an example to work on, let's say $f(x)=\ln x$ and $g(x)=x-x^{1-1/x}$
I know that these functions are asymptotic, but have no way of proving it. Thanks for all help in advance.
A:
As $x \to \infty$,
$\begin{align*}
x^{1-1/x}
&=e^{\ln x(1-1/x)}\\
&=e^{\ln x}e^{-\ln x/x}\\
&=x(1-\ln x/x + (\ln x)^2/(2 x^2)+o(1/x^2))\\
&=x-\ln x+ (\ln x)^2/(2 x)+o(1/x)\\
\end{align*}
$
so
$\begin{align*}
g(x)
&=x-x^{1-1/x}\\
&=x-(x-\ln x+ (\ln x)^2/(2 x)+o(1/x))\\
&=\ln x- (\ln x)^2/(2 x)+o(1/x)\\
&\approx \ln x\\
\end{align*}
$
I believe that this is what you want.
| {
"pile_set_name": "StackExchange"
} |
Q:
Cutting and pasting files
OS X doesn't allow me to cut files. Are there any applications to enable this?
A:
It is possible to cut-paste files/folders in OSX 10.7 Lion's Finder (so, since 2011), but the OSX way is slightly different from the Windows way.
⌘-C (copy first)
⌘-⌥-V (now move to it's destination)
So, the steps are very similar to copy-paste, but holding ⌥ (option key) moves the file/folder instead of copying it.
You can also have a look in the edit menu after copying a file - press ⌥ while looking to see the difference: "paste" changes to "move item here".
A:
If you are willing to pay a little, you can use MoveAddict:
And also I found "this solution" but I didn't test it myself.
| {
"pile_set_name": "StackExchange"
} |
Q:
iOS / Core Data - How can I change sectionNameKeyPath of a NSFetchedResultsController?
I declared my fetchedResultsController like this
NSFetchedResultsController *fetchController = [[NSFetchedResultsController alloc]
initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext
sectionNameKeyPath:@"date" cacheName:nil];
But when I click on a UISegmentedControl, I want to change the sectionNameKeyPath to be @"title".
Do you know a way to do so ?
Thanks
A:
You would need to redefine the FRC and re-run the fetch request. Either set a property on the class to hold the value of the current sectionNameKeyPath (set the default in the viewDidLoad event), or pass it in to the method that instantiates and executes the FRC.
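A rough Objective-C sketch of that approach (the property and method names here are assumptions, not from your code). Remember that the fetch request's first sort descriptor must match the new section key path, or the sections will come out scrambled:
- (void)segmentChanged:(UISegmentedControl *)sender {
    NSString *keyPath = (sender.selectedSegmentIndex == 0) ? @"date" : @"title";
    // rebuild the FRC with the new key path and re-run the fetch
    self.fetchedResultsController = [[NSFetchedResultsController alloc]
        initWithFetchRequest:self.fetchRequest
        managedObjectContext:self.managedObjectContext
        sectionNameKeyPath:keyPath
        cacheName:nil];
    NSError *error = nil;
    [self.fetchedResultsController performFetch:&error];
    [self.tableView reloadData];
}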
| {
"pile_set_name": "StackExchange"
} |
Q:
How to map tinyint to c# byte using entityframework code first migration
I am trying to map a tinyint column to a byte property in C#. How can this be achieved efficiently? I want to store the values null, 0 and 1 in the database column. Please find my code below. Can anyone tell me whether the following is the right approach?
public enum TriState : byte
{
Null = 0,
True = 1,
False =2
}
[NotMapped]
public TriState AuthorisationStatus { get; set; }
[Column("AuthorisationStatus")]
public byte AuthorisationStatusByte {
get
{
return Convert.ToByte(AuthorisationStatus.ToString());
}
private set
{
AuthorisationStatus = EnumExtensions.ParseEnum<TriState>(value);
}
}
public static T ParseEnum<T>(byte value)
{
return (T)Enum.Parse(typeof(T), value.ToString(), true);
}
Thanks
A:
There's no need to go via a string at all. Just use the explicit conversions:
[Column("AuthorisationStatus")]
public byte AuthorisationStatusByte
{
get { return (byte) AuthorisationStatus; }
set { AuthorisationStatus = (TriState) value; }
}
(I'm assuming the column attribute is correct - I don't know about that side of things.)
| {
"pile_set_name": "StackExchange"
} |
Q:
Icons in react-table
I've been struggling to find out how I could display an icon in a Cell in a ReactTable (from the library react-table). All I could find is that it accepts HTML symbols. There are a lot of symbols, but what I'm looking for is to show flags...
Cell: () => (
<span>♥</span>
)
I already tried to use <i class='fa fa-cloud' /> for a test but I couldn't make it work.
Any ideas ?
A:
I found a library https://www.npmjs.com/package/react-flag-kit that gives FlagIcons to be used this way :
Cell: () => (
<FlagIcon code="FR" size={20} />
)
React Icons <Icon /> can also be used in the Cell, but it does not contain any flags.
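On the <i class='fa fa-cloud' /> attempt from the question: in JSX the attribute is className, not class, which is likely why the icon never rendered. Assuming the Font Awesome stylesheet is loaded, something like this should work:
Cell: () => (
  <i className="fa fa-cloud" />
)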
| {
"pile_set_name": "StackExchange"
} |
Q:
Real Time Data - Angular2 App
I want to make an app in Angular 2 which has a parent client that can send real-time data to its child clients. I will just type and send data in the form of numbers and strings, which should be displayed to the child clients in real time. I just want to know: is there any special provision in Angular 2 for this?
I'm a beginner, so please give me some rough insight on this.
A:
parent.component.html
<sonComponent [sonData]="parentData"></sonComponent>
son.component.ts
import { Component, Input } from '@angular/core';
@Component({
  selector: 'sonComponent',
  template: '<div> {{sonData}} </div>'
})
export class SonComponent {
  @Input() sonData;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Why does macOS have many strange ports open and what do they do? 50323, 50334, 50641, 57621, 57650, 64448
I did an nmap scan for my private IP address
nmap -p- 192.168.1.123
and got the following open ports back:
50323/tcp open unknown
50334/tcp open unknown
50641/tcp open unknown
57621/tcp open unknown
57650/tcp open unknown
64448/tcp open unknown
A more detailed scan about those specific ports also didn't yield much.
nmap -p 64448,57650,57621,50641,50334,50323 -A 192.168.1.123
PORT STATE SERVICE VERSION
50323/tcp open unknown
50334/tcp open unknown
| fingerprint-strings:
| NULL:
|_ {"type":"Tier1","version":"1.0"}
50641/tcp open tcpwrapped
57621/tcp open unknown
57650/tcp open tcpwrapped
64448/tcp open tcpwrapped
I'm fairly confident that I'm not responsible for opening them. They're most likely there because Apple uses them for services.
I wanted to know what they do, so I'd know if I should disable them or not for security purposes. However, googling the port names gave nothing. So, I think we should give a short description about each port and the corresponding service, so that others will be easily able to find this information in the future.
There was no option to answer my own question, so note that I'm working on providing the list right now.
A:
I have compiled a list of all the services and what they do.
Checking it yourself
If you have different ports open and want to check for yourself, you can use
sudo lsof -i :50323 to see the service running on the port 50323.
ps aux | grep rapportd to find out some more information about the service rapportd, like the executable path.
codesign -vvvv -R="anchor apple" /usr/libexec/rapportd to see if the service is correctly signed by Apple. Doesn't apply to programs that aren't made by Apple.
man rapportd to read the documentation about the service
Results
Port 50323:
Service rapportd at /usr/libexec/rapportd, signed by Apple. It is a daemon that enables Phone Call Handoff and other communication features between Apple devices.
Port 50334:
This is Spotify, which is a music streaming service. Not installed by default. I think it might be related to Spotify Connect, a service which allows connecting your phone to your computer, and streaming music from your computer to your phone (or any other device which supports it)
Port 50641:
This is IntelliJ Idea, an integrated development environment for programmers. Not installed by default. Also it's tcpwrapped, meaning you can't actually do anything with it, since it drops the connection whenever you actually try to send it anything (correct me if I'm wrong). I think IDEs usually need a port open for the debugging functionality.
Port 57621:
Also Spotify.
Port 57650:
This is gradle's daemon. It's a tool programmers use. Not installed by default. Also it's tcpwrapped. Not sure why it needs to be listening on a port. If someone knows, then I'd be curious to know.
Port 64448:
Also gradle. tcpwrapped.
| {
"pile_set_name": "StackExchange"
} |
Q:
jQuery DataTables multiple select column filters
What I would like to get is Excel-like multiple-criteria filtering for individual DataTables columns. I have come across a few topics here on Stack Overflow related to the subject, but none of them seem to implement what I'm looking for.
So far, I've only got a sample table, and I'd appreciate any (even the most high-level) guidance as to where to go next.
var tableData = [
{name: 'Clark Kent', city: 'Metropolis'},
{name: 'Bruce Wayne', city: 'Gotham'},
{name: 'Steve Rogers', city: 'New York'},
{name: 'Peter Parker', city: 'New York'},
{name: 'Thor Odinson', city: 'Asgard'}
];
var dataTable = $('#mytable').DataTable({
sDom: 't',
data: tableData,
columns: [
{data: 'name', title: 'Name'},
{data: 'city', title: 'City'}
]
});
<!doctype html>
<html>
<head>
<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
<script src="https://cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.19/css/jquery.dataTables.min.css">
</head>
<body>
<table id="mytable"></table>
</body>
</html>
A:
You may find the following DataTables plug-in useful. I have somewhat extended your example for demonstration purposes (it runs somewhat slowly, as the non-minified files are served through jsdelivr):
$(document).ready(function () {
//Source data definition
var tableData = [{
name: 'Clark Kent',
city: 'Metropolis',
race: 'cryptonian'
}, {
name: 'Bruce Wayne',
city: 'Gotham',
race: 'human'
}, {
name: 'Steve Rogers',
city: 'New York',
race: 'superhuman'
}, {
name: 'Peter Parker',
city: 'New York',
race: 'superhuman'
}, {
name: 'Thor Odinson',
city: 'Asgard',
race: 'god'
}, {
name: 'Jonathan Osterman',
city: 'New York',
race: 'superhuman'
}, {
name: 'Walter Kovacs',
city: 'New Jersey',
race: 'human'
}, {
name: 'Arthur Curry',
city: 'Atlantis',
race: 'superhuman'
}, {
name: 'Tony Stark',
city: 'New York',
race: 'human'
}, {
name: 'Scott Lang',
city: 'Coral Gables',
race: 'human'
}, {
name: 'Bruce Banner',
city: 'New York',
race: 'superhuman'
}
];
//DataTable definition
window.dataTable = $('#mytable').DataTable({
sDom: 'tF',
data: tableData,
columns: [{
data: 'name',
title: 'Name'
}, {
data: 'city',
title: 'City'
}, {
data: 'race',
title: 'Race'
}]
});
});
<!doctype html>
<html>
<head>
<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>
<script src="https://cdn.datatables.net/1.10.19/js/jquery.dataTables.min.js"></script>
<script type="application/javascript" src="https://cdn.mfilter.tk/js/mfilter.min.js"></script>
<link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.19/css/jquery.dataTables.min.css">
<link rel="stylesheet" type="text/css" href="https://cdn.mfilter.tk/css/mfilter.min.css">
</head>
<body>
<table id="mytable"></table>
</body>
</html>
| {
"pile_set_name": "StackExchange"
} |
Q:
Understanding pushing to a hash and ||=[] construct. (generated in boilerplate.t)
I'm re-acquainting myself with Perl, and have just used module-starter to initialise a new project. I'm now trying to understand the generated code. All is fine apart from the line indicated below:
sub not_in_file_ok {
my ($filename, %regex) = @_;
open( my $fh, '<', $filename )
or die "couldn't open $filename for reading: $!";
my %violated;
while (my $line = <$fh>) {
while (my ($desc, $regex) = each %regex) {
if ($line =~ $regex) {
##I'm having problems here
push @{$violated{$desc}||=[]}, $.;
}
}
}
...
}
I have two problems:
The ||=[]. Is this | followed by |=, or is this an or || followed by =[]? Can someone talk me through what is happening here? (I'm guessing "if the hash entry is empty then create an empty anonymous array to initialise it", but I'm struggling to see how that is formed from the code.)
push @{$violated{$desc}}, $.; I understand this to mean "assign the line number to the key $desc of the hash %violated". But from the code I read, "look up the value of the key $desc (the $violated{$desc} part), then use this value as a reference to an array (the @{$value} part), then push the line number onto that array". I don't see how to reconcile these two views.
I think there is a lot for me to learn in this line of code - can someone help me by walking me through it?
A:
||=: this is an assignment operator. Example
$a ||= $b;
# corresponds to
$a = $a || $b;
see man perlop. In your example
$a ||= [];
# corresponds to
$a = $a || [];
that is: if the left operand is true (already defined and non-empty), do nothing; otherwise assign an empty array reference
%violated contains an array reference for each value. You can see it like that:
my $array_ref = $violated{$desc};
push @{$array_ref}, $.;
Written more verbosely:
if (! $violated{$desc} ) {
$violated{$desc} = [];
}
my $array_ref = $violated{$desc};
push @{ $array_ref }, $.;
EDIT
Arrays and array references
an array constructed with () and contains a dynamic ordered list of elements (in Perl arrays can grow dynamically)
an array reference is a reference to an array (more or less a pointer without pointer arithmetic). You can create and array reference with []
Example
my @a = ( 1, 2, 3);
# $a[0] will contain 1
my $array_ref = [ 10, 11, 12 ];
# array_ref is a _pointer_ to an array containing 10, 11 and 12
To access an array reference you need to dereference it:
@{ $array_ref };
my @array = @{ $array_ref }; # is valid
You can access the elements of $array_ref like an array:
${ $array_ref }[0]
Now back to your question in the comment: %violated is an hash with the following key-value pairs: a string ($desc) and an array reference
A:
Let's try to deconstruct this step-by-step:
The line is used to populate a hash of arrayrefs, where the arrayrefs contain the line numbers where the $desc regex matches. The resultant %violated hash will look something like:
( desc1 => [ 1, 5, 7, 10 ],
desc2 => [ 2, 3, 4, 6, 8 ] );
push takes an array as its first argument. The variable $violated{$desc} is an arrayref, not an array, so the @{...} is used to dereference it (dereferencing is the opposite of referencing).
Now for the tricky part. The stuff inside the braces is just a fancy way of saying that if $violated{$desc} is not defined inside %violated (tested with ||), it is assigned (=) to an empty arrayref ([]). Think of it as two assignments in one line:
$violated{$desc} = $violated{$desc} || [];
push @{$violated{$desc}}, $.;
Note that this complication isn't usually necessary, thanks to a feature called autovivification, which automatically creates previously undefined keys inside the hash with the intended context (an arrayref in this case). The only case I can think of where this would be needed is if $violated{$desc} == 0 before.
| {
"pile_set_name": "StackExchange"
} |
Q:
Adding and removing classes
I have a simple scheme that goes to the .ativo class, removes it and adds the .block class, then finds the id #banner1, adds the .ativo class and removes the .block class.
But it isn't working, look:
$(".botao1").click(function() {
$('.ativo').removeClass('ativo').addClass('block').find($("#banner1")).addClass('ativo').removeClass('block');
});
Does anyone know why?
JSFiddle
A:
I made a simpler example of what you want, it's on jsfiddle. Basically, when the button is clicked, you hide the descriptions and apply visibility to the corresponding elements (description and image). See if the example below comes close to what you want to do.
jQuery
$(document).ready(function()
{
$("button").click(function() {
// id of the clicked element
id = $(this).attr( 'id' );
// show / hide descriptions
$('.description').addClass('none');
$('#description'+id).removeClass('none').addClass('block');
// show / hide images
$('.image').addClass('none');
$('#img'+id).removeClass('none').addClass('block');
});
});
HTML
<div id="banners">
<img id="img1" src="..." class="image block">
<img id="img2" src="..." class="image none">
<img id="img3" src="..." class="image none">
<div id="description1" class="description">[1] Bla bla bla</div>
<div id="description2" class="description none">[2] Bla bla bla</div>
<div id="description3" class="description none">[3] Bla bla bla</div>
<button id="1">Banner 1</button>
<button id="2">Banner 2</button>
<button id="3">Banner 3</button>
</div>
CSS
.block{display:block}
.none{display:none}
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get current state inside useCallback when using useReducer?
Using react hooks with TypeScript and here is a minimal representation of what I am trying to do: Have a list of buttons on screen and when the user clicks on a button, I want to change the text of the button to "Button clicked" and then only re-render the button which was clicked.
I am using useCallback for wrapping the button click event to avoid the click handler getting re-created on every render.
This code works the way I want: If I use useState and maintain my state in an array, then I can use the Functional update in useState and get the exact behaviour I want:
import * as React from 'react';
import { IHelloWorldProps } from './IHelloWorldProps';
import { useEffect, useCallback, useState } from 'react';
import { PrimaryButton } from 'office-ui-fabric-react';
interface IMyButtonProps {
title: string;
id: string;
onClick: (clickedDeviceId: string) => (event: any) => void;
}
const MyButton: React.FunctionComponent<IMyButtonProps> = React.memo((props: IMyButtonProps) => {
console.log(`Button rendered for ${props.title}`);
return <PrimaryButton text={props.title} onClick={props.onClick(props.id)} />;
});
interface IDevice {
Name: string;
Id: string;
}
const HelloWorld: React.FunctionComponent<IHelloWorldProps> = (props: IHelloWorldProps) => {
//If I use an array for state instead of object and then use useState with Functional update, I get the result I want.
const initialState: IDevice[] = [];
const [deviceState, setDeviceState] = useState<IDevice[]>(initialState);
useEffect(() => {
//Simulate network call to load data.
setTimeout(() => {
setDeviceState([{ Name: "Apple", Id: "appl01" }, { Name: "Android", Id: "andr02" }, { Name: "Windows Phone", Id: "wp03" }]);
}, 500);
}, []);
const _deviceClicked = useCallback((clickedDeviceId: string) => ((event: any): void => {
setDeviceState(prevState => prevState.map((device: IDevice) => {
if (device.Id === clickedDeviceId) {
device.Name = `${device.Name} clicked`;
}
return device;
}));
}), []);
return (
<React.Fragment>
{deviceState.map((device: IDevice) => {
return <MyButton key={device.Id} title={device.Name} onClick={_deviceClicked} id={device.Id} />;
})}
</React.Fragment>
);
};
export default HelloWorld;
Here is the desired result:
But here is my problem: In my production app, the state is maintained in an object and we are using the useReducer hook to simulate a class component style setState where we only need to pass in the changed properties. So we don't have to keep replacing the entire state for every action.
When trying to do the same thing as before with useReducer, the state is always stale as the cached version of useCallback is from the first load when the device list was empty.
import * as React from 'react';
import { IHelloWorldProps } from './IHelloWorldProps';
import { useEffect, useCallback, useReducer, useState } from 'react';
import { PrimaryButton } from 'office-ui-fabric-react';
interface IMyButtonProps {
title: string;
id: string;
onClick: (clickedDeviceId: string) => (event: any) => void;
}
const MyButton: React.FunctionComponent<IMyButtonProps> = React.memo((props: IMyButtonProps) => {
console.log(`Button rendered for ${props.title}`);
return <PrimaryButton text={props.title} onClick={props.onClick(props.id)} />;
});
interface IDevice {
Name: string;
Id: string;
}
interface IDeviceState {
devices: IDevice[];
}
const HelloWorld: React.FunctionComponent<IHelloWorldProps> = (props: IHelloWorldProps) => {
const initialState: IDeviceState = { devices: [] };
//Using useReducer to mimic class component's this.setState functionality where only the updated state needs to be sent to the reducer instead of the entire state.
const [deviceState, setDeviceState] = useReducer((previousState: IDeviceState, updatedProperties: Partial<IDeviceState>) => ({ ...previousState, ...updatedProperties }), initialState);
useEffect(() => {
//Simulate network call to load data.
setTimeout(() => {
setDeviceState({ devices: [{ Name: "Apple", Id: "appl01" }, { Name: "Android", Id: "andr02" }, { Name: "Windows Phone", Id: "wp03" }] });
}, 500);
}, []);
//Have to wrap in useCallback otherwise the "MyButton" component will get a new version of _deviceClicked for each time.
//If the useCallback wrapper is removed from here, I see the behavior I want but then the entire device list is re-rendered everytime I click on a device.
const _deviceClicked = useCallback((clickedDeviceId: string) => ((event: any): void => {
//Since useCallback contains the cached version of the function before the useEffect runs, deviceState.devices is always an empty array [] here.
const updatedDeviceList = deviceState.devices.map((device: IDevice) => {
if (device.Id === clickedDeviceId) {
device.Name = `${device.Name} clicked`;
}
return device;
});
setDeviceState({ devices: updatedDeviceList });
//Cannot add the deviceState.devices dependency here because we are updating deviceState.devices inside the function. This would mean useCallback would be useless.
}), []);
return (
<React.Fragment>
{deviceState.devices.map((device: IDevice) => {
return <MyButton key={device.Id} title={device.Name} onClick={_deviceClicked} id={device.Id} />;
})}
</React.Fragment>
);
};
export default HelloWorld;
This is how it looks:
So my question boils down to this: when using useState inside useCallback, we can use the functional update pattern and capture the current state (instead of the state from when useCallback was cached).
This is possible without specifying dependencies to useCallback.
How can we do the same thing when using useReducer? Is there a way to get the current state inside useCallback when using useReducer and without specifying dependencies to useCallback?
A:
You can dispatch a function that will be called by the reducer and gets the current state passed to it. Something like this:
//Using useReducer to mimic class component's this.setState functionality where only the updated state needs to be sent to the reducer instead of the entire state.
const [deviceState, dispatch] = useReducer(
(previousState, action) => action(previousState),
initialState
);
//Have to wrap in useCallback otherwise the "MyButton" component will get a new version of _deviceClicked for each time.
//If the useCallback wrapper is removed from here, I see the behavior I want but then the entire device list is re-rendered everytime I click on a device.
const _deviceClicked = useCallback(
(clickedDeviceId) => (event) => {
//Since useCallback contains the cached version of the function before the useEffect runs, deviceState.devices is always an empty array [] here.
dispatch((deviceState) => ({
...deviceState,
devices: deviceState.devices.map((device) => {
if (device.Id === clickedDeviceId) {
device.Name = `${device.Name} clicked`;
}
return device;
}),
}));
//no dependencies here
},
[]
);
Below is a working example:
const { useCallback, useReducer } = React;
const App = () => {
const [deviceState, dispatch] = useReducer(
(previousState, action) => action(previousState),
{ count: 0, other: 88 }
);
const click = useCallback(
(increase) => () => {
//Since useCallback contains the cached version of the function before the useEffect runs, deviceState.devices is always an empty array [] here.
dispatch((deviceState) => ({
...deviceState,
count: deviceState.count + increase,
}));
//no dependencies here
},
[]
);
return (
<div>
<button onClick={click(1)}>+1</button>
<button onClick={click(2)}>+2</button>
<button onClick={click(3)}>+3</button>
<pre>{JSON.stringify(deviceState)}</pre>
</div>
);
};
ReactDOM.render(<App />, document.getElementById('root'));
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.4/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.4/umd/react-dom.production.min.js"></script>
<div id="root"></div>
This is not how you would normally use useReducer, and I don't see a reason why you would not just use useState instead in this instance.
| {
"pile_set_name": "StackExchange"
} |
Q:
AWS Lambda python: .so module: ModuleNotFoundError: No module named 'regex._regex' when in subshell
I'm trying to run some of my simple CI jobs (e.g: linters) into lambda functions for my python repositories.
I'm using gitlab-ci for this, and the basic architecture (not directly related to the problem but it may help the choices) is the following:
a CI jobs is defined as a set of shell commands in my case just running black --check . (a python linter):
1. the shell commands are sent to my lambda
2. the lambda git clone the repo
3. the lambda execute in a subprocess the commands
4. the result is returned
so my lambda looks like this:
import os
import stat
import sys
import json
import subprocess
import stat
import shutil
from git import Repo
def main(event, lambda_context):
# I've commented out, so that if you like trying to reproduce it
# at home, you don't need the additional dependencies :)
#Repo.clone_from("https://example.com/the/repo.git")
# the command sent by the CI having a shebang in it
# the quick and dirty solution we found is to write it into
# a shell script and then executing said script
script = open("/tmp/script.sh", "w")
script.write(event["command"])
script.close()
st = os.stat("/tmp/script.sh")
os.chmod("/tmp/script.sh", st.st_mode | stat.S_IEXEC)
# the copy is just to be extra-safe
copy_env_variables = os.environ.copy()
# for all the binary added by the requirements.txt
# we edit PYTHONPATH for black in the subshell to find the modules
lambda_root_dir = os.environ["LAMBDA_TASK_ROOT"]
copy_env_variables["PATH"] = copy_env_variables["PATH"] + f":{lambda_root_dir}/bin"
copy_env_variables["PYTHONPATH"] = (
copy_env_variables.get("PYTHONPATH", "") + ":" + lambda_root_dir
)
proc = subprocess.Popen(
"/tmp/script.sh",
shell=False,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
env=copy_env_variables,
)
(stdout, _) = proc.communicate()
return {"return_code": proc.returncode, "output": stdout.decode("utf-8")}
and the lambda is packaged with this script: (python3 is python3.7 and I'm on linux x86_64)
python3 -m pip install -r requirements.txt -t ./
zip -r archive.zip . -x \*.pyc *.git*
the requirements.txt is the following:
gitpython
black
and when I execute the CI job, I got in the output of the subprocess:
$ black --check .
Traceback (most recent call last):
File "/var/task/bin/black", line 6, in <module>
from black import patched_main
File "/var/task/black.py", line 15, in <module>
import regex as re
File "/var/task/regex/__init__.py", line 1, in <module>
from .regex import *
File "/var/task/regex/regex.py", line 402, in <module>
import regex._regex_core as _regex_core
File "/var/task/regex/_regex_core.py", line 21, in <module>
import regex._regex as _regex
ModuleNotFoundError: No module named 'regex._regex'
The special thing about regex._regex is that it's a compiled module, named regex/_regex.cpython-37m-x86_64-linux-gnu.so; however, if I import it directly in my lambda code, it works without problem.
Is there something I'm missing about the way python deal with modules coming from .so libraries ?
A:
The problem comes from the binary having /usr/bin/python3 in its shebang, which points to python3.6, so python3.6 can't load .so files compiled for python3.7.
Strangely, this happens even though the Lambda was deployed with the python3.7 runtime. In Lambda, python3.7 is located at /var/lang/bin/python3.7, so for the moment I have modified my CI to symlink python3.7 to /var/lang/bin/python3.7, so that the shebang generated by pip is the correct one for Lambda.
| {
"pile_set_name": "StackExchange"
} |
Q:
Async methods don't require additional threads?
In MSDN, there is a paragraph like this:
The async and await keywords don't cause additional threads to be
created. Async methods don't require multithreading because an async
method doesn't run on its own thread. The method runs on the current
synchronization context and uses time on the thread only when the
method is active. You can use Task.Run to move CPU-bound work to a
background thread, but a background thread doesn't help with a process
that's just waiting for results to become available.
But it looks like I need a little more help with the bold text, since I am not sure what it exactly means. So how does it become async without using threads?
Source: http://msdn.microsoft.com/en-us/library/hh191443.aspx
A:
There are many asynchronous operations which don't require the use of multiple threads. Things like Asynchronous IO work by having interrupts which signal when data is available. This allows you to have an asynchronous call which isn't using extra threads - when the signal occurs, the operation completes.
Task.Run can be used to make your own CPU-based async methods, which will run on its own separate thread. The paragraph was intended to show that this isn't the only option, however.
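For instance, a minimal sketch of a thread-free wait: the current thread is released at the await and only resumes when the IO completes, with no Task.Run involved (the file path handling is illustrative):
using System.IO;
using System.Threading.Tasks;
public static async Task<string> ReadFileAsync(string path)
{
    using (var reader = new StreamReader(path))
    {
        // the thread is freed here until the OS signals the read is done
        return await reader.ReadToEndAsync();
    }
}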
A:
async/await is not just about using more threads. It's about using the threads you have more effectively. When operations block, such as waiting on a download or file read, the async/await pattern allows you to use that existing thread for something else. The compiler handles all the magic plumbing underneath, making it much easier to develop with.
See http://msdn.microsoft.com/en-us/magazine/hh456401.aspx for the problem description and the whitepaper at http://www.microsoft.com/en-us/download/details.aspx?id=14058.
| {
"pile_set_name": "StackExchange"
} |
Q:
extracting unique elements from nested list in dataframe
I have a data.frame with a variable which contains names of numerous participants. The names of the participants are all contained as one (=1) long string with names separated by a comma. Some of the names are repetitive. I try to get only each name once.
Below the data.
I converted the long string of names into a list:
b$s <- strsplit(b$participants, ",")
I then removed spaces on both sides of names to standardize them.
library(stringr)
b.l <- unlist(b$s)
b.l <- str_trim(b.l, side="both")
From this list I took the unique values
b.l <- unique(unlist(b.l))
The result are all unique names:
"Takfir wa'l Hijra" "AIS" "GIA" "AQIM" "MUJAO" "FLEC-R" "FLEC-FAC"
However, this list contains ALL unique names. I would like to perform these steps only for each ID (session number), which can also be repetitive.
I tried to perform the operation above with ddply but to no avail. Any recommendation? Unfortunately, I am not very familiar with the handling of lists.
Eventually, the dataframe should look like this:
id unique.participants
1-191 Takfir wa'l Hijra, AIS, GIA, AQIM, MUJAO
1-191 Takfir wa'l Hijra, AIS, GIA, AQIM, MUJAO
1-192 FLEC-R, FLEC-FAC
Many thanks.
data.frame:
b <- structure(list(id = structure(c(1L, 1L, 2L), .Label = c("1-191",
"1-192", "1-131"), class = "factor"), participants = c("Takfir wa'l Hijra,AIS,AIS, GIA,AIS, GIA,AIS, GIA,AIS, GIA,AIS, GIA,GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM, MUJAO,AQIM",
"Takfir wa'l Hijra,AIS,AIS, GIA,AIS, GIA,AIS, GIA,AIS, GIA,AIS, GIA,GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM, GIA,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM,AQIM, MUJAO,AQIM",
"FLEC-R,FLEC-FAC, FLEC-R,FLEC-FAC,FLEC-FAC, FLEC-R,FLEC-FAC,FLEC-FAC, FLEC-R,FLEC-FAC,FLEC-FAC,FLEC-FAC"
), s = list(c("Takfir wa'l Hijra", "AIS", "AIS", " GIA", "AIS",
" GIA", "AIS", " GIA", "AIS", " GIA", "AIS", " GIA", "GIA", "AQIM",
" GIA", "AQIM", " GIA", "AQIM", " GIA", "AQIM", " GIA", "AQIM",
" GIA", "AQIM", "AQIM", "AQIM", "AQIM", "AQIM", "AQIM", "AQIM",
"AQIM", "AQIM", " MUJAO", "AQIM"), c("Takfir wa'l Hijra", "AIS",
"AIS", " GIA", "AIS", " GIA", "AIS", " GIA", "AIS", " GIA", "AIS",
" GIA", "GIA", "AQIM", " GIA", "AQIM", " GIA", "AQIM", " GIA",
"AQIM", " GIA", "AQIM", " GIA", "AQIM", "AQIM", "AQIM", "AQIM",
"AQIM", "AQIM", "AQIM", "AQIM", "AQIM", " MUJAO", "AQIM"), c("FLEC-R",
"FLEC-FAC", " FLEC-R", "FLEC-FAC", "FLEC-FAC", " FLEC-R", "FLEC-FAC",
"FLEC-FAC", " FLEC-R", "FLEC-FAC", "FLEC-FAC", "FLEC-FAC"))), .Names = c("id",
"participants", "s"), row.names = c(1L, 2L, 24L), class = "data.frame")
A:
Using ddply you can do this
library(plyr)
ddply(b,~id,summarise,
nn= paste(unique(unlist(strsplit(participants,','))),collapse=','))
id nn
1 1-191 Takfir wa'l Hijra,AIS, GIA,GIA,AQIM, MUJAO
2 1-192 FLEC-R,FLEC-FAC, FLEC-R
A:
within would be good for this. It allows for reassignment of the variables within the expression. Also, you could adjust your regular expression in strsplit so that you can remove those spaces and the commas in one go.
> within(b[-3],{
unique.participants <- sapply(strsplit(participants, "(,)|(, )"), unique)
rm(participants)
})
# id unique.participants
# 1 1-191 Takfir wa'l Hijra, AIS, GIA, AQIM, MUJAO
# 2 1-191 Takfir wa'l Hijra, AIS, GIA, AQIM, MUJAO
# 24 1-192 FLEC-R, FLEC-FAC
Since I'm seeing
I would like to perform these steps only for each ID (session number), which can be also repetitive.
in your question, I'm sticking with the duplicate row.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can identityserver3 tokens be protected
Could you please help me to clarify token related questions?
I have implemented HTTPS all the way through. My question is that when the token is granted, I can see it in the Chrome developer tools and in the redirection URL — does that mean that if someone hacked my computer, they could use it too? I checked in Fiddler and I can't see the token there.
The web API has CORS implemented, and it works fine in browsers: requests from origins that are not listed are denied. The problem is that I retrieved the access token from Chrome and used Fiddler to compose a request, and that worked fine — it got around the CORS check and returned the results. I expected the request to be denied.
Thanks in advance!
A:
Yes. The access token is a bearer token, so anyone who obtains it — for example, from a compromised machine — can use it until it expires. HTTPS protects the token in transit, not at rest in the browser.
CORS only applies to browsers making AJAX requests. It is enforced by the browser, not the server, so a tool like Fiddler that composes requests directly bypasses it entirely; CORS is not an access-control mechanism for your API.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to add Virtual Hosting to two existing VM servers
I'm trying to run two Ubuntu VM's on one machine, each running a separate LAMP stack serving different websites.
Is it possible to use virtual hosting to accomplish this? I've read that it's possible, but everything I see involves a single machine. Can I use multiple machines (virtual or physical)? Can I add virtual hosting entries without hurting the existing stacks?
Each server functions properly separately.
A:
You can reach your objective with reverse proxying. It works like this:
You assign the external IP to one of your virtual machines.
In that virtual machine's Apache, you set up a virtual host which reverse-proxies all requests for the second site to the second virtual machine, via the internal VM network that both virtual machines are on.
The second virtual machine does not have an external IP address.
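A minimal sketch of the reverse-proxying virtual host on the first VM. The hostname and the second VM's internal IP are placeholders, and Apache's proxy modules must be enabled first (on Ubuntu: a2enmod proxy proxy_http):
<VirtualHost *:80>
    ServerName site2.example.com
    # hand everything for this hostname to the second VM on the internal network
    ProxyPreserveHost On
    ProxyPass / http://192.168.122.11/
    ProxyPassReverse / http://192.168.122.11/
</VirtualHost>
The first site keeps its own ordinary virtual host on the same Apache, matched by its own ServerName.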
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to configure CSRF in Spring without facing a CORS error
Following Spring Boot's issue #5834, in order to set up CORS properly and get rid of the error by allowing all origins, I have the following code:
@Configuration
@EnableWebSecurity
public class SecurityAdapter extends WebSecurityConfigurerAdapter
{
@Override
protected void configure(HttpSecurity http)
throws Exception
{
ExpressionUrlAuthorizationConfigurer<HttpSecurity>.ExpressionInterceptUrlRegistry authorizeRequests = http.authorizeRequests();
authorizeRequests.antMatchers("/logon_check").permitAll();
authorizeRequests.antMatchers("/logon").permitAll();
authorizeRequests.anyRequest().authenticated();
http
.csrf().csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse())
.and()
.cors()
.and()
.httpBasic()
.and()
.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.NEVER);
}
@Bean
public CorsConfigurationSource corsConfigurationSource() {
final CorsConfiguration configuration = new CorsConfiguration();
configuration.setAllowedOrigins(ImmutableList.of("*"));
configuration.setAllowedMethods(ImmutableList.of("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH"));
// setAllowCredentials(true) is important, otherwise:
// The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'.
configuration.setAllowCredentials(true);
// setAllowedHeaders is important! Without it, OPTIONS preflight request
// will fail with 403 Invalid CORS request
configuration.setAllowedHeaders(ImmutableList.of("Authorization", "Cache-Control", "Content-Type"));
final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
source.registerCorsConfiguration("/**", configuration);
return source;
}
}
And
@Configuration
public class WebConfig extends WebMvcConfigurerAdapter
{
@Override
public void addCorsMappings(CorsRegistry registry) {
registry.addMapping("/**")
.allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH");
}
}
But the OPTIONS preflight request returns 403:
XMLHttpRequest cannot load http://192.168.2.10:8080/logon_check. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://192.168.2.10:4200' is therefore not allowed access. The response had HTTP status code 403.
These are the request headers:
OPTIONS /logon_check HTTP/1.1
Host: 192.168.2.10:8080
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Access-Control-Request-Method: GET
Origin: http://192.168.2.10:4200
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 Safari/537.36
Access-Control-Request-Headers: x-requested-with
Accept: */*
Referer: http://192.168.2.10:4200/logon
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,fa;q=0.6
And response headers:
HTTP/1.1 403
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Length: 20
Date: Mon, 26 Jun 2017 23:56:06 GMT
Can someone help me configure Spring correctly so that all origins are allowed?
A:
I found a way to fix the problem, but I'm not sure whether it is the right way of doing it.
After a couple of hours tracing Spring's code, I realized that the problem is with the HTTP headers that are allowed in the request. Changing this line resolved the problem:
configuration.setAllowedHeaders(ImmutableList.of("Authorization", "Cache-Control", "Content-Type", "X-Requested-With", "X-XSRF-TOKEN"));
In the above line, I've added "X-Requested-With" and "X-XSRF-TOKEN" to the list of headers the request is allowed to carry. These two extra headers are the ones I needed. There might be other cases or browsers in which some other headers are needed, so a general fix could be:
configuration.setAllowedHeaders(ImmutableList.of("*"));
But again, I'm not sure whether this could be a security risk.
| {
"pile_set_name": "StackExchange"
} |
Q:
Xcode 6 custom launch screen
I am trying to make a custom launch screen based on Xcode 6's new feature, following the content given here. I don't understand how I can manage the orientation of the launch screen on iPad. My launch screen contains a horizontal rectangular image in the centre. Using size classes I can set different constraints and views for iPhone in landscape or portrait mode, but I am unable to do the same for iPad. Also, how can I lock the orientation of the launch screen when creating it using the above method?
A:
You cannot do this for the iPad, because both orientations fall under the regular-width, regular-height size class.
| {
"pile_set_name": "StackExchange"
} |
Q:
What does `SYNs to LISTEN sockets dropped` from `netstat -s` mean
I found 437 SYNs to LISTEN sockets dropped in the output of netstat -s on my server, which runs nginx.
I found this explanation in the man page: --statistics, -s — Display summary statistics for each protocol.
So what does the count 437 mean — is it a snapshot, or a count summed over some time period?
Thanks a lot.
A:
Nginx accepts connections very quickly, but in extremely high-traffic situations a connection backlog can still build up at the system level (which is a distinct bottleneck from the application-level connection handling). When this occurs, new connections will be refused.
"SYNs to LISTEN sockets dropped" is a symptom that your system is dropping those packets. Like the other figures from netstat -s, it is a cumulative counter since boot, not a snapshot. My advice is to first monitor Nginx's active connections using ngx_http_stub_status_module[1], then identify the current system-wide limits and adjust the kernel parameters accordingly.
The connection queue size can be increased by modifying the somaxconn and tcp_max_syn_backlog kernel variables. Please refer to these valuable resources[2][3] for more information.
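For example, on Linux these can be inspected and raised with sysctl — the values below are only illustrative, so tune them for your workload and persist them in /etc/sysctl.conf:
# inspect the current values
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
# raise them on the running system
sudo sysctl -w net.core.somaxconn=4096
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096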
[1] https://nginx.org/en/docs/http/ngx_http_stub_status_module.html
[2] http://engineering.chartbeat.com/2014/01/02/part-1-lessons-learned-tuning-tcp-and-nginx-in-ec2/
[3] https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I host a Python server script on a cloud-hosted server (Google, Amazon, Azure...)?
Let me start by answering the obvious. Why would I want to do this? Actually, I don't. I made the program below, but it's designed to run on a remote server. I'm basically using the socket library, but I want to host it on another machine, preferably Google, Amazon, Azure, etc. But, as I suspected before I tried, this isn't directly possible. Google App Engine gave me an error like "access denied to socket blah blah blah".
I feel I'm left with 2 options:
I can continue to run this code on my own servers, if I can figure out how to host this server script on a cloud-based server, or I could take every part of the code that isn't the server portion and make it get the "data" from the clients via POST requests.
the data is what's sent from the client...
import datetime
import json
import socket

bap = {}
while 1:
    # accept a single check-in per loop iteration
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_address = ("localhost", 8081)
    #print 'starting up on %s port %s' % server_address
    server.bind(server_address)
    server.listen(5)
    connection, client_address = server.accept()
    #print 'connection from', connection.getpeername()
    server.close()
    data = connection.recv(4096)
    if data in bap:
        print data + " is checking in!!!"
        for k, v in bap.iteritems():
            if k == data:
                # reset the counter for the client that just checked in
                bap[k] = 10
                print bap
                c = open('check.json', "w")
                wiz = json.dumps(bap)
                c.write(wiz)
                c.close()
            else:
                # every other client loses a point per check-in received
                bap[k] -= 1
                if bap[k] < 0:
                    print k + " is Offline!!!"
                    mail()  # mail() is defined elsewhere in the full script
                    c = open('log.txt', "a")
                    wiz = json.dumps(bap)
                    time1 = datetime.datetime.now().strftime("%m/%d/%y %H:%M ")
                    c.write(k + " is offline!!! " + time1 + "\n")
                    c.close()
                else:
                    print bap
    else:
        bap[data] = 10
        print data + " was added!!!"
A:
Running a Python script that listens on external ports can be done on Amazon EC2.
1) Create an EC2 instance using the AWS Management Console.
2) Edit the Security Group associated with your instance so that it opens the port number your Python script will be listening on.
3) Upload and run your script on the EC2 instance. Be sure the port number your script listens on is the same one you opened in the Security Group.
If you SSH into your EC2 instance, you may want to run the Python script in a background process using something like tmux; otherwise, your script will stop running when you terminate the SSH connection.
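Two practical details worth noting. First, the script in the question binds to "localhost", which only accepts connections from the instance itself; to accept external connections on EC2 it would need to bind to "0.0.0.0". Second, a minimal tmux workflow looks like this (the session and file names are placeholders):
# start a named session and run the script inside it
tmux new -s heartbeat
python server.py
# detach with Ctrl-b d; later, reattach with:
tmux attach -t heartbeat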
| {
"pile_set_name": "StackExchange"
} |
Q:
Query to identify changes to a set of data?
I have two reports: one generated three weeks ago, the other a few days ago. These reports share the same fields (Last Name, First Name, SSN, etc.), but the data is obviously different. We've edited particular fields: a customer's SSN may have been corrected after a data-entry mistake, or their address information might have been updated.
We've yet to install an audit table as part of our database, so we need to determine another way to identify changes between the reports. There are a couple of glaring issues.
1) When we exported the data into Excel, the record ID was not retained. A single customer may have multiple records to account for different addresses, so their SSN wouldn't be unique.
2) We don't only need to identify the changes. We need to categorize them. If there was a change made to SSN, report that separately than changes made to address.
So, if I were to import the two files into a database, is there a query that can fix this for me? My team and I have brainstormed on this for some time now and we've thought of nothing. Without a unique record ID, on which field(s) are we meant to link?
If you'd like me to be more specific as to any part of the question, please let me know and I'll do whatever I can to help you assist me.
The reports are in an Excel file that was exported from a query ran in MS Access 2013.
Thank you.
A:
So you have no PK on your report, as you lost it when exporting to Excel. Assuming you cannot recover it, you are out of luck unless there is something else that can serve as a PK.
What I would do if I were you is import both of the report files as separate tables:
table_old
table_new
Then I'd do a minus statement to eliminate exact copies:
select * from table_old
minus
select * from table_new
This will show you records that existed in your old report but have either been deleted or modified. You could then do the converse:
select * from table_new
minus
select * from table_old
This will give you records that are new or have been modified.
That being said, the only real way to categorize the records would be to incorporate the SSN to this. Having no PK will make this a very inexact science, but in losing the PK you're going to have to live with this being close instead of exact.
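One caveat: minus is Oracle syntax, and since these reports came out of MS Access 2013, which supports neither MINUS nor EXCEPT, the equivalent there is an unmatched-records query with NOT EXISTS. A sketch, with the column names assumed from the fields mentioned in the question — extend the WHERE clause to every column you want compared:
SELECT *
FROM table_old AS o
WHERE NOT EXISTS (
    SELECT 1
    FROM table_new AS n
    WHERE n.LastName = o.LastName
      AND n.FirstName = o.FirstName
      AND n.SSN = o.SSN
);
Swap table_old and table_new to get the converse.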
| {
"pile_set_name": "StackExchange"
} |
Q:
Making a switch statement on my partials angularjs
I'm having a problem: I'm unable to get my ng-switch to work in my partial. What I'm trying to do is upload an image, but before I upload it I must first check that the selected image's size doesn't exceed 25 KB.
Here is my controller code:
$scope.validateAvatar = function(files) {
var fd = new FormData();
$scope.filesize = files[0].size;
$scope.filemaxsize = 25;
//Take the first selected file
fd.append("file", files[0]);
$scope.uploadAvatar = function() {
Api.uploadAvatar($scope.main.user.email, fd)
.then(function(result) {
console.log(result.data);
}, function(result) {
console.log(result);
})
};
};
and my partials code:
<form data-ng-submit="uploadAvatar()">
<input type="file" name="file" onchange="angular.element(this).scope().validateAvatar(this.files)"/>
<div ng-switch on="filesize / 1024 < filemaxsize">
<div ng-switch-when="true">
<input type="submit" value="Upload Avatar">
</div>
<div ng-switch-default>
Please choose you avatar.
</div>
</div>
</form>
Also, do you think my validation is enough when checking the image file size? Say the selected image is 20 MB — will my validation still catch it? Sorry, I haven't been able to get the switch statements to work in the first place. :(
A:
You need to call $scope.$apply() after you change $scope.filesize manually, because the native onchange handler runs outside Angular's digest cycle.
BTW, why not just set $scope.filemaxsize = 25 * 1024;
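A minimal sketch of the handler with that fix applied (the rest of the function stays as in the question):
$scope.validateAvatar = function (files) {
    // the native onchange handler fires outside Angular's digest cycle,
    // so wrap the scope assignments in $apply to trigger a re-render
    $scope.$apply(function () {
        $scope.filesize = files[0].size;
        $scope.filemaxsize = 25;
    });
    // ...build the FormData and define uploadAvatar as before
};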
| {
"pile_set_name": "StackExchange"
} |
Q:
Help with solving a grid problem
I have a grid menu with several items. The first and last items need to span the full 100% width, and the rest should each take 1/2 (as they currently do in the screenshots).
Current CSS:
display: grid
width: 100%
grid-template-columns: repeat(2, 1fr)
grid-template-rows: repeat(4, 1fr)
grid-row-gap: resize(12)
grid-column-gap: resize(15)
A:
.menu {
display: grid;
width: 100%;
grid-template-columns: repeat(2, 1fr);
grid-template-rows: repeat(5, 1fr);
grid-template-areas:
"first first"
"second third"
"fourth fifth"
"sixth seventh"
"eighth eighth";
grid-row-gap: resize(12);
grid-column-gap: resize(15);
}
The grid-template-areas property lets you lay out the grid using arbitrary names for the blocks on the page. You just need to arrange these names across the columns/rows to match the sizes given in grid-template-rows and grid-template-columns, and then assign each block its own name, following this example:
grid-area: first;
It's worth reading more about this property online.
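For completeness, the per-item assignments might look like this — the child class names are placeholders for however the menu items are actually marked up:
.menu > .item-1 { grid-area: first; }
.menu > .item-2 { grid-area: second; }
/* ...third through seventh... */
.menu > .item-8 { grid-area: eighth; }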
| {
"pile_set_name": "StackExchange"
} |