date | nb_tokens | text_size | content
---|---|---|---|
2018/03/20 | 323 | 1,168 | <issue_start>username_0: I declared `loginObj` in login.component.ts as below:
```
public loginObj: Object = {
email:'',
password:''
};
public registerObj: Object = {
email:'',
name:'',
password:''
};
```
HTML
```
```<issue_comment>username_1: The error is right: this property does not exist. You need to create an interface:
```
export interface LoginObject {
email:string;
password:string;
}
```
and then import it into your component and declare your object like this:
```
public loginObj: LoginObject = {
email:'',
password:''
};
```
You can even try to declare it just like this
```
public loginObj: LoginObject;
```
and it will work for you
Upvotes: 3 <issue_comment>username_2: Make the type `any` instead of `Object`, or define an interface and make it the type.
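That suggestion in code, using the field names from the question (a sketch, not the asker's actual component):

```typescript
// An interface gives the object an explicit shape instead of the opaque `Object` type.
interface LoginObject {
  email: string;
  password: string;
}

const loginObj: LoginObject = { email: '', password: '' };

// The `any` alternative also compiles, but the compiler can no longer catch typos.
const registerObj: any = { email: '', name: '', password: '' };

console.log(loginObj.email, registerObj.name);
```

With the interface in place, accessing `loginObj.email` type-checks, and a misspelled property name becomes a compile-time error rather than a template-binding surprise.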
Upvotes: 4 [selected_answer]<issue_comment>username_3: I got a similar error while building it in Jenkins. The following command resolved the issue:
```
npm install
npm run ng build --prod
```
Hope it helps
Upvotes: 0 <issue_comment>username_4: try adding this dependency in your package.json:
"@angular/compiler-cli":"9.0.0 (or other version you used)"
Upvotes: 0 |
2018/03/20 | 577 | 1,950 | <issue_start>username_0: I have a JavaScript function that takes a single argument. I want to use that argument value, which is a string, to return the appropriate value from the matched object key.
```
function someFunction(someArg) {
var message = {
bob: "<NAME>",
mike: "Hello mike",
tara: "Hello tara"
}
console.log(message + " " + message.someArg + " " + someArg + " " + message.bob);
}
```
what is returned is
```
[object Object] undefined bob Hello bob
```
Where undefined is returned in the console log, JavaScript should return the message "Hello bob" as the value of someArg is "bob", calling message.bob returns the correct result.<issue_comment>username_1: You have to use the [] notation, where obj[key] is the same as obj.key, but key can be a variable.
```js
function someFunction(someArg) {
var message = {
bob: "<NAME>",
mike: "Hello mike",
tara: "Hello tara"
}
console.log(JSON.stringify(message) + " " + message[someArg] + " " + someArg + " " + message.bob);
}
someFunction("mike");
```
Upvotes: 0 <issue_comment>username_2: When using `message.someArg` you are asking for a property literally named `someArg` on your message object.
What you have to use is `message[someArg]` to get the property dynamically.
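The difference in a small standalone snippet:

```javascript
const message = { bob: "Hello bob", mike: "Hello mike" };
const someArg = "bob";

// Dot notation looks up a property literally named "someArg":
console.log(message.someArg); // undefined

// Bracket notation evaluates the variable first, so it looks up "bob":
console.log(message[someArg]); // Hello bob
```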
Upvotes: 1 [selected_answer]<issue_comment>username_3: To print it properly, you'll have to:
* Stringify the message object
* Refer to the property of message in a correct manner
Try this
```
function someFunction(someArg) {
var message = {
bob: "Hello bob",
mike: "Hello mike",
tara: "Hello tara"
}
//ES6
console.log(`${JSON.stringify(message)} ${message[someArg]} ${someArg} ${message.bob}`);
//ES5
console.log(JSON.stringify(message) + " " + message[someArg] + " " + someArg + " " + message.bob);
}
```
Now, on calling *someFunction('bob')*, the output is:
```
{"bob":"Hello bob","mike":"Hello mike","tara":"Hello tara"} Hello bob bob Hello bob
```
Upvotes: 2 |
2018/03/20 | 544 | 1,798 | <issue_start>username_0: I'm working on a mailing and I'm having problems with setting a width in a table element in IE/ Outlook.
I have tried several things which I've seen in other questions but none of them seems to work.
The code is this, it includes some solutions I've tried. The div which wraps the table is used for other styling necessities.
```
* {
margin:0;
padding:0;
}
body{
box-sizing: border-box;
}
table{
border-collapse: collapse;
border-spacing: 10px 5px;
}
```
|
2018/03/20 | 959 | 3,141 | <issue_start>username_0: ```
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
public class MyMap {
public static void main (String[] args) {
Map cars = new HashMap<> ();
cars.put ("ID1", new Car("Dave", "LT12 DDS"));
cars.put ("ID2", new Car("Steve", "GB14 HHG"));
cars.put ("ID3", new Car("Molly", "LT18 SDF"));
System.out.println ("Car with ID1 is " + cars.get ("ID1"));
}
}
```
I can get the details from the map without using the scanner class.
I would like to have a user input the ID and get the results from the HashMap
```
class Car {
public Car (String name, String barcode) {
this.name = name;
this.barcode = barcode;
}
public String toString () {
return "Car: " + name + " (" + barcode + ")";
}
public final String name;
public final String barcode;
}
```
This works until I try to add the scanner part. I want the user to enter the ID and then the results to be retrieved from the HashMap:
```
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;
public class MyMap {
public static void main (String[] args) {
Map cars = new HashMap<> ();
cars.put ("ID1", new Car("Dave", "LT12 DDS"));
cars.put ("ID2", new Car("Steve", "GB14 HHG"));
cars.put ("ID3", new Car("Molly", "LT18 SDF"));
Scanner ab=new Scanner(System.in);
System.out.println("Enter ID: ");
int id=ab.nextInt();
//user input should get details from HashMap??
System.out.println ("Car with ID1 is " + cars.get (int id));
//System.out.println ("Car with ID1 is " + cars.get ("ID1"));
}
}
//the Car class
class Car {
public Car (String name, String barcode) {
this.name = name;
this.barcode = barcode;
}
```
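A minimal sketch of the fix: the map keys are strings like `"ID1"`, so the value typed by the user has to be turned into that string key. The class and helper below are hypothetical; in the question's code you would read `String id = ab.next();` instead of `int id = ab.nextInt();`:

```java
import java.util.HashMap;
import java.util.Map;

public class MyMapLookup {
    static final Map<String, String> cars = new HashMap<>();
    static {
        cars.put("ID1", "Car: Dave (LT12 DDS)");
        cars.put("ID2", "Car: Steve (GB14 HHG)");
        cars.put("ID3", "Car: Molly (LT18 SDF)");
    }

    // Looks up a car by its string key, with a fallback for unknown IDs.
    static String lookup(String id) {
        return cars.getOrDefault(id, "no car with ID " + id);
    }

    public static void main(String[] args) {
        // With a Scanner this would be: String id = ab.next();
        String id = "ID1";
        System.out.println("Car with " + id + " is " + lookup(id));
    }
}
```

If the user should only type the number (`1`, `2`, ...), keep `nextInt()` and build the key with `"ID" + ab.nextInt()`.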
I am struggling with getting the Hash Map to get the details when the user input supplies the ID number. |
2018/03/20 | 512 | 1,901 | <issue_start>username_0: I tried changing the replication factor from 3 to 1 and restarting the services, but the replication factor remains the same.
Can anyone suggest how I can change the replication factor of existing files?
**This is the fsck report:**
```
Minimally replicated blocks: 45 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 45 (100.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 2.0
Corrupt blocks: 0
Missing replicas: 45 (33.333332 %)
DecommissionedReplicas: 45
Number of data-nodes: 2
Number of racks: 1
```<issue_comment>username_1: For anyone who is facing the same issue, just run this command:
```
hdfs dfs -setrep -R 1 /
```
When the blocks are under-replicated and you change the replication factor from 3 to 1 (or make any other change), the change applies only to new files created in HDFS afterwards, not to the old ones.
You have to change the replication factor of the old files yourself.
Upvotes: 3 [selected_answer]<issue_comment>username_2: There are two scenarios in changing the replication factor for a file in hdfs:
1. When the file is already present: in that case you need to go to that particular file or directory and change the replication factor. For changing the replication factor of a directory:
```
hdfs dfs -setrep -R -w 2 /tmp
```
OR for changing replication factor of a particular file
```
hdfs dfs -setrep -w 3 /tmp/logs/file.txt
```
2. When you want this change of replication factor to apply to new files that are not present currently and will be created in the future: for them you need to go to hdfs-site.xml and change the replication factor there:
```
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```
Upvotes: 0 |
2018/03/20 | 1,168 | 4,666 | <issue_start>username_0: Could you explain why:
* when I access an array value using array.first it's optional
* when I access from an index value it is not?
Example:
```
var players = ["Alice", "Bob", "Cindy", "Dan"]
let firstPlayer = players.first
print(firstPlayer) // Optional("Alice")
let firstIndex = players[0]
print(firstIndex) // Alice
```<issue_comment>username_1: This is because with `first`, if the Array is empty, the value will be `nil`. That is why it is an optional. If it is not empty, the first element will be returned.
However, with a subscript (or index value), your program will crash with an error
>
> fatal error: Index out of range
>
>
>
if the index is out of range (or the array is empty), rather than returning an optional. Otherwise, it will return the required element.
Upvotes: 2 <issue_comment>username_2: This is the default behavior of the array property. Array is a generic type over Element. When you try to access `first`, it returns an optional.
```
public var first: Element? { get }
```
This is how it is declared in the Array class.
Upvotes: 0 <issue_comment>username_3: The behavior of `first` and the *index subscript* is different:
* `first` is declared safely: if the array is empty it returns `nil`, otherwise the (optional) object.
* the *index subscript* is unsafe for legacy reasons: if the array is empty it raises an out-of-range runtime error, otherwise it returns the (non-optional) object
Upvotes: 3 [selected_answer]<issue_comment>username_4: if you want to use subscripts without risking a crash, you can add this extension to your code:
```
extension Collection {
subscript (safe index: Index) -> Iterator.Element? {
return indices.contains(index) ? self[index] : nil
}
}
```
and then use it:
```
let array = [0, 1, 2]
let second = array[safe:1] //Optional(1)
let fourth = array[safe:3] //nil instead of crash
```
Upvotes: 3 <issue_comment>username_5: (The short answers to this question are great, and exactly what you need. I just wanted to go a bit deeper into the why and how this interacts with Swift Collections more generally and the underlying types. If you just want "how should I use this stuff?" read the accepted answer and ignore all this.)
Arrays follow the rules of all Collections. A Collection must implement the following subscript:
```
subscript(position: Self.Index) -> Self.Element { get }
```
So to be a Collection, Array's subscript must accept its Index and unconditionally return an Element. For many kinds of Collections, it is impossible to create an Index that does not exist, but Array uses Int as its Index, so it has to deal with the possibility that you pass an Index that is out of range. In that case, it is impossible to return an Element, and its only option is to fail to return at all. This generally takes the form of crashing the program since it's generally more useful than hanging the program, which is the other option.
(This hides a slight bit of type theory, which is that every function in Swift technically can return "crash," but we don't track that in the type system. It's possible to do that to distinguish between functions that can crash and ones that cannot, but Swift doesn't.)
This should naturally raise the question of why Dictionary doesn't crash when you subscript with a non-existent key. The reason is that Dictionary's Index is not its Key. It has a little-used subscript that provides conformance to Collection (little-used in top-level code, but very commonly used inside of stdlib):
```
subscript(position: Dictionary.Index) -> Dictionary.Element { get }
```
Array could have done this as well, having an `Array.Index` type that was independent of Int, and making the Int subscript return an Optional. In Swift 1.0, I opened a radar to request exactly that. The team argued that this would make common uses of Array too difficult and that programmers coming to Swift were used to the idea that out-of-range was a programming error (crash). Dictionary, on the other hand, is commonly accessed with non-existent keys, so the Key subscript should be Optional. Several years using Swift have convinced me they were right.
In general you shouldn't subscript arrays unless you got the index from the array (i.e. using `index(where:)`). But many Cocoa patterns make it very natural to subscript (`cellForRow(at:)` being the most famous). Still, in more pure Swift code, subscripting with arbitrary Ints often suggests a design problem.
Instead you should often use Collection methods like `first` and `first(where:)`, which return Optionals and are generally safer and clearer, and iterate over collections using `for-in` loops rather than subscripts.
Upvotes: 3 |
2018/03/20 | 1,329 | 5,526 | <issue_start>username_0: I have developed a demo app with the Square payment SDK using sandbox credentials to take payments.
Every time I am getting a failure error and it's saying:
```
Error Domain=SCCAPIErrorDomain Code=6 "User not activated. Please visit https://squareup.com/activate. The logged-in account cannot take credit card payments. This could be because the account is from a country where Square does not process payments, because the account did not complete the initial activation flow, or because it has been deactivated for security reasons." UserInfo={error_code=user_not_active, NSLocalizedDescription=User not activated. Please visit https://squareup.com/activate. The logged-in account cannot take credit card payments. This could be because the account is from a country where Square does not process payments, because the account did not complete the initial activation flow, or because it has been deactivated for security reasons.}
```
As I am from India, it might be restricting me from testing. It takes geolocation internally from their POS app and it's out of my hands to look into it.
Is there any way I can test the payment flow? So that I can integrate the same into my live App. |
2018/03/20 | 1,145 | 4,620 | <issue_start>username_0: I need the background image to be horizontally centered, but vertically the bottom of the image should always be at y=350px.
I have tried with calc(100% - 350px) but that isn't the right answer. The problem is that it is calculating from the top of the image instead of the bottom. |
2018/03/20 | 906 | 2,965 | <issue_start>username_0: I would like to tell SQL-Server that a number is of a given type. Here is a little example:
```
drop table if exists test
declare @i int = 100000
SELECT 4 pwr, cast(10000 * @i as bigint) val into test
insert test SELECT 5, cast(100000 * @i as bigint)
insert test SELECT 6, cast(1000000 * @i as bigint)
insert test SELECT 7, cast(10000000 * @i as bigint)
insert test SELECT 8, cast(100000000 * @i as bigint)
insert test SELECT 9, cast(1000000000 * @i as bigint)
insert test SELECT 10, cast(10000000000 * @i as bigint)
select * from test
```
The table only contains the rows with pwr 4 and 10. SQL Server seems to give a numeric literal a datatype depending on its value: high values are converted to numeric, smaller ones to int.
Casting to a type is a way to handle this.
```
insert test SELECT 6, cast(cast(1000000 as bigint) * @i as bigint) test
```
Is there a way to tell SQL-Server the datatype of a constant other than casting it every time you use it?
Online-Help is unclear here (<https://learn.microsoft.com/en-us/sql/t-sql/data-types/constants-transact-sql>)
as it says 'decimal constants are represented by a string of numbers that are not enclosed in quotation marks and contain a decimal point.'. It should say '.. or represented by a numeric value that does not fit into the int range.'<issue_comment>username_1: No, there is not, if you don't count implicit conversions (i.e. `DECLARE @x TINYINT = 3`). Some types do have separate ways of writing literals (numeric, floating-point, binary, etc.), but this is not the case for the various integral types. The only way to have them distinguished is to explicitly convert constants (which would otherwise by default be of either `INT` or `NUMERIC` type).
You are correct that the documentation is lacking in this regard. Without a decimal point, the type depends on whether it'll fit in an `INT`:
```
SELECT SQL_VARIANT_PROPERTY(CONVERT(SQL_VARIANT, 0), 'BaseType')
-- int
SELECT SQL_VARIANT_PROPERTY(CONVERT(SQL_VARIANT, 2147483648), 'BaseType')
-- numeric
```
There is no way to get `BIGINT` out of this other than a conversion.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You are casting too late. It is as easy as:
```
declare @i bigint = 100000;
```
That cast is happening after the multiplication.
The last works because 10000000000 is bigint.
First works because the multiplication result is still int.
Even with `declare @i int = 100000;` this works.
```
insert #test SELECT 5, cast(100000 as bigint) * @i;
```
This has NOTHING to do with decimal.
10000 is an integer, so integer math is done and the result still fits in an integer.
100000 is an integer, so integer math is done; the problem is that the result no longer fits in an integer.
10000000000 is too large for an integer, so wider math is done.
Per correction from Jeroen, 10000000000 is decimal, not bigint. Decimal math is valid, and the result is then successfully cast to bigint.
Upvotes: 0 |
2018/03/20 | 630 | 2,213 | <issue_start>username_0: Guys, I'm new to LoopBack so I don't know how to do this. Here is my code:
```
module.exports = function(Customer) {
Customer.beforeRemote('create', function (ctx, user, next) {
var companyprofile=app.models.customerdetail
companyprofile.create(ctx.args.data.cust_cmp_profile,user,next)// is this possible to pass selected value to model customerdetail
console.log(ctx);
});
};
```
Both models are unrelated, so I call the customerdetail model in Customer. Here I am trying to use the create method of customerdetail, but I don't know how to do that. I searched a lot but haven't found anything on how I can do this. |
Upvotes: 0 |
2018/03/20 | 556 | 2,003 | <issue_start>username_0: We have a website deployed as an Azure Web App, sharing the same App Service plan with a Function App. The website just has two HTML pages; one has a button whose click calls a JSON service exposed by the Function App. Since it is plain HTML, it just embeds JavaScript to carry out the JSON call.
The question is: how do we restrict access to the Function App so it can only be called from within those HTML pages?
Thanks |
2018/03/20 | 557 | 1,923 | <issue_start>username_0: I am new to Python and boto3. Pardon me if this is not the right place to ask.
Can anyone help me write a script to fetch all the **users who haven't activated their MFA device**? I can get the same from the credentials report, but I want to fetch the info using a script.
Thanks<issue_comment>username_1: Try this:
```
import boto3
client = boto3.client('iam')
iam_users = []
response = client.list_users()
for user in response['Users']:
iam_users.append(user['UserName'])
while 'Marker' in response:
response = client.list_users(Marker=response['Marker'])
for user in response['Users']:
iam_users.append(user['UserName'])
no_mfa_users = []
for iam_user in iam_users:
response = client.list_mfa_devices(UserName=iam_user)
if not response['MFADevices']:
no_mfa_users.append(iam_user)
```
`no_mfa_users` array will contain a list of IAM Users without MFA enabled.
boto3 reference can be found [here](http://boto3.readthedocs.io/en/latest/reference/services/iam.html#IAM.Client.list_mfa_devices).
Upvotes: 2 <issue_comment>username_2: This is a Lambda function that helps you find the users who haven't activated MFA on their account. An SNS client is also created, so you can publish the resulting list to a topic of your choice.
```
import boto3

client = boto3.client('iam')
sns = boto3.client('sns')

response = client.list_users()

def lambda_handler(event, context):
    mfa_users = []
    for user in response['Users']:
        # list_mfa_devices returns the MFA devices assigned to this user;
        # an empty list means the user has not enabled MFA.
        userMfa = client.list_mfa_devices(UserName=user['UserName'])
        if len(userMfa['MFADevices']) == 0:
            mfa_users.append(user['UserName'])
    print(mfa_users)
```
Upvotes: 0 |
2018/03/20 | 734 | 2,269 | <issue_start>username_0: This is what I have tried so far (Oracle SQL):
```
UPDATE table1
SET table1.ADDRESS =
(SELECT table2.ADDRESS
FROM table2 INNER JOIN table1 ON table1.ID = table2.ID
WHERE table1.ADDRESS <> table2.ADDRESS
AND table1.DATE BETWEEN TO_DATE ('9999-12-31')
AND TO_DATE ('9999-01-21'));
```
(The dates I picked are random)
>
> Getting ORA-1427 Error - single-row subquery returns more than one
> row..
>
>
><issue_comment>username_1: You used the `=` operator instead of `IN`. `IN` accepts multiple values, whereas `=` accepts a single value.
Your query returns more than one value.
Upvotes: -1 <issue_comment>username_2: The problem is that your subquery
```
SELECT table2.ADDRESS
FROM table2
INNER JOIN table1 ON table1.ID = table2.ID
WHERE table1.ADDRESS <> table2.ADDRESS
AND table1.DATE BETWEEN TO_DATE('9999-12-31') AND TO_DATE('9999-01-21')
```
returns more than one row (as the error text states). You either have to add a `WHERE ROWNUM <= 1` or some other condition to reduce the selected rows to exactly one.
Upvotes: 0 <issue_comment>username_3: Plain `UPDATE`:
```
update table1
set table1.address =
( select table2.address
from table2
where table2.id = table1.id
and table2.address <> table1.address )
where table1.date between date '9999-01-21' and date '9999-12-31'
and exists
( select 1
from table2
where table2.id = table1.id
and table2.address <> table1.address );
```
Updateable inline view (requires a unique index or constraint on `table2.id`):
```
update ( select t1.address as old_address, t2.address as new_address
from table1 t1
join table2 t2 on t2.id = t1.id
where t2.address <> t1.address
and t1.date between date '9999-01-21' and date '9999-12-31' )
set old_address = new_address;
```
Update-only `MERGE`:
```
merge into table1 t1
using table2 t2
on ( t2.id = t1.id
and t2.address <> t1.address
and t1.date between date '9999-01-21' and date '9999-12-31' )
when matched then update set t1.address = t2.address;
```
Upvotes: 0 |
2018/03/20 | 906 | 2,783 | <issue_start>username_0: I use sql server an i try to run the query below:
```
declare @ProductID int
set @ProductID=322
INSERT INTO table_Customer (ProductID,Type,CompanyID)
SELECT @ProductID,' ',CompanyID
FROM table_Companies
WHERE CompanyActive=1
```
And i get the error below:
>
> Msg 2627, Level 14, State 1, Line 19
>
>
> Violation of UNIQUE KEY constraint 'IX\_table\_Customer'. Cannot insert
> duplicate key in object 'dbo.table\_Customer'. The duplicate key value
> is (322, , , , , , ).
>
>
>
Find below table's definition.
```
CREATE TABLE [dbo].table_Customer(
    [CustomerID] [int] IDENTITY(1,1) NOT NULL,
    [ProductID] [int] NOT NULL,
    [Type] [nvarchar](5) NULL,
    [CompanyID] [int] NULL,
    [RoleID] [int] NULL,
    CONSTRAINT [PK_table_Customer] PRIMARY KEY CLUSTERED (
        [CustomerID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY],
    CONSTRAINT [IX_table_Customer] UNIQUE NONCLUSTERED (
        [ProductID] ASC,
        [CompID] ASC,
        [CompManager] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
        ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
```
Can someone help me?<issue_comment>username_1: It's saying that you already have a unique key of **322** in the customer table
```
Select * from table_Customer where productID = 322;
```
Upvotes: 0 <issue_comment>username_2: You are possibly inserting *multiple* rows as they are being returned from the `SELECT`:
```
INSERT INTO table_Customer (ProductID,Type,CompanyID)
SELECT @ProductID, ' ', CompanyID
FROM table_Companies
WHERE CompanyActive = 1 -- How many rows does this produce...?
```
Note that `@ProductID` is a FIXED value in this query, so it will be the same (322) for all selected-and-then-inserted rows.
Your index is complaining that you can't insert rows having the same set of index values. So either there is already a record with those values, or the `SELECT` produces duplicates, or both of these combined.
Upvotes: 2 <issue_comment>username_3: The query you used will *repeat* the same `@ProductID` value for every active company in the `table_Companies` table.
```
SELECT @ProductID,' ',CompanyID
FROM table_Companies
WHERE CompanyActive=1
```
If the index contains the ProductID column only, or at least it *doesn't* contain the CompanyID field as well, this will result in duplicate ProductID entries.
Upvotes: 3 [selected_answer]<issue_comment>username_4: ProductID is probably an identity. If it is really a must, remove it and add it back when finished. But clearly you already have data for id 322
Upvotes: 0 |
2018/03/20 | 408 | 1,265 | <issue_start>username_0: Is there any difference between the two methods for the Ruby Hash, or is it just "there is more than 1 way to do it"?
I could not see any measurable difference between the two.
Thanks,<issue_comment>username_1: According to the [Ruby source](https://github.com/ruby/ruby/blob/798316eac260e3cd683da2be23fef53ee64cee00/hash.c#L4710), `has_key?`, `key?`, `include?` and `member?` are the same implementation.
Upvotes: 3 <issue_comment>username_2: To see a method's source code, open the documentation, find the method you're looking for, then click on it to expand the actual source code:
<https://ruby-doc.org/core-2.5.0/Hash.html#method-i-member-3F>
```
rb_hash_has_key(VALUE hash, VALUE key)
{
if (!RHASH(hash)->ntbl)
return Qfalse;
if (st_lookup(RHASH(hash)->ntbl, key, 0)) {
return Qtrue;
}
return Qfalse;
}
```
<https://ruby-doc.org/core-2.5.0/Hash.html#method-i-has_key-3F>
```
rb_hash_has_key(VALUE hash, VALUE key)
{
if (!RHASH(hash)->ntbl)
return Qfalse;
if (st_lookup(RHASH(hash)->ntbl, key, 0)) {
return Qtrue;
}
return Qfalse;
}
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 734 | 2,518 | <issue_start>username_0: I have a table with two columns: "users" has a full text index, "x" is a simple int column. The table contains just under 2 million entries. Using `match...against` to select rows containing a certain user returns quickly.
Searching by the value of x (which is not indexed) returns in ~3 seconds.
However, when I combine the two the query takes ~9 seconds! If anything, I'd expect the combined query to take far less time since the full text index cuts the possible rows by an order of magnitude. Even forgetting the full text index and using `like "%___%"` is faster!
What's going on here? How can I fix it?
The mySQL output is included below:
```
mysql> desc testing;
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| users | varchar(120) | YES | MUL | NULL | |
| x | int(11) | YES | | NULL | |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
mysql> select count(*) from testing;
+----------+
| count(*) |
+----------+
| 1924272 |
+----------+
1 row in set (3.56 sec)
mysql> select count(*) from testing where match(users) against("shy");
+----------+
| count(*) |
+----------+
| 149019 |
+----------+
1 row in set (0.42 sec)
mysql> select count(*) from testing where x>0;
+----------+
| count(*) |
+----------+
| 1924272 |
+----------+
1 row in set (3.62 sec)
mysql> select count(*) from testing where match(users) against("shy") and x>0;
+----------+
| count(*) |
+----------+
| 149019 |
+----------+
1 row in set (8.82 sec)
mysql> select count(*) from testing where users like "%shy%" and x>0;
+----------+
| count(*) |
+----------+
| 149019 |
+----------+
1 row in set (3.57 sec)
```<issue_comment>username_1: Consider using a subquery e.g.
```
select count(*) from (
select *
from testing
where match(users) against("shy")
) shy_results
where x>0;
```
Upvotes: 0 <issue_comment>username_2: Always check the value of your `innodb_buffer_pool_size` and adjust it according to your system's capabilities and software requirements. This means don't give MySQL more RAM than you have :)
If the index doesn't fit the memory, MySQL will read it off the disk, constraining you to hard drive's speed. If you're on SSD, this can be ok-ish, but on mechanical drives it's slow as snail.
Indexes aren't as useful if they can't fit into RAM.
Upvotes: 2 [selected_answer] |
2018/03/20 | 548 | 1,539 | <issue_start>username_0: Right now I have this code. I am intending to write code that calculates the number of days between today and January 1 of this year.
As you can see in the output below, it prints the number of days and the time.
How can I rewrite the code so that it says just '78', not '78 days, 21:04:08.256440'?
```
from datetime import datetime
Now = datetime.now()
StartDate = datetime.strptime(str(Now.year) +'-01-01', '%Y-%m-%d')
NumberOfDays = (Now - StartDate)
print(NumberOfDays)
#Output: 78 days, 21:04:08.256440
```<issue_comment>username_1: Here's a working [Fiddle](https://pyfiddle.io/fiddle/c8da25e1-ba0f-4323-b935-30238b5e1ff4/?i=true).
As jpp commented on your question, you have to use `print(NumberOfDays.days)`.
But be careful: in your solution it returns 78 (on the **20/03/2018**) but it is the 79th day (starting from 1).
Another simpler way to do it is : `print(datetime.now().timetuple().tm_yday)`
And another even simpler way to do it : `print(Now.strftime('%j'))`
```
from datetime import datetime
Now = datetime.now()
StartDate = datetime.strptime(str(Now.year) +'-01-01', '%Y-%m-%d')
NumberOfDays = (Now - StartDate)
print(NumberOfDays.days) # 78
print(datetime.now().timetuple().tm_yday) # 79
print(Now.strftime('%j')) # 079
```
Upvotes: 1 <issue_comment>username_2: ```
import datetime
today = datetime.date.today()
first_day = datetime.date(year=today.year, month=1, day=1)
diff = today - first_day
print(diff.days)
78
```
Upvotes: 0 |
2018/03/20 | 963 | 2,790 | <issue_start>username_0: I have two datetime pickers (from, to). I need to get difference between FROM and TO dates in minutes (1505 min) and in day and time (2 day 1h 35min).
I use moment.js
```
var now = moment('2018-03-28 14:02');
var end = moment('2018-06-02 00:00'); // another date
var duration = moment.duration(end.diff(now));
var days = duration.asDays();
console.log(days) //65.41527777777777
```
Output here is `65.41527777777777` where `65` is correct days, but how to convert `41527777777777` to hours and minutes.
If I make this `0,41527777777777 * 24 = 9,96666666648` i
get 9 hours, and again `0,96666666648 * 60 ≈ 58`, and this is the correct difference:
```
65 day, 9 hour and 58 min
```
But, is there any way to do this directly with moment.js?
Thank you<issue_comment>username_1: Not directly within moment.js, no, but [this open issue](https://github.com/moment/moment/issues/1048) led me to [this moment.js plugin](https://github.com/jsmreese/moment-duration-format/) which allows formatting of durations. Looks like with that plugin you could use this to get your desired output:
```
duration.format("D [day], H [hour and] m [min]")
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Without a plugin:
```
var now = moment('2018-03-28 14:02');
var end = moment('2018-06-02 00:00'); // another date
var duration = moment.duration(end.diff(now));
//Get Days and subtract from duration
var days = Math.floor(duration.asDays()); // asDays() returns a float
duration.subtract(moment.duration(days,'days'));
//Get hours and subtract from duration
var hours = duration.hours();
duration.subtract(moment.duration(hours,'hours'));
//Get Minutes and subtract from duration
var minutes = duration.minutes();
duration.subtract(moment.duration(minutes,'minutes'));
//Get seconds
var seconds = duration.seconds();
console.log("Days: ",days);
console.log("Hours: ",hours);
console.log("Minutes: ",minutes);
console.log("Seconds: ",seconds);
```
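The same subtract-as-you-go idea also works without moment at all, using plain `Date` arithmetic — a sketch (the helper name is invented):

```javascript
// Split the span between two ISO timestamps into whole days, hours and minutes.
function breakdown(startIso, endIso) {
  let ms = new Date(endIso) - new Date(startIso);
  const days = Math.floor(ms / 86400000);   // 24 * 60 * 60 * 1000
  ms -= days * 86400000;
  const hours = Math.floor(ms / 3600000);
  ms -= hours * 3600000;
  const minutes = Math.floor(ms / 60000);
  return { days: days, hours: hours, minutes: minutes };
}

console.log(breakdown('2018-03-28T14:02:00Z', '2018-06-02T00:00:00Z'));
// { days: 65, hours: 9, minutes: 58 }
```

Note the `Z` suffix pins both timestamps to UTC; with local times a DST transition inside the span could shift the result by an hour, which moment would handle for you.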
Upvotes: 4 <issue_comment>username_3: As `duration.asDays()` returns a float, you need to use `Math.floor()`:
```js
function durationAsString(start, end) {
const duration = moment.duration(moment(end).diff(moment(start)));
//Get Days
const days = Math.floor(duration.asDays()); // .asDays returns float but we are interested in full days only
const daysFormatted = days ? `${days}d ` : ''; // if no full days then do not display it at all
//Get Hours
const hours = duration.hours();
const hoursFormatted = `${hours}h `;
//Get Minutes
const minutes = duration.minutes();
const minutesFormatted = `${minutes}m`;
return [daysFormatted, hoursFormatted, minutesFormatted].join('');
}
console.log(durationAsString('2018-03-28 14:02', '2018-06-02 00:00'))
console.log(durationAsString('2018-06-01 14:02', '2018-06-02 00:00'))
```
Upvotes: 3 |
2018/03/20 | 840 | 2,918 | <issue_start>username_0: The first time I used .bind() I was tripped up by the fact that the optional arguments passed to the bound function are prepended. This got me when I was trying to hand things off to anonymous event handling functions, sort of like this:
```
$('#example').on('change', function(arg1, arg2, evt) {
console.log(evt, arg1, arg2);
}.bind(null, arg1, arg2));
```
The MDN for [.bind()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) mentions the prepending several times but never elaborates, so I'm curious as to why - why do I have to put `arg1` and `arg2` before the `evt` in the function arguments? Wouldn't *appending* be easier to understand and slightly more performant?<issue_comment>username_1: If the additional bound parameters were *appended* to the call time arguments, the behaviour would be like this:
```
function foo(bar, baz) {}
const b = foo.bind(null, 42);
b(); // bar = 42
b('c'); // bar = 'c', baz = 42
b('c', 'd'); // bar = 'c', baz = 'd'
```
Meaning, it is unpredictable where your bound arguments will end up, which is arguably insane.
Prepending also makes more sense if you think of `bind` as [*partial application*](https://en.wikipedia.org/wiki/Partial_application) or [*currying*](https://en.wikipedia.org/wiki/Currying): very loosely functionally speaking, `foo.bind(null, 42)` returns a *partially applied* function,
it turns `a -> b -> c` (a function which takes two values and returns a third)
into `b -> c` (a function which takes only one more value and returns another).
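That partial-application reading is easy to sketch concretely (the logger below is invented purely for illustration):

```javascript
function log(tag, level, message) {
  return '[' + tag + '][' + level + '] ' + message;
}

// bind() prepends "app" and "warn", so every call-time argument
// lands predictably in the remaining trailing slot(s).
const warn = log.bind(null, 'app', 'warn');

console.log(warn('disk almost full')); // [app][warn] disk almost full
console.log(warn('retrying'));         // [app][warn] retrying
```

Because the bound arguments always come first, the shape of the remaining parameter list is stable no matter how many arguments the caller passes.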
Upvotes: 3 [selected_answer]<issue_comment>username_2: [`bind()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) produces a new function that, when called, invokes the function on which `.bind()` is called.
Let's name `f` the function you use in your code:
```
var g = function f(arg1, arg2, evt) {
console.log(evt, arg1, arg2);
}.bind(null, arg1, arg2)
```
`bind()` produces a new function (stored in `g`) that accepts any number of arguments (one in this example).
`g` may be called with zero or more arguments.
Only the first argument of `bind()` is required; it is the object to be referenced as `this` inside the function returned by `bind()`. The other arguments of `bind()`, if exist, are always available in `g`.
They are `arg1` and `arg2` in this example and they are always passed to `f`. The other arguments of `f` are the arguments of `g` when it is called. There may be none, one or more.
It's the natural choice to put `arg1` and `arg2` as the first arguments in the call to `f` and the rest of the list (the arguments passed to `g`) at the end.
If `arg1` and `arg2` are passed as the last arguments to `f` you have to check `arguments.length` inside `f` to find the last two arguments and its code becomes cumbersome.
Upvotes: 0 |
2018/03/20 | 655 | 2,498 | <issue_start>username_0: let's say I have the following database entity:
```
@Document(collection = "users")
public class User {
@Id
private String id;
private String firstname;
private String lastname;
private String email;
}
```
How can I enforce the field email to be unique? That means MongoDB should check if a user record with this email address already exists when the application tries to save the entity.<issue_comment>username_1: Mongodb needs to create and index a field in order to know whether the field is unique or not.
```
@Indexed(unique=true)
private String email;
```
Upvotes: 5 <issue_comment>username_2: First, use `Indexed` annotation above of your field in your model as shown below:
```
@Indexed(unique = true)
private String email;
```
Also, you should programmatically define your index. You should use the below code when defining your `MongoTemplate`.
```
mongoTemplate.indexOps("YOUR_COLLECTION_NAME").ensureIndex(new Index("YOUR_FEILD_OF_COLLECTION", Direction.ASC).unique());
```
For your case, you should use:
```
mongoTemplate.indexOps("users").ensureIndex(new Index("email", Direction.ASC).unique());
```
Upvotes: 3 <issue_comment>username_3: As of Spring Data MongoDB 3.0, automatic index creation is turned off by default. So basically, besides using `@Indexed`, you have to configure default indexing options. What you need to do is to make `spring.data.mongodb.auto-index-creation=true` in the `application.properties` file, and then `@Indexed` will work like a charm!
Upvotes: 3 <issue_comment>username_4: This worked for me, but you have to delete your database and then re-run your application
```
spring.data.mongodb.auto-index-creation=true
```
Upvotes: 3 <issue_comment>username_5: You can try one of the below solutions, it worked for me.
Note: Please delete your db before you re-try with the below solutions.
Solution - 1
```
@Indexed(unique = true, background = true)
private String emailId;
```
Solution - 2
Add `spring.data.mongodb.auto-index-creation=true` to your application.properties file.
or
Add `spring.data.mongodb.auto-index-creation:true` to your yaml file
Upvotes: 1 <issue_comment>username_6: If anyone has a custom Mongo configuration -> spring.data.mongodb.auto-index-creation:true won't work. Instead try adding this to your MongoConfig:
```
@Override
public boolean autoIndexCreation() {
return true;
}
```
It solved the problem for me....
Upvotes: 0 |
2018/03/20 | 844 | 2,976 | <issue_start>username_0: I just created a Jenkins Pipeline DSL job where I cloned a Java code from SCM and tried to run `mvn clean`. But the pipeline continuously throwing an error saying:
```
mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true
/var/lib/jenkins/workspace/@tmp/durable-77d8d13c/script.sh: 2:
/var/lib/jenkins/workspace/@tmp/durable-77d8d13c/script.sh: mvn: not found
```
Seems like it tries to find `pom.xml` inside the `@tmp` directory which is empty. Actual code is cloned successfully inside the directory. Below is my `Jenkinsfile`:
```
node {
stage ("Clean Workspace") {
echo "${WORKSPACE}"
cleanWs()
}
stage ("Get Code") {
git branch: "${params.branch}", url: '<EMAIL>:xx/xxxxxxx.git'
}
stage ("mvn clean") {
sh "mvn clean install -Dmaven.test.skip=true -Dfindbugs.skip=true"
}
}
```
I also tried with `${WORKSPACE}` env variable but still does not work.
2018/03/20 | 521 | 1,613 | <issue_start>username_0: ```js
const days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
for (const day of days) {
console.log(day);
}
```
I need to print the days with the first letters capitalized...<issue_comment>username_1: ```
days.map(day => day[0].toUpperCase() + day.substr(1))
```
Upvotes: 2 <issue_comment>username_2: Try:
```
function capitalizeFirstLetter(string) {
return string.charAt(0).toUpperCase() + string.slice(1);}
```
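Applied to the original `days` array with `map`, the helper above produces the capitalized list:

```javascript
function capitalizeFirstLetter(string) {
  return string.charAt(0).toUpperCase() + string.slice(1);
}

const days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
const capitalized = days.map(capitalizeFirstLetter);

for (const day of capitalized) {
  console.log(day); // Sunday, Monday, ..., Saturday
}
```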
Upvotes: 2 <issue_comment>username_3: Hope it helps
```
string.charAt(0).toUpperCase() + string.slice(1);
```
Upvotes: 0 <issue_comment>username_4: You can simply loop over the days and get the first character to uppercase like this:
```js
const days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
for (const day of days) {
console.log(day[0].toUpperCase() + day.substr(1));
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_5: Using the function `map` and the regex `/(.?)/` to replace the captured first letter with its upperCase representation.
```js
const days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
var result = days.map(d => d.replace(/(.?)/, (letter) => letter.toUpperCase()));
console.log(result);
```
Upvotes: 0 <issue_comment>username_6: Old school:
```js
const days = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
var result = [];
for(var i = 0; i < days.length; i++){
result.push(days[i].charAt(0).toUpperCase() + days[i].substring(1));
}
console.log(result);
```
Upvotes: 0 |
2018/03/20 | 564 | 1,840 | <issue_start>username_0: I have a JSON file displayed as a table, I want to search inside this table using two steps:
1. Selecting an option from a select menu to choose in which column you want to search.
2. an input, for typing the keyword you want to search for.
So how can I concatenate the value of the "selected option from the select tag" with the value of the "input"?
For example, the User selected the option "Names" from the select menu then he entered "John" inside the input
Here is my code:
<https://jsfiddle.net/p1nkfpez/7/>
```
var app = angular.module("myApp", []);
app.controller("myCtrl", function($scope, $http) {
$scope.selection = "nm";
$http.get("https://api.myjson.com/bins/i9h1v")
.then(function(response) {
$scope.myRows = response.data;
$scope.rowsStatus = response.status;
$scope.rowsStatusText = response.statusText;
});
});
```
I want the Angular filter to be like:
filter:keyword.selection<issue_comment>username_1: You can create a function to create the filter object dynamically:
```
$scope.getFilter = function() {
return {
[$scope.selection]: $scope.keyword
}
}
```
And use it like this:
```
<tr ng-repeat="row in myRows | filter: getFilter()">
    ...
</tr>
```
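Outside of Angular, the same dynamic-key idea can be sketched in plain JavaScript (the data and helper name here are made up):

```javascript
const myRows = [
  { nm: 'John', cty: 'Boston', hse: 'A', yrs: '3' },
  { nm: 'Jane', cty: 'Denver', hse: 'B', yrs: '5' },
  { nm: 'Mike', cty: 'Austin', hse: 'C', yrs: '2' }
];

// Build a predicate from the selected column and the typed keyword,
// mirroring what Angular's `filter:` does with an object expression.
function makePredicate(selection, keyword) {
  const needle = keyword.toLowerCase();
  return function (row) {
    return row[selection].toLowerCase().indexOf(needle) >= 0;
  };
}

const matches = myRows.filter(makePredicate('nm', 'jo'));
console.log(matches); // [ { nm: 'John', cty: 'Boston', hse: 'A', yrs: '3' } ]
```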
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can create a custom filter in which you can access your properties by name.
Custom filter
```
$scope.myCustomFilter = function(row){
if($scope.keyword == undefined || $scope.keyword.length == 0)
return true;
if(row[$scope.selection].toLowerCase().indexOf($scope.keyword) >= 0){
return true;
}
else
return false;
}
```
And add filter to your ng-repeat:
```
<tr ng-repeat="row in myRows | filter: myCustomFilter">
    <td>{{$index + 1}}</td>
    <td>{{row.nm}}</td>
    <td>{{row.cty}}</td>
    <td>{{row.hse}}</td>
    <td>{{row.yrs}}</td>
</tr>
```
[Updated Fiddle](https://jsfiddle.net/nn62z81a/)
Upvotes: 0 |
2018/03/20 | 510 | 1,946 | <issue_start>username_0: I have to find **uninstallers** in all the subdirectories inside a specified directory. Once an **uninstaller** is found, I need to run it.
```
@setlocal enabledelayedexpansion
@for /r %%i in ( un*.exe) do (
@echo Found file: %%~nxi
%%i
)
```
I am already able to search the subdirectories and find the uninstallers inside them using the code above.
**PROBLEM:** The command `%%i` executes the `uninstaller.exe` and the control immediately returns to the command prompt. Due to the immediate return of control to the command prompt, the next iteration of `for-loop` is executed and therefore the next uninstaller (by command `%%i`) is also started.
**What I need:** I want to stay in the current iteration of `for-loop` till the uninstaller started by the command `%%i` is finished.
**PS:** The uninstaller programs do not finish by themselves. They ask for a couple of options and therefore, I want to start them one by one (so that user doesn't get confused).
**PS-2:** As pointed out in one of the comments below, PowerShell can solve the issue, so I am looking for more suggestions from the PowerShell community as well.<issue_comment>username_1: you can use `start /wait`. Compare the following two lines:
```
for %%i in (*.txt) do notepad "%%i"
for %%i in (*.txt) do start /wait notepad "%%i"
```
The first one will open all matching files in several Notepads at once; the second one will only open the next Notepad when you close the previous one.
Upvotes: 0 <issue_comment>username_2: Presuming you want to 1) find all files named `un*.exe` in a particular path and its subdirectories, 2) execute each one, and 3) wait for each executable to complete before executing the next one, you can write something like this in PowerShell:
```
Get-ChildItem "C:\Uninstaller Path\un*.exe" -Recurse | ForEach-Object {
Start-Process $_.FullName -Wait
}
```
Upvotes: 1 |
2018/03/20 | 1,254 | 5,059 | <issue_start>username_0: I can change it to SizeWE in e.g. Loaded, Initialized, MouseEnter event handler, but after column resize it is changed back to default. Trying to change it in e.g. MouseUp or DragCompleted event handler to SizeWE does not work. When mouse button is released in the end of the resize the cursor is changed to default. When the SizeWE change was done in MouseEnter handler it is ok again when mouse exits and enters again, but the problem is right after the resize.
xaml:
```
<Setter Property="Width" Value="20" />
<Setter Property="Background" Value="Transparent" />
<Setter Property="Cursor" Value="SizeWE" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type Thumb}">
<Border Background="{TemplateBinding Background}" Padding="{TemplateBinding Padding}" />
</ControlTemplate>
</Setter.Value>
</Setter>
<Setter Property="FontWeight" Value="{StaticResource Theme.DataGrid.ColumnHeader.FontWeight}"></Setter>
<Setter Property="BorderBrush" Value="Transparent"></Setter>
<Setter Property="BorderThickness" Value="0"></Setter>
<Setter Property="Background" Value="{StaticResource Theme.DataGrid.ColumnHeader.Background}"></Setter>
<Setter Property="Foreground" Value="{StaticResource Theme.DataGrid.ColumnHeader.Foreground}"></Setter>
<Setter Property="HorizontalContentAlignment" Value="Left"></Setter>
<Setter Property="Padding" Value="{StaticResource Theme.DataGrid.Cell.Padding}"/>
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="{x:Type GridViewColumnHeader}">
<Grid>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" />
<ColumnDefinition Width="1" />
</Grid.ColumnDefinitions>
<Border Grid.Column="0" x:Name="Border" BorderBrush="{TemplateBinding BorderBrush}"
BorderThickness="{TemplateBinding BorderThickness}"
Background="{TemplateBinding Background}">
<ContentPresenter Margin="{TemplateBinding Padding}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
VerticalAlignment="{TemplateBinding VerticalContentAlignment}" />
</Border>
<Thumb Grid.Column="1" x:Name="PART_HeaderGripper" HorizontalAlignment="Right"
Style="{DynamicResource GridView.ColumnHeader.Gripper.Style}"
Cursor="SizeWE"
MouseUp="PART_HeaderGripper_MouseUp"
MouseEnter="PART_HeaderGripper_MouseEnter"
Margin="-18,0, 0, 0"/>
</Grid>
</ControlTemplate>
</Setter.Value>
</Setter>
```
code behind:
```
private void PART_HeaderGripper_MouseUp(object sender, MouseButtonEventArgs e)
{
Thumb gripper = (Thumb)sender;
if (gripper != null)
{
gripper.Cursor = Cursors.SizeWE;
}
}
private void PART_HeaderGripper_MouseEnter(object sender, MouseEventArgs e)
{
Thumb gripper = (Thumb)sender;
if (gripper != null)
{
gripper.Cursor = Cursors.SizeWE;
}
}
```<issue_comment>username_1: One radical solution is to set `Mouse.OverrideCursor` when the cursor enters the Thumb, and set it back when the cursor leaves:
```
<Thumb x:Name="PART_HeaderGripper"
       MouseEnter="UIElement_OnMouseEnter"
       MouseLeave="UIElement_OnMouseLeave"
       ... />
```
Handlers:
```
private void UIElement_OnMouseEnter(object sender, MouseEventArgs e)
{
Mouse.OverrideCursor = Cursors.SizeWE;
}
private void UIElement_OnMouseLeave(object sender, MouseEventArgs e)
{
Mouse.OverrideCursor = null;
}
```
[](https://i.stack.imgur.com/HNbST.gif)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Do this to force the mouse cursor on the gripper without overriding the template
```
public static void ColumnHeaderGripper(DataGrid datagrid)
{
var columnpresenter = typeof(DataGrid)
.GetProperty("ColumnHeadersPresenter",
BindingFlags.NonPublic | BindingFlags.Instance);
var headerpresenter = (DataGridColumnHeadersPresenter)
columnpresenter.GetValue(datagrid);
for (var i = 1; i < headerpresenter.Items.Count; i++) {
var columnheader = (DataGridColumnHeader)
headerpresenter.ItemContainerGenerator.ContainerFromIndex(i);
var leftgripper = typeof(DataGridColumnHeader)
.GetField("_leftGripper",
BindingFlags.NonPublic | BindingFlags.Instance);
var rightgripper = typeof(DataGridColumnHeader)
.GetField("_rightGripper",
BindingFlags.NonPublic | BindingFlags.Instance);
var leftThumb = (Thumb)leftgripper.GetValue(columnheader);
var rightThumb = (Thumb)rightgripper.GetValue(columnheader);
leftThumb.MouseEnter += (s, e) => {
System.Windows.Input.Mouse.OverrideCursor = Cursors.SizeWE; };
leftThumb.MouseLeave += (s, e) => {
System.Windows.Input.Mouse.OverrideCursor = Cursors.Arrow; };
rightThumb.MouseEnter += (s, e) => {
System.Windows.Input.Mouse.OverrideCursor = Cursors.SizeWE; };
rightThumb.MouseLeave += (s, e) => {
System.Windows.Input.Mouse.OverrideCursor = Cursors.Arrow; };
}
}
```
Upvotes: 0
<issue_start>username_0: Hi, I have been working through several different tutorials on getting data from a SQL database into a ListView. I can add data, get data from the database and populate the ListView, and I have a working onClick listener (it will fire off a Toast message). However, I cannot get any data from the ListView when clicked. I have tried different combinations of getItem and getItemAtPosition, but they all return an empty string (blank Toast). Would someone be kind enough to look at my code and tell me if what I am trying to do is possible? In my ListView I have four items in each entry; I would like to either get the fourth item directly or get all the items (as a string?) so I can pull out the data I need.
Thanks in advance for your time.
```
public class ListViewActivity extends Activity {

    SQLiteHelper SQLITEHELPER;
    SQLiteDatabase SQLITEDATABASE;
    Cursor cursor;
    SQLiteListAdapter ListAdapter;

    ArrayList<String> ID_ArrayList = new ArrayList<>();
    ArrayList<String> GENRE_ArrayList = new ArrayList<>();
    ArrayList<String> NAME_ArrayList = new ArrayList<>();
    ArrayList<String> URL_ArrayList = new ArrayList<>();

    ListView LISTVIEW;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_list_view);
        LISTVIEW = (ListView) findViewById(R.id.listView1);
        SQLITEHELPER = new SQLiteHelper(this);
    }

    @Override
    protected void onResume() {
        ShowSQLiteDBdata();
        super.onResume();
    }

    private void ShowSQLiteDBdata() {
        SQLITEDATABASE = SQLITEHELPER.getWritableDatabase();
        cursor = SQLITEDATABASE.rawQuery("SELECT * FROM demoTable1", null);

        ID_ArrayList.clear();
        GENRE_ArrayList.clear();
        NAME_ArrayList.clear();
        URL_ArrayList.clear();

        if (cursor.moveToFirst()) {
            do {
                ID_ArrayList.add(cursor.getString(cursor.getColumnIndex(SQLiteHelper.KEY_ID)));
                GENRE_ArrayList.add(cursor.getString(cursor.getColumnIndex(SQLiteHelper.KEY_Genre)));
                NAME_ArrayList.add(cursor.getString(cursor.getColumnIndex(SQLiteHelper.KEY_Name)));
                URL_ArrayList.add(cursor.getString(cursor.getColumnIndex(SQLiteHelper.KEY_Url)));
            } while (cursor.moveToNext());
        }

        ListAdapter = new SQLiteListAdapter(ListViewActivity.this,
                ID_ArrayList,
                GENRE_ArrayList,
                NAME_ArrayList,
                URL_ArrayList
        );

        LISTVIEW.setAdapter(ListAdapter);

        LISTVIEW.setOnItemClickListener(new OnItemClickListener() {
            @Override
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                // String text = (String) LISTVIEW.getAdapter().getItem(position);
                String text = (String) LISTVIEW.getItemAtPosition(position);
                // String text = (String) lv.getItemAtPosition(0);
                // Object item = (Object) LISTVIEW.getItemAtPosition(position);
                Toast.makeText(getApplicationContext(), text, Toast.LENGTH_SHORT).show();
            }
        });

        cursor.close();
    }
}
```
<issue_comment>username_1: try to change the line
>
> String text = (String) LISTVIEW.getItemAtPosition(position);
>
>
>
with
>
> String text = (String) parent.getItemAtPosition(position);
>
>
>
This should be the way ListView works.
Also, I suggest not using all-uppercase variable names; the usual Java convention is camelCase. Have a look at RecyclerView as well: nowadays it is used far more than ListView because it allows a much greater level of customization.
Upvotes: 0 <issue_comment>username_2: Try this:
call ShowSQLiteDBdata() in onCreate() instead of in onResume().
Upvotes: 0 <issue_comment>username_3: ```
LISTVIEW.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView parent, View view, int position,
long id) {
String value1 = ID_ArrayList.get(position);
String value2 = GENRE_ArrayList.get(position);
String value3 = NAME_ArrayList.get(position);
String value4 = URL_ArrayList.get(position);
Toast.makeText(getApplicationContext(),value1+" "+value2+" "+value3+" "+value4, Toast.LENGTH_SHORT).show();
}
});
```
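As a design note, the four parallel ArrayLists used above are easy to let drift out of sync. A plain-Java sketch (class and field names here are made up) of collapsing them into a single list of row objects, which keeps the position-based lookup in `onItemClick` a one-liner:

```java
import java.util.ArrayList;
import java.util.List;

public class RowLookup {
    // One object per cursor row instead of four parallel lists.
    static final class Row {
        final String id, genre, name, url;
        Row(String id, String genre, String name, String url) {
            this.id = id; this.genre = genre; this.name = name; this.url = url;
        }
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row("1", "rock", "Song A", "http://example/a"));
        rows.add(new Row("2", "jazz", "Song B", "http://example/b"));

        int position = 1;              // what onItemClick hands you
        Row clicked = rows.get(position);
        System.out.println(clicked.name + " " + clicked.url);
    }
}
```

The adapter would then be backed by the single list, and there is no risk of the four lists ending up with different lengths.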
Upvotes: 2 [selected_answer]<issue_comment>username_4: Please use the code below inside the ListView's setOnItemClickListener:
```
String genreID = ID_ArrayList.get(position);
String genre = GENRE_ArrayList.get(position);
String genreName = NAME_ArrayList.get(position);
String genreUrl = URL_ArrayList.get(position);
```
```
Toast.makeText(getApplicationContext(), genreID + ", " + genre + ", " + genreName + ", " + genreUrl, Toast.LENGTH_SHORT).show();
```
This displays the clicked row's data from the ListView.
Upvotes: 0
<issue_start>username_0: I have a 3-d array. I find the indexes of the maxima along an axis using argmax. How do I now use these indexes to obtain the maximal values?
2nd part: How to do this for arrays of N-d?
Eg:
```
u = np.arange(12).reshape(3,4,1)
In [125]: e = u.argmax(axis=2)
Out[130]: e
array([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
```
It would be nice if u[e] produced the expected results, but it doesn't work.<issue_comment>username_1: The return value of `argmax` along an axis can't be simply used as an index. It only works in a 1d case.
```
In [124]: u = np.arange(12).reshape(3,4,1)
In [125]: e = u.argmax(axis=2)
In [126]: u.shape
Out[126]: (3, 4, 1)
In [127]: e.shape
Out[127]: (3, 4)
```
`e` is (3,4), but its values only index the last dimension of `u`.
```
In [128]: u[e].shape
Out[128]: (3, 4, 4, 1)
```
Instead we have to construct indices for the other 2 dimensions, ones which broadcast with `e`. For example:
```
In [129]: I,J=np.ix_(range(3),range(4))
In [130]: I
Out[130]:
array([[0],
[1],
[2]])
In [131]: J
Out[131]: array([[0, 1, 2, 3]])
```
Those are (3,1) and (1,4). Those are compatible with (3,4) `e` and the desired output
```
In [132]: u[I,J,e]
Out[132]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
```
This kind of question has been asked before, so it probably should be marked as a duplicate. The fact that your last dimension is size 1, and hence `e` is all 0s, distracts readers from the underlying issue (using a multidimensional `argmax` as an index).
[numpy: how to get a max from an argmax result](https://stackoverflow.com/questions/40357335/numpy-how-to-get-a-max-from-an-argmax-result)
[Get indices of numpy.argmax elements over an axis](https://stackoverflow.com/questions/20128837/get-indices-of-numpy-argmax-elements-over-an-axis)
---
Assuming you've taken the argmax on the last dimension
```
In [156]: ij = np.indices(u.shape[:-1])
In [157]: u[(*ij,e)]
Out[157]:
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
```
or:
```
ij = np.ix_(*[range(i) for i in u.shape[:-1]])
```
If the axis is in the middle, it'll take a bit more tuple fiddling to arrange the `ij` elements and `e`.
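On NumPy 1.15 and later, `np.take_along_axis` packages exactly this index construction for you, so the broadcasting bookkeeping disappears; a sketch:

```python
import numpy as np

u = np.arange(12).reshape(3, 4, 1)
e = u.argmax(axis=2)                      # shape (3, 4)

# indices must have the same ndim as u, so re-add the reduced axis
maxima = np.take_along_axis(u, e[..., None], axis=2).squeeze(axis=2)
print(maxima.tolist())
```

It also handles a middle axis with no extra tuple fiddling, since the axis is passed explicitly.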
Upvotes: 3 [selected_answer]<issue_comment>username_2: so for general N-d array
```
dims = np.ix_(*[range(x) for x in u.shape[:-1]])
u.__getitem__((*dims,e))
```
You can't write `u[*dims, e]` (that's a syntax error), so you have to call `__getitem__` directly, or pass a single tuple as in `u[(*dims, e)]`.
Upvotes: 0
<issue_start>username_0: I want to deploy my application to Tomcat 7 and I get the following exception: `java.lang.NoClassDefFoundError: javax/el/ELManager`
But if I try to deploy this application to tomcat version 8 it works fine.
Do you have any ideas how to fix this?
Why am I switching from Tomcat 8 to 7? The test environment repo had Tomcat 8, but on the server it is Tomcat 7.
pom.xml
```
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>certplatform</groupId>
  <artifactId>certplatform</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
  <build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.7.0</version>
        <configuration>
          <source>1.8</source>
          <target>1.8</target>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <version>3.0.0</version>
        <configuration>
          <warSourceDirectory>WebContent</warSourceDirectory>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>5.0.4.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>5.0.4.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context-support</artifactId>
      <version>5.0.4.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>5.0.4.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>5.0.4.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>jstl</artifactId>
      <version>1.2</version>
    </dependency>
    <dependency>
      <groupId>javax.servlet.jsp.jstl</groupId>
      <artifactId>javax.servlet.jsp.jstl-api</artifactId>
      <version>1.2.1</version>
    </dependency>
    <dependency>
      <groupId>javax.mail</groupId>
      <artifactId>mail</artifactId>
      <version>1.5.0-b01</version>
    </dependency>
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-validator</artifactId>
      <version>6.0.8.Final</version>
    </dependency>
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
      <version>5.2.15.Final</version>
    </dependency>
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-entitymanager</artifactId>
      <version>5.2.15.Final</version>
    </dependency>
    <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>42.2.1</version>
    </dependency>
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcprov-jdk15on</artifactId>
      <version>1.59</version>
    </dependency>
    <dependency>
      <groupId>org.bouncycastle</groupId>
      <artifactId>bcpkix-jdk15on</artifactId>
      <version>1.59</version>
    </dependency>
    <dependency>
      <groupId>javax.el</groupId>
      <artifactId>javax.el-api</artifactId>
      <version>3.0.0</version>
    </dependency>
    <dependency>
      <groupId>javax.el</groupId>
      <artifactId>el-api</artifactId>
      <version>2.2</version>
    </dependency>
  </dependencies>
</project>
```
web.xml
```
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
  <display-name>spring-mvc-demo</display-name>
  <servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>/WEB-INF/spring-mvc-servlet.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
  </servlet-mapping>
</web-app>
```<issue_comment>username_1: If you want to run your webapp on Tomcat 7 in production, then you should be using Tomcat 7 in your test environment.
What has happened is that you have accidentally introduced a dependency on the EL 3.0 APIs ... and included it in your POM file. It works fine on Tomcat 8, but Tomcat 7 only supports EL 2.1. So when your app attempts to load the class file for `javax.el.ELManager`, it fails.
Solutions:
* Upgrade your production platform to Tomcat 8
* Downgrade your dev and test platforms to Tomcat 7, and eliminate any dependencies on the newer APIs from your codebase.
The [Apache Tomcat Versions](http://tomcat.apache.org/whichversion.html) page lists the various spec versions for each of the major versions of Tomcat.
---
>
> Is there no possibility to solve this issue?
>
>
>
If you were prepared to "butcher" the Tomcat 7 server, you might be able to replace the EL implementation with a newer one. But I would not try that, because 1) it may not work at all, and 2) the result is liable to be difficult to maintain. And you are liable to give your operations, quality assurance and/or security teams an apoplexy!
>
> Is there no possibility to solve this issue with changes in the application?
>
>
>
AFAIK, there is no possibility.
I thought you could try replacing the EL dependency with this:
```
<dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.0</version>
</dependency>
```
But it doesn't work. According to a comment from Alex:
>
> *"Adding the JAR to the application POM does not solve anything. Here its the message printed out by Tomcat bootstrap: "validateJarFile(C:\Software\apache-tomcat-7.0.99\webapps\leonardo-flows-services\WEB-INF\lib\javax.el-api-3.0.0.jar) - jar not loaded. See Servlet Spec 3.0, section 10.7.2. Offending class: javax/el/Expression.class". It conflicts with its lib jar."*
>
>
>
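A related note (not part of the original answer): container-provided APIs such as el-api are normally declared with `provided` scope, so they are available at compile time but never packaged into `WEB-INF/lib`, which avoids exactly the kind of conflict quoted above. A sketch of such a dependency:

```xml
<dependency>
    <groupId>javax.el</groupId>
    <artifactId>javax.el-api</artifactId>
    <version>3.0.0</version>
    <!-- compile against the API, but let Tomcat supply it at runtime -->
    <scope>provided</scope>
</dependency>
```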
Upvotes: 2 <issue_comment>username_2: The issue is caused by the fact that this class `javax/el/ELManager` is introduced in `el-api` version 3.0.
[Tomcat 7 comes with a pre-bundled el-api 2.2](https://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html#Class_Loader_Definitions) (which is missing the class), and it picks that up at runtime instead of your el-api jar.
[Tomcat 8 has the el-api 3.0](https://tomcat.apache.org/tomcat-8.0-doc/class-loader-howto.html#Class_Loader_Definitions) and hence the `javax/el/ELManager` class is present.
It's best if you can sync your dev/test/prod environments as close as possible.
It makes much more sense to develop against Tomcat 8 in dev when your test environment has Tomcat 8. As you found out, Tomcat 7 and 8 come with different sets of libraries, so your code can behave differently against them.
Upvotes: 4
<issue_start>username_0: I am doing some conditional concatenations on pairs of strings. If the condition is not satisfied, then a space should be added between the two.
The following is a small subset of my larger code but replicates my problem
```
a = "ai";
b = "b";
res = "";
if (a.match(/ai$/))
{
if (b.match(/^ā/) || b.match(/^a/) ||
b.match(/^i/) || b.match(/^ī/) ||
b.match(/^u/) || b.match(/^ū/) ||
b.match(/^e/) || b.match(/^o/) ||
b.match(/^ṛ/))
{
res = a.slice(0, -1) + 'a ' + b
}
}
else
res = a+ ' ' + b
```
the result should be `ai b`
But I get ''
What am I doing wrong?<issue_comment>username_1: Move your `else` inside the first `if`, so that the `else` is triggered when the inner `if` is not satisfied:
```js
a = "ai";
b = "b";
res = "";
if (a.match(/ai$/))
{
if (b.match(/^ā/) || b.match(/^a/) ||
b.match(/^i/) || b.match(/^ī/) ||
b.match(/^u/) || b.match(/^ū/) ||
b.match(/^e/) || b.match(/^o/) ||
b.match(/^ṛ/))
{
res = a.slice(0, -1) + 'a ' + b
}
else{
res = a+ ' ' + b
}
}
console.log(res);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The nested if doesn't go to the outer else for example:
```
if (1 === 1) {
if (1 === 2) {
console.log(1);
}
}
else {
console.log(2);
}
```
the else statement in this case will never trigger.
It seems to me that what you need to do is simply combine the if statements:
```
a = "ai";
b = "b";
res = "";
if (a.match(/ai$/) && (b.match(/^ā/) || b.match(/^a/) ||
b.match(/^i/) || b.match(/^ī/) ||
b.match(/^u/) || b.match(/^ū/) ||
b.match(/^e/) || b.match(/^o/) ||
b.match(/^ṛ/))) {
res = a.slice(0, -1) + 'a ' + b;
}
else {
res = a + ' ' + b;
}
```
Note the extra "(" after the `&&`; it is there so the entire condition is treated as one unit and not separated.
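As a side note (not part of the accepted fix), the chain of `b.match(...)` calls can be collapsed into a single character-class regex; a sketch of the whole rule:

```javascript
// One character class instead of nine separate b.match(...) calls.
const startsWithVowel = (b) => /^[āaiīuūeoṛ]/.test(b);

function joinWords(a, b) {
  if (/ai$/.test(a) && startsWithVowel(b)) {
    return a.slice(0, -1) + 'a ' + b;
  }
  return a + ' ' + b;
}

console.log(joinWords('ai', 'b')); // "ai b"
console.log(joinWords('ai', 'a')); // "aa a"
```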
Upvotes: 1
<issue_start>username_0: I followed a YouTube video to implement an ActiveAdmin theme in my Rails app, and everything worked like a charm (or so I thought).
<https://www.youtube.com/watch?v=i2x995hm8r8>
I followed every step he took, and I am a little confused right now because I can't create posts.
Whenever I try to create a new post and type in the title and body and select an image, it just won't do anything. It doesn't even give me an error message.
posts_controller.rb
```
class PostController < ApplicationController
def index
@post = Post.all.order('created_at DESC')
end
def create
@post = Post.new(params[:post].permit(:title, :body))
if @post.save
redirect_to @post
else
render 'new'
end
end
def show
@post = Post.find(params[:id])
@post = Post.order("created_at DESC").limit(4).offset(1)
end
def edit
@post = Post.find(params[:id])
end
def update
@post = Post.find(params[:id])
if @post.update(params[:post].permit(:title, :body))
redirect_to @post
else
render 'edit'
end
end
def destroy
@post = Post.find(params[:id])
@post.destroy
redirect_to posts_path
end
private
def post_params
params.require(:post).permit(:title, :body)
end
end
```
posts.rb
```
ActiveAdmin.register Post do
permit_params :title, :body, :image
show do |t|
attributes_table do
row :title
row :body
row :image do
post.image? ? image_tag(post.image.url, height: '100') : content_tag(:span, "No image yet")
end
end
end
form :html => {:multipart => true} do |f|
f.inputs do
f.input :title
f.input :body
f.input :image, hint: f.post.image? ? image_tag(post.image.url, height: '100') : content_tag(:span, "Upload JPG/PNG/GIF image")
end
f.actions
end
end
```
post.rb
```
class Post < ApplicationRecord
belongs_to :user
validates :title, presence: true, length: {minimum: 5}
validates :body, presence: true, length: { maximum: 140}
has_attached_file :image, styles: { medium: "300x300>", thumb: "100x100>" }
validates_attachment_content_type :image, content_type: /\Aimage\/.*\z/
end
```
EDIT I:
```
Started POST "/admin/posts" for 127.0.0.1 at 2018-03-20 14:30:24 +0100
Processing by Admin::PostsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"<KEY>", "post"=>{"title"=>"asdasdasd", "body"=>"asdasdasd"}, "commit"=>"Create Post"}
AdminUser Load (0.3ms) SELECT "admin_users".* FROM "admin_users" WHERE "admin_users"."id" = $1 ORDER BY "admin_users"."id" ASC LIMIT $2 [["id", 1], ["LIMIT", 1]]
(0.2ms) BEGIN
(0.1ms) ROLLBACK
Rendering /Users/useruser/.rvm/gems/ruby-2.4.2/bundler/gems/activeadmin-2cf85fb03ab3/app/views/active_admin/resource/new.html.arb
Rendered /Users/useruser/.rvm/gems/ruby-2.4.2/bundler/gems/activeadmin-2cf85fb03ab3/app/views/active_admin/resource/new.html.arb (134.6ms)
Completed 200 OK in 230ms (Views: 148.3ms | ActiveRecord: 5.7ms)
```
If you need more of my code, just tell me what I should post here.
I am just getting started with Ruby on Rails and programming overall, so yes indeed, I am a newbie.
Thanks in advance!<issue_comment>username_1: From your edit 1, I can see that you render `new` after submitting the form. That means your Post is not being saved; it also means your application is doing exactly what it should do.
I assume that you are using the latest Rails 5.
In your Post model you have a `belongs_to` association (a Post belongs to a User).
In Rails 5 that means a `user` or `user_id` has to be provided to create a Post (a Post cannot belong to no one); otherwise you won't be able to save.
Depending how you created association in table, you might be able to pass `user` or `user_id` in params.
Another way to create post belonging to particular user is:
```
@user = User.first
@post = @user.posts.build(post_params)
```
For ActiveAdmin, you can use default form that is created based on your model.
Just make sure you permit all params when creating it that way
```
ActiveAdmin.register Post do
permit_params %i[title body image user_id]
...
end
```
You can also set `belongs_to :user` association as optional.
---
Now some general advice from me:
First of all, use proper indentation.
My advice is to install the RuboCop gem.
Second:
```
def show
@post = Post.find(params[:id])
@post = Post.order("created_at DESC").limit(4).offset(1)
end
```
That doesn't make much sense: you overwrite the instance variable right after the first assignment.
`@post = Post.order("created_at DESC").limit(4).offset(1)` is more of an index action, since it does not show a particular post; it shows the 2nd through 5th newest posts.
```
def post_params
params.require(:post).permit(:title, :body)
end
```
This is missing the `image` attribute.
```
def update
@post = Post.find(params[:id])
if @post.update(params[:post].permit(:title, :body))
redirect_to @post
else
render 'edit'
end
end
```
You duplicate `params[:post].permit(:title, :body)`. You have already created a private method for that; use it here. The same goes for the create action, where you duplicated it too. Read up on what DRY code is about.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You have `belongs_to :user` but never set a `user_id`. A `belongs_to` relation is required by default since Rails 5, and this will block the saving. The easiest way, for now, to fix this is to write it as follows:
```
belongs_to :user, optional: true
```
[EDIT: how to store current-user as owner of post]
If you want to automatically set the user to the currently logged in user, which I think is the intention, you can add the following to your active-admin configuration:
```
ActiveAdmin.register Post do
# .. keep your original configuration here ...
before_build do |record|
record.user = current_user
end
end
```
Or add an extra field to the form allowing the administrator to select the user?
Upvotes: 0
<issue_start>username_0: My table is here:
customers
```
CREATE TABLE `customers` (
`customer_id` int(11) DEFAULT NULL,
`account_num` double DEFAULT NULL,
`lname` varchar(50) DEFAULT NULL,
`fname` varchar(50) DEFAULT NULL,
`mi` varchar(50) DEFAULT NULL,
`address1` varchar(50) DEFAULT NULL,
`address2` varchar(50) DEFAULT NULL,
`address3` varchar(50) DEFAULT NULL,
`address4` varchar(50) DEFAULT NULL,
`postal_code` varchar(50) DEFAULT NULL,
`region_id` int(11) DEFAULT NULL,
`phone1` varchar(50) DEFAULT NULL,
`phone2` varchar(50) DEFAULT NULL,
`birthdate` datetime DEFAULT NULL,
`marital_status` varchar(50) DEFAULT NULL,
`yearly_income` varchar(50) DEFAULT NULL,
`gender` varchar(50) DEFAULT NULL,
`total_children` smallint(6) DEFAULT NULL,
`num_children_at_home` smallint(6) DEFAULT NULL,
`education` varchar(50) DEFAULT NULL,
`member_card` varchar(50) DEFAULT NULL,
`occupation` varchar(50) DEFAULT NULL,
`houseowner` varchar(50) DEFAULT NULL,
`num_cars_owned` smallint(6) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
I have to find the list of all the customers whose name contains a letter between "a" and "d" as the second letter.
My query below is not working as needed; `'^[A-D]'` anchors the match at the *first* letter:
```
SELECT *
FROM customers
WHERE fname REGEXP '^[A-D]';
```<issue_comment>username_1: ```
SELECT *
FROM customers
WHERE SUBSTR(fname,2,1) REGEXP '^[A-D]';
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can try this.
```
SELECT *
FROM customers
WHERE fname REGEXP '^.[A-Da-d]{1}';
```
[SQLFiddle](http://sqlfiddle.com/#!9/4fd628/1)
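A quick way to sanity-check the second-letter rule outside the database is a small Python sketch with made-up sample names (the regex mirrors the `REGEXP` above):

```python
import re

# second letter in a-d, either case, matching REGEXP '^.[A-Da-d]'
pattern = re.compile(r'^.[A-Da-d]')

names = ['Sara', 'Tom', 'Ada', 'Obed', 'Ian']
matches = [name for name in names if pattern.match(name)]
print(matches)  # ['Sara', 'Ada', 'Obed', 'Ian']
```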
Upvotes: 1 <issue_comment>username_3: In SQL Server this could be done with a plain `LIKE`, because T-SQL patterns support bracket character classes (`'_[A-D]%'`). MySQL's `LIKE` only understands the `%` and `_` wildcards, though, so on MySQL (which this table uses) you still need `REGEXP`:
```
SELECT *
FROM customers
WHERE fname REGEXP '^.[A-Da-d]';
```
Upvotes: 0 |
2018/03/20 | 1,958 | 5,745 | <issue_start>username_0: Say I have a string
```
"1974-03-20 00:00:00.000"
```
It was created using `DateTime.now()`.
How do I convert the string back to a `DateTime` object?<issue_comment>username_1: `DateTime` has a `parse` method
```
var parsedDate = DateTime.parse('1974-03-20 00:00:00.000');
```
<https://api.dartlang.org/stable/dart-core/DateTime/parse.html>
Upvotes: 9 [selected_answer]<issue_comment>username_2: There seem to be a *lot* of questions about parsing timestamp strings into `DateTime`. I will try to give a more general answer so that future questions can be directed here.
* **Your timestamp is in an ISO format.** Examples: `1999-04-23`, `1999-04-23 13:45:56Z`, `19990423T134556.789`. In this case, you can use `DateTime.parse` or `DateTime.tryParse`. (See [the `DateTime.parse` documentation](https://api.dart.dev/stable/dart-core/DateTime/parse.html) for the precise set of allowed inputs.)
* **Your timestamp is in a [standard HTTP format](https://www.rfc-editor.org/rfc/rfc2616#section-3.3.1).** Examples: `Fri, 23 Apr 1999 13:45:56 GMT`, `Friday, 23-Apr-99 13:45:56 GMT`, `Fri Apr 23 13:45:56 1999`. In this case, you can use `dart:io`'s [`HttpDate.parse`](https://api.dart.dev/stable/dart-io/HttpDate/parse.html) function.
* **Your timestamp is in some local format.** Examples: `23/4/1999`, `4/23/99`, `April 23, 1999`. You can use [`package:intl`](https://pub.dev/packages/intl)'s [`DateFormat`](https://pub.dev/documentation/intl/latest/intl/DateFormat-class.html) class and provide a *pattern* specifying how to parse the string:
```dart
import 'package:intl/intl.dart';
...
var dmyString = '23/4/1999';
var dateTime1 = DateFormat('d/M/y').parse(dmyString);
var mdyString = '04/23/99';
var dateTime2 = DateFormat('MM/dd/yy').parse(mdyString);
var mdyFullString = 'April 23, 1999';
var dateTime3 = DateFormat('MMMM d, y', 'en_US').parse(mdyFullString);
```
See [the `DateFormat` documentation](https://pub.dev/documentation/intl/latest/intl/DateFormat-class.html) for more information about the pattern syntax.
**`DateFormat` limitations:**
+ [`DateFormat` cannot parse dates that lack explicit field separators](https://github.com/dart-lang/intl/issues/210). For such cases, you can resort to using regular expressions (see below).
+ Prior to version 0.17.0 of `package:intl`, [`yy` did not follow the -80/+20 rule](https://github.com/dart-lang/intl/issues/123) that the documentation describes for inferring the century, so if you use a 2-digit year, you might need to adjust the century afterward.
+ As of writing, [`DateFormat` does not support time zones](https://github.com/dart-lang/intl/issues/19). If you need to deal with time zones, [you will need to handle them separately](https://stackoverflow.com/a/64145309/).
* **Last resort:** If your timestamps are in a fixed, known, numeric format, you always can use regular expressions to parse them manually:
```
var dmyString = '23/4/1999';
var re = RegExp(
r'^'
r'(?<day>[0-9]{1,2})'
r'/'
r'(?<month>[0-9]{1,2})'
r'/'
r'(?<year>[0-9]{4,})'
r'$',
);
var match = re.firstMatch(dmyString);
if (match == null) {
throw FormatException('Unrecognized date format');
}
var dateTime4 = DateTime(
int.parse(match.namedGroup('year')!),
int.parse(match.namedGroup('month')!),
int.parse(match.namedGroup('day')!),
);
```
See <https://stackoverflow.com/a/63402975/> for another example.
(I mention using regular expressions for completeness. There are many more points for failure with this approach, so I do *not* recommend it unless there's no other choice. `DateFormat` usually should be sufficient.)
Upvotes: 7 <issue_comment>username_3: I solved this by creating, on the C# server side, this attribute:
```
using Newtonsoft.Json.Converters;
public class DartDateTimeConverter : IsoDateTimeConverter
{
public DartDateTimeConverter()
{
DateTimeFormat = "yyyy'-'MM'-'dd'T'HH':'mm':'ss.FFFFFFK";
}
}
```
and I use it like this:
```
[JsonConverter(converterType: typeof(DartDateTimeConverter))]
public DateTimeOffset CreatedOn { get; set; }
```
Internally, the precision is stored, but the Dart app consuming it gets an `ISO8601` format with the right precision.
HTH
Upvotes: -1 <issue_comment>username_4: ```
import 'package:intl/intl.dart';
DateTime brazilianDate = new DateFormat("dd/MM/yyyy").parse("11/11/2011");
```
Upvotes: 3 <issue_comment>username_5: ```
void main() {
var dateValid = "30/08/2020";
print(convertDateTimePtBR(dateValid));
}
DateTime convertDateTimePtBR(String validade)
{
DateTime parsedDate = DateTime.parse('0001-11-30 00:00:00.000');
List validadeSplit = validade.split('/');
if(validadeSplit.length > 1)
{
String day = validadeSplit[0].toString();
String month = validadeSplit[1].toString();
String year = validadeSplit[2].toString();
parsedDate = DateTime.parse('$year-$month-$day 00:00:00.000');
}
return parsedDate;
}
```
Upvotes: 2 <issue_comment>username_6: you can just use : **`DateTime.parse("your date string");`**
for any extra formating, you can use "Intl" package.
Upvotes: 2 <issue_comment>username_7: a string can be parsed to DateTime object using Dart default function `DateTime.parse("string");`
```
final parsedDate = DateTime.parse("1974-03-20 00:00:00.000");
```
**[Example on Dart Pad](https://dartpad.dev/?b6299aee059e861df29f2a47cbc6dda1)**
[](https://i.stack.imgur.com/3r42W.png)
Upvotes: 2 <issue_comment>username_8: ```
String dateFormatter(date) {
  date = date.split('-');
  DateFormat dateFormat = DateFormat("yMMMd");
  String format = dateFormat.format(DateTime(int.parse(date[0]), int.parse(date[1]), int.parse(date[2])));
  return format;
}
```
Upvotes: 0
<issue_start>username_0: I have some strings with line breaks inside; in the console they are shown with breaks:
```
string1:
test 1
test 2
test 3
```
I need to split the string and make something like this:
```
string1:
test 1
test 2
test 3
```
For most of them the next code works:
```
string1.split('\n\n').join('')
```
but for some strings this doesn't work, so I could assume they have some other line breaks, for example '\r\n'
Can I somehow see all line breaks as they are? I mean as something like the next:
```
string1:
test 1\n\n
test 2\n\n
test 3\n\n
```
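Coming back to the question about line breaks: `JSON.stringify` escapes control characters, so it shows exactly which break sequences a string contains, and a regex split handles both `\n\n` and `\r\n\r\n`. A sketch (the sample string is made up):

```javascript
const string1 = 'test 1\r\n\r\ntest 2\n\ntest 3';

// Reveal the exact break characters: they come back escaped.
console.log(JSON.stringify(string1));

// Split on double breaks of either style, then rejoin one line per entry.
const cleaned = string1.split(/(?:\r?\n){2}/).join('\n');
console.log(cleaned);
```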
<issue_start>username_0: I am using a third party jar that has lots of `System.out.println()`. I am not interested in those messages. It also uses slf4j logs that I am interested in.
So how do I filter out `System.out.println()` messages in my stdout. Is there a quick way? I am trying to avoid writing any script to parse the whole log and keep only the relevant one. I am using maven.
I would prefer to have the logger output go to stdout and the `System.out` output go to a file, so that I can see the relevant log while the build is running in the build environment.<issue_comment>username_1: Redirect the System.out and System.err.
**But be careful**: there will be a recursion if you use the Simple Logger.
`stdout` and `stderr` will be redirected to our own implementation of `OutputStream` (see code comments). Our own implementation redirects the output to `slf4j`.
```
System.setOut(new PrintStream(new OutputStream() {
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
//
// Our own implementation of write
//
@Override
public void write(int b) throws IOException {
//
// After a line break the line will redirect to slf4j
//
if ((b == '\r') || (b == '\n')) {
// Only 'real' content will redirect to slf4j
if (buffer.size() != 0) {
LoggerFactory.getLogger("foo").info(new String(buffer.toByteArray(), "UTF-8"));
}
// A new buffer is needed after a line break
// (there is no 'clear()')
buffer = new ByteArrayOutputStream();
}
buffer.write(b);
}
}));
System.setErr(new PrintStream(new OutputStream() {
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
//
// Our own implementation of write
//
@Override
public void write(int b) throws IOException {
if ((b == '\r') || (b == '\n')) {
if (buffer.size() != 0) {
LoggerFactory.getLogger("foo").error(new String(buffer.toByteArray(), "UTF-8"));
}
buffer = new ByteArrayOutputStream();
}
buffer.write(b);
}
}));
```
Upvotes: 0 <issue_comment>username_2: What I would suggest would be to [redirect System.out to slf4j](https://stackoverflow.com/q/11187461/65839). That way, all logging can be configured in your logging framework in a standardized normal way.
Note that depending on the library you're using to do the redirection, if you have some loggers print to standard output, you may need to [carefully configure it](http://projects.lidalia.org.uk/sysout-over-slf4j/faq.html#stackOverflowError) so that you don't end up in an infinite loop, where a printed message gets redirected to logging, which gets redirected to printing a message, and so on.
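To illustrate the recursion concern in plain Java (no slf4j here — the `[captured]` prefix merely stands in for a real logger call, and the class and method names below are made up): the redirect keeps a reference to the original stream and writes only to that, so nothing it prints can loop back through the replaced `System.out`.

```java
import java.io.OutputStream;
import java.io.PrintStream;

public class StdoutRedirect {
    static String lastCaptured = ""; // visible for demonstration

    // Replace System.out with a stream that hands complete lines to the
    // ORIGINAL stdout; a real setup would call a logger here instead.
    public static void install() {
        PrintStream original = System.out;
        System.setOut(new PrintStream(new OutputStream() {
            private final StringBuilder line = new StringBuilder();

            @Override
            public void write(int b) {
                if (b == '\n') {
                    lastCaptured = "[captured] " + line;
                    original.println(lastCaptured); // cannot recurse: bypasses System.out
                    line.setLength(0);
                } else if (b != '\r') {
                    line.append((char) b);
                }
            }
        }, true));
    }

    public static void main(String[] args) {
        install();
        System.out.println("hello from the third-party jar");
    }
}
```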
Also note that most build environments & IDEs have support for viewing the contents of a log file while it's being generated, so you don't have to print anything to standard output if you don't want to.
Upvotes: 2 [selected_answer] |
2018/03/20 | 966 | 3,834 | <issue_start>username_0: I have this simple form pictured below. Depending on the format selected the appropriate input field will show when clicked.
[](https://i.stack.imgur.com/xlGMl.jpg)
This is what I have so far. I have this form in 2 different locations on the page, so that's why it's written the way that it is. The first function works but not the second. When PDF is clicked it shows the file upload. When Link is clicked it hides the file upload but doesn't show the text field.
HTML
Any help, or a link to a similar question, would be appreciated. Thanks
\*UPDATE: Something I did notice. Whichever div is first, linkURL or uploadPDF, that is the one that will show. For example, if linkURL is first, that input will show when clicked but not the file upload for PDF.
```js
//Format check
$(".pdf").click(function() {
if ($(this).prop("checked")) {
$(this).parent().next(".uploadPDF").css("display", "inline-block");
$(".linkURL").css("display", "none");
}
});
$(".link").click(function() {
if ($(this).prop("checked")) {
$(this).parent().next(".linkURL").css("display", "inline-block");
$(".uploadPDF").css("display", "none");
}
});
```
```html
**Name:** \*
**Format:** \*
PDF
Link
**Attach PDF:** \*
**Link URL:** \*
```
2018/03/20 | 264 | 951 | <issue_start>username_0: I am working on the below jQuery code and the target is to change the content of any tag that is clicked by the user.
I am fine with getting the id of the clicked element but not able to change its content.
```
$(document).ready(function(){
$("td").click(function(){
var elementID = $(this).attr('id');
alert(elementID); //I am OK by accessing the elementID
$("#elementID").text("X"); // How pass this elementId again in order to change content ?
});
});
```<issue_comment>username_1: You already have the element by accessing `$(this)`.
```
$(document).ready(function(){
$("td").click(function(){
$(this).text("X");
});
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
$("td").click(function(){
var elementID = $(this).attr('SomeOtherElementID');
$("#" + elementID).text("X");
});
```
Upvotes: -1 <issue_comment>username_3: correct your line with this: `$("#"+elementID).text("X");`
Upvotes: -1 |
2018/03/20 | 789 | 3,023 | <issue_start>username_0: I've saved `terraform plan -out=my-plan` and intend to save it to source control and inject further to custom tool for ingestion and performing testing etc.
However, the file contents of `my-plan` are jumbled and I'm wondering what the encoding used is.
What is the encoding being used for the Terraform plan file?<issue_comment>username_1: The Terraform plan output is a binary format that is not designed to be used outside of Terraform. Technically you could probably serialise it using whatever Terraform uses to handle the format but there is no stable API for this and could change at any point.
One of the Hashicorp employees ([Phinze](https://github.com/phinze)) briefly covered this in this issue: <https://github.com/hashicorp/terraform/issues/7976>
One, probably reasonably fragile, option would be to simply parse the text output from running `terraform plan`. I use [Terraform Landscape](https://github.com/coinbase/terraform-landscape) to format plan outputs locally when working with JSON diffs that Terraform doesn't handle at all and that copes with this fine. However it also tends to break on the "minor" version upgrades (eg 0.9 to 0.10) as Terraform doesn't specify this as an API at all. [Terraform Plan Parser](https://github.com/lifeomic/terraform-plan-parser) also parses the textual output and notes that it is very much not to be used with the binary output.
Upvotes: 3 <issue_comment>username_2: I found this, and it is suitable for this use: [tfjson](https://github.com/palantir/tfjson)
Upvotes: -1 [selected_answer]<issue_comment>username_3: Terraform was using [gob](https://golang.org/pkg/encoding/gob/) format to encode/decode plan till [version 0.11.x](https://github.com/hashicorp/terraform/blob/v0.11.12/terraform/plan.go#L233)
This changed on terraform version [0.12](https://github.com/hashicorp/terraform/blob/v0.12.7/plans/planfile/tfplan.go#L367), they started using protocol buffer.
The `tfjson` project might not be working for 0.11.x and 0.12.x unless you rewrite the dependencies over there
Upvotes: 1 <issue_comment>username_4: While the other tools mentioned here are useful, things change regularly in the Terraform space and third-party tools often aren't able to be kept up-to-date.
For some time now Terraform has directly supported viewing a plan file in the same human-readable format that is displayed at the time you run `plan`:
```
terraform show
```
Since v0.12 you can now also view a plan file in JSON format, which you could save to work on further with other tools:
```
terraform show -json
```
There's an explanation of the JSON schema at <https://www.terraform.io/docs/internals/json-format.html>. As of writing, note that:
>
> The output ... currently has major version zero to indicate that the format is experimental and subject to change. A future version will assign a non-zero major version ... We do not anticipate any significant breaking changes to the format before its first major version, however.
>
>
>
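As a sketch of post-processing that JSON with ordinary tooling: the plan document below is a hypothetical, heavily truncated stand-in (only the `resource_changes` / `change.actions` fields mirror the documented schema), filtered here with Python:

```python
import json

# Hypothetical, truncated `terraform show -json` output.
plan_text = """
{
  "format_version": "0.1",
  "resource_changes": [
    {"address": "aws_instance.web",   "change": {"actions": ["create"]}},
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["delete", "create"]}},
    {"address": "aws_iam_role.ro",    "change": {"actions": ["no-op"]}}
  ]
}
"""

plan = json.loads(plan_text)
# Keep every resource whose planned actions are not a no-op.
pending = [rc["address"]
           for rc in plan["resource_changes"]
           if rc["change"]["actions"] != ["no-op"]]
print(pending)  # ['aws_instance.web', 'aws_s3_bucket.logs']
```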
Upvotes: 4 |
2018/03/20 | 729 | 2,309 | <issue_start>username_0: I need the folder's full path, given that it contains a .txt file.
Currently I am using the following command
```
D:\Testfolder\_dn>for /r %i in (*.txt) do @echo %~dpi
```
And getting the following output
```
D:\Testfolder\_dn\2nd\
D:\Testfolder\_dn\3rd\
D:\Testfolder\_dn\4th\
D:\Testfolder\_dn\5th\
D:\Testfolder\_dn\first\
```
But I want the output like following
```
D:\Testfolder\_dn\2nd
D:\Testfolder\_dn\3rd
D:\Testfolder\_dn\4th
D:\Testfolder\_dn\5th
D:\Testfolder\_dn\first
```
I tried [remove last characters string batch](https://stackoverflow.com/a/29504225/1772898)
```
for /r %i in (*.txt) do @echo %~dpi:~0,-1%
```
But it is not working.
How can I remove the last \ from the search result?<issue_comment>username_1: Be sure to enable delayed expansion so that the `P` variable gets reevaluated in the loop.
```
SETLOCAL ENABLEDELAYEDEXPANSION
for /r %%i in (*.txt) do (
SET "P=%%~dpi"
echo !P:~0,-1!
)
```
Upvotes: 0 <issue_comment>username_2: The [sub-string expansion syntax](http://ss64.com/nt/syntax-substring.html "Variables: extract part of a variable (substring)") works on normal environment variables only, but not on [`for`](http://ss64.com/nt/for.html "FOR") variable references. To apply that syntax you need to assign the value to a variable first:
```cmd
for /R %i in ("*.txt") do @(set "VAR=%~dpi" & echo/!VAR:~^,-1!)
```
But since you are editing a variable value within a block of code (loop), you need to enable and to apply [delayed variable expansion](http://ss64.com/nt/delayedexpansion.html "EnableDelayedExpansion"). This can be established by opening the command line instance by `cmd /V:ON` or `cmd /V`. However, this can still cause trouble when a path contains `!`-symbols.
---
An alternative and better solution is to avoid string manipulation and delayed variable expansion by appending `.` to the paths (meaning the current directory) and using another `for` loop to resolve the paths by the [`~f` modifier](http://ss64.com/nt/syntax-args.html "Command Line arguments (Parameters)") of the variable reference, like this:
```cmd
for /R %i in ("*.txt") do @for %j in ("%~dpi.") do @echo/%~fj
```
The `""` avoid problems with paths containing `SPACEs` or other token separators (`,`, `;`, `=`,...).
Upvotes: 2 [selected_answer] |
2018/03/20 | 794 | 2,503 | <issue_start>username_0: I have the following json object I am trying to parse with python 3:
```
customerData = {
"Joe": {"visits": 1},
"Carol": {"visits": 2},
"Howard": {"visits": 3},
"Carrie": {"visits": 4}
}
```
I am using the following python code to parse the object:
```
import json
def greetCustomer(customerData):
response = json.loads(customerData)
```
I'm getting the following error:
>
> TypeError: the JSON object must be str, bytes or bytearray, not 'dict'
>
>
><issue_comment>username_1: ```
customerData = {
"Joe": {"visits": 1},
"Carol": {"visits": 2},
"Howard": {"visits": 3},
"Carrie": {"visits": 4}
}
```
is Python code that defines a dictionary. If you had
```
customerJSON = """{
"Joe": {"visits": 1},
"Carol": {"visits": 2},
"Howard": {"visits": 3},
"Carrie": {"visits": 4}
}"""
```
you would have a string that contains a JSON object to be parsed. (Yes, there is a lot of overlap between Python syntax and JSON syntax.
```
assert customerData == json.loads(customerJSON)
```
would pass.)
---
Note, though, that not *all* valid Python resembles valid JSON.
Here are three strings that each might look like a JSON encoding of the same object:
```
json_strs = [
"{'foo': 'bar'}", # invalid JSON, uses single quotes
'{"foo": "bar"}', # valid JSON, uses double quotes
    '{foo: "bar"}' # invalid JSON, keys must be double-quoted (valid JavaScript, though)
]
```
Of the three, only the second is actually valid JSON: `json.loads(json_strs[1]) == {'foo': 'bar'}` holds, while `json.loads` raises an error for the other two.
Conversely, we can define three Python dicts, the first two of which are identical.
```
json_str = json_strs[1] # The valid JSON one
foo = ... # Some value
dicts = [
{'foo': 'bar'}, # valid Python dict
{"foo": "bar"}, # valid Python dict
    {foo: "bar"} # valid Python dict *if* foo is a hashable value,
                 # and not necessarily equal to the first two
]
```
It is true that `dicts[0] == dicts[1] == json.loads(json_str)`. However,
`dicts[2] == json.loads(json_str)` is only true if `foo == "foo"`.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You seem to be mistaking load and dump.
`json.loads` converts a string to a python object, `json.load` converts a json file into a python object whereas `json.dumps` converts a python object to a string and `json.dump` writes a json string to a file from a python object
*Tip*: notice that `loads` and `dumps` have an **s** at the end, as in **string**
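A minimal round trip illustrating the difference:

```python
import json

customer_data = {"Joe": {"visits": 1}}

as_text = json.dumps(customer_data)  # dict -> JSON string ("dump to string")
parsed = json.loads(as_text)         # JSON string -> dict ("load from string")

assert isinstance(as_text, str)
assert parsed == customer_data
```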
Upvotes: 4 |
2018/03/20 | 2,646 | 9,971 | <issue_start>username_0: I am working on an existing Angular application. The version is Angular 4.
The application makes HTTP calls to a REST API from a lot of different components.
I want to show a custom spinner for every HTTP request. Since this is an existing application, there are a lot of places where calls to the REST API are made, and changing the code in every place is not a feasible option.
I would like to implement an abstract solution which would solve this problem.
Please suggest any options.<issue_comment>username_1: Angular 5 comes with the `HttpClient` module. You can find **[more information there](https://angular.io/guide/http)**.
With this module comes something called `interceptors`.
They allow you to do something for every HTTP request.
If you migrate from Http to HttpClient (and you should, Http will be deprecated), you can create an interceptor that can handle a variable in a shared service :
```js
intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
this.sharedService.loading = true;
return next
.handle(req)
.finally(() => this.sharedService.loading = false);
}
```
Now you just have to use this variable in your templates, e.g.:
```html
<div *ngIf="sharedService.loading">Loading...</div>
```
(Be sure to have an injection of your service in the components that display this spinner)
Upvotes: 4 <issue_comment>username_2: This is a basic loading dialog that can be toggled with an angular property. Just add `*ngIf="loader"` to the center-loader and set the property appropriately
```css
.center-loader {
font-size: large;
position:absolute;
z-index:1000;
top: 50%;
left: 50%;
-ms-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
}
@keyframes blink {50% { color: transparent }}
.loader__dot { animation: 1s blink infinite; font-size: x-large;}
.loader__dot:nth-child(2) { animation-delay: 250ms; font-size: x-large;}
.loader__dot:nth-child(3) { animation-delay: 500ms; font-size: x-large;}
```
```html
<div class="center-loader">
  <strong>Loading
    <span class="loader__dot">.</span>
    <span class="loader__dot">.</span>
    <span class="loader__dot">.</span>
  </strong>
</div>
```
Initialize the loader to true for each page, and then set it to false once the service call finishes:
**Top of component**:
```
export class MyComponent implements OnInit {
loader:boolean = true;
//...
```
**onInit():**
```
await this.myService
  .yourServiceCall()
  .then(result => {
    this.resultsSet = result;
    this.loader = false; // <- hide the loader
  })
  .catch(error => console.log(error));
```
Upvotes: 2 <issue_comment>username_3: Angular 4+ has a new HttpClient which supports HttpInterceptors. This allows you to insert code that will be run whenever you make a HTTP request.
It is important to notice that HttpRequest are not long-lived Observables, but they terminate after the response. Furthermore, if the observable is unsubscribed before the response has returned, the request is cancelled and neither of the handlers are being processed. You may therefore end up with a "hanging" loader bar, which never goes away. This typically happens if you navigate a bit fast in your application.
To get around this last issue, we need to create a new Observable to be able to attach teardown-logic.
We return this rather than the original Observable. We also need to keep track of all requests made, because we may run more than one request at a time.
We also need a service which can hold and share the state of whether we have pending requests.
```js
@Injectable()
export class MyLoaderService {
// A BehaviourSubject is an Observable with a default value
public isLoading = new BehaviorSubject(false);
constructor() {}
}
```
The Interceptor uses the MyLoaderService
```js
@Injectable()
export class MyLoaderInterceptor implements HttpInterceptor {
private requests: HttpRequest<any>[] = [];
constructor(private loaderService: MyLoaderService) { }
removeRequest(req: HttpRequest<any>) {
const i = this.requests.indexOf(req);
this.requests.splice(i, 1);
this.loaderService.isLoading.next(this.requests.length > 0);
}
intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
this.requests.push(req);
this.loaderService.isLoading.next(true);
return Observable.create(observer => {
const subscription = next.handle(req)
.subscribe(
event => {
if (event instanceof HttpResponse) {
this.removeRequest(req);
observer.next(event);
}
},
err => { this.removeRequest(req); observer.error(err); },
() => { this.removeRequest(req); observer.complete(); });
// teardown logic in case of cancelled requests
return () => {
this.removeRequest(req);
subscription.unsubscribe();
};
});
}
}
```
Finally, in our Component, we can use the same MyLoaderService and with the async operator we do not even need to subscribe. Since the source value we want to use is from a service, it should be shared as an Observable so that it gets a rendering scope/zone where it is used. If it is just a value, it may not update your GUI as wanted.
```js
@Component({...})
export class MyComponent {
constructor(public myLoaderService: MyLoaderService) {}
}
```
And an example template using async
```html
<div *ngIf="myLoaderService.isLoading | async">Loading!</div>
```
I assume you know how to provide services and set up modules properly.
You can also see a working example at [Stackblitz](https://stackblitz.com/edit/angular-o3my4y)
Upvotes: 4 <issue_comment>username_4: @username_3 has a good idea in his solution. He's handling the case for multiple requests. However, the code could be written simpler, without creating new observable and storing requests in memory. The below code also uses RxJS 6 with pipeable operators:
```js
import { Injectable } from '@angular/core';
import {
HttpRequest,
HttpHandler,
HttpInterceptor,
HttpResponse
} from '@angular/common/http';
import { finalize } from 'rxjs/operators';
import { LoadingService } from '@app/services/loading.service';
import { of } from 'rxjs';
@Injectable()
export class LoadingInterceptor implements HttpInterceptor {
private totalRequests = 0;
constructor(private loadingService: LoadingService) { }
intercept(request: HttpRequest<any>, next: HttpHandler) {
this.totalRequests++;
this.loadingService.setLoading(true);
return next.handle(request).pipe(
finalize(() => {
this.totalRequests--;
if (this.totalRequests === 0) {
this.loadingService.setLoading(false);
}
})
);
}
}
```
Add this interceptor service into your module providers:
```js
@NgModule({
// ...
providers: [
{ provide: HTTP_INTERCEPTORS, useClass: LoadingInterceptor, multi: true }
]
})
export class AppModule { }
```
---
Here's an example of the `LoadingService` implementation:
```js
@Injectable()
export class LoadingService {
private isLoading$$ = new BehaviorSubject(false);
isLoading$ = this.isLoading$$.asObservable();
setLoading(isLoading: boolean) {
this.isLoading$$.next(isLoading);
}
}
```
And here's how you'd use the `LoadingService` in a component:
```js
@Component({
selector: 'app-root',
template: `
    <div *ngIf="loadingService.isLoading$ | async">Loading...</div>
  `,
changeDetection: ChangeDetectionStrategy.OnPush
})
export class AppComponent {
constructor(public loadingService: LoadingService) {}
}
```
Upvotes: 6 <issue_comment>username_5: You can use some CSS/GIF to show a spinner and use it in your interceptor class, or you can simply toggle a true/false flag to show the GIF.
Upvotes: 0 <issue_comment>username_6: It depends upon the approach you follow to use **REST SERVICES**.
**My approach is**
* **`Create a component`** and place it somewhere at the application level.
* **`Create a service`** which has a counter with **`increment`** and **`decrement`** methods.
* This service should decide whether to show the **`loader (component)`** or not by following the below steps.
>
> Increase the counter by one for each request from the client.
>
>
> Decrease the counter on each response, whether `success` or `failure`.
>
>
>
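A framework-free sketch of that counter service (the names are made up; in Angular it would additionally be decorated with `@Injectable()`, and the increment/decrement calls would live in an interceptor):

```typescript
class LoadingCounterService {
  private count = 0;

  increment(): void {
    this.count++;
  }

  decrement(): void {
    // Never go below zero, even if responses arrive unexpectedly.
    this.count = Math.max(0, this.count - 1);
  }

  // The loader component shows itself while this is true.
  get isLoading(): boolean {
    return this.count > 0;
  }
}
```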
Upvotes: 1 <issue_comment>username_7: >
> Angular Interceptors can be used in a number of ways, as they work
> pretty well for manipulating and managing the HTTP calls that we
> make from a client-side web application. We can create a
> utility for showing a mouse loader using an Interceptor.
>
>
>
Please go through the below post for the implementation of `LoaderInterceptor`:-
[Show Loader/Spinner On HTTP Request In Angular using Interceptor](https://medium.com/@imdurgeshpal/show-loader-spinner-on-http-request-in-angular-using-interceptor-68f57a9557a4)
Upvotes: 1 <issue_comment>username_8: I was searching for something that can be used by every component.
I put in a counter, so the spinner stops when every request has finished.
So this works quite well:
```
export class LoadingStatus{
public counter: number = 0;
public isLoading = new Subject<boolean>();
public reset(){
this.counter = 0;
this.isLoading.next(false);
}
}
export function triggerLoading<T>(status: LoadingStatus): (source: Observable<T>) => Observable<T> {
return (source: Observable<T>): Observable<T> => source.pipe(
prepare(() => {
if(status != null){
status.counter++;
status.isLoading.next(true)
}
} ),
finalize(() => {
if(status != null){
status.counter--;
// if there is something like a flikering, maybe use a delay.
if(status.counter <= 0) {
status.counter = 0;
status.isLoading.next(false)
}
}
})
)
}
```
And then call it like:
```
public loadingStatus$ = new LoadingStatus();

public makeRequest(){
this.myService.load()
.pipe(triggerLoading(this.loadingStatus$))
.subscribe(v => {});
}
```
HTML:
```
<div *ngIf="loadingStatus$.isLoading | async">Loading...</div>
```
Upvotes: 0 <issue_comment>username_9: This guy did an awesome job here: [angular loader using rxjs, handle concurrency](https://medium.com/show-loader-during-http-request-handeling/angular-display-loader-on-every-http-request-handeling-concurrency-with-rxjs-6b2515abe52a)
Basically, you make an interceptor. In the interceptor you subscribe to a NEVER observable each time a request begins; on the first subscription you start showing the overlay/loading, you unsubscribe when each request ends, and when the observable reaches 0 subscriptions the overlay is removed.
Upvotes: 0 |
2018/03/20 | 1,349 | 5,067 | <issue_start>username_0: Given
```
case class Fruit(name: String, colour: String)
```
I'd like to deprecate the case class field `colour`.
I tried to achieve the former by both employing Scala `@deprecated`
```
case class Fruit(name: String, @deprecated("message", "since") colour: String)
```
and Java `@Deprecated` annotation
```
case class Fruit(
name: String,
/* @Deprecated with some message here */
@(Deprecated @field)
colour: String
)
```
Unfortunately I wasn't able to make `colour` deprecated in any case and I'm not able to find any resource on that.
Indeed, I can achieve the same task using other approaches, e.g. by relaxing my case class to a normal class and provide a `getter` for `colour` and make the latter deprecated, but I would like to know if deprecating case class fields is actually not possible and -- eventually -- the reason why.
Thank you.
---
**UPDATE**
As Mike rightly pointed out, `colour` is a required field for a case class, so the sense of marking this attribute as `deprecated` is arguable. I'd like to deprecate `colour` since I'm now providing the `colour` information from another construct: I want to let users know that `colour` is no longer the right place to fetch that information from, and I will remove it from `Fruit` in the next releases.
For now, I'd just like to warn users not to fetch the `colour` information from the `Fruit` attribute and, temporarily, I don't really care if they can create instances of `Fruit` with colour information.
---
**UPDATE 2**
As Alexey said, I do have the deprecated warning at compile time. But why can't I also see the deprecation in my IDE, as I would when deprecating a method? I'd expect something like the following
```
val fruit = Fruit("peer", "green")
val colour = fruit.colour
```
I'm using IntelliJ.<issue_comment>username_1: This is an odd use case. If a required field is *deprecated*, then the class itself must be *deprecated* too, surely? That is, assuming what you're trying to do is possible, it's impossible to use the class *at all* without getting a deprecation warning. (This probably explains why it's not possible.)
That said, as you indicate, there are ways to achieve what you're trying to do. Here's one of them:
```scala
case class Fruit(name: String) {
@deprecated("message", "since")
val colour: String = "some default value"
}
object Fruit {
// Deprecated factory method.
@deprecated("message", "since")
def apply(name: String, colour: String): Fruit = Fruit(name)
}
```
Clearly, this assumes that `colour` is no longer required as an attribute of `Fruit`. (If it *is* required, it should not be *deprecated*.) It works by deprecating the *factory method* that creates a `Fruit` using a `colour` value.
If this doesn't work, can you explain why the field needs to be *deprecated*?
Upvotes: 3 <issue_comment>username_2: Your first version seems to work fine: if you paste
```
case class Fruit(name: String, @deprecated("message", "since") colour: String)
println(Fruit("", "").colour)
```
at <https://scastie.scala-lang.org/>, you'll see the deprecation warning. Of course, you need to access the field, whether directly or through an extractor:
```
Fruit("", "") match { case Fruit(x, y) => y }
```
will also warn.
That IDEA doesn't show `colour` as deprecated is I believe just a bug. In general it's quite unreliable in showing errors and warnings in Scala code inline and you should always check by actually building the project.
Upvotes: 4 [selected_answer]<issue_comment>username_3: This is tricky and it also depends on level of compatibility you want.
You might remove the field (and thus remove it from the primary constructor), but also create a deprecated constructor with the old signature. Plus you'll need a similar apply method on the companion object. The problem is, it is going to break pattern matching for the case class (unless you override unapply). Maybe you find it reasonable, maybe not. And, if you don't want to break readers, you'll also have to implement a deprecated getter stub. (It is however questionable what value to return.)
A more conservative approach: Deprecate the primary constructor, create a secondary (non-deprecated) constructor without the value. Also, deprecate the field. This seems to work OK and it does not break unapply method. It seems to raise deprecated warnings as you would want, except the definition also raises a deprecation warning. See this snippet:
```
case class Foo @deprecated() (x: String, @deprecated y: Int){
def this(x: String) = this(x, 0)
}
object Foo {
def apply(x: String) = new Foo(x)
}
println((new Foo("x")).x)
println((new Foo("x", 66)).x)
println((Foo("x")).x)
println((Foo("x", 66)).x)
println((new Foo("x")).y)
println((new Foo("x", 66)).y)
println((Foo("x")).y)
println((Foo("x", 66)).y)
```
Unfortunately, there is [no @SuppressWarning](https://stackoverflow.com/questions/3506370/is-there-an-equivalent-to-suppresswarnings-in-scala).
Of course, adjust parameters for @deprecated as you need.
Upvotes: 0 |
2018/03/20 | 923 | 3,189 | <issue_start>username_0: I am trying to write a custom php script in my Drupal site root that checks if the user is logged in. To check this I import bootstrap.inc. However when I do this it throws me this error
[](https://i.stack.imgur.com/8vgqs.png)
This is the code of the php script in my site root:
```
<?php
require_once './core/includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
global $user;
var_dump($user->uid);
?>
```
Anyone has a solution to this?<issue_comment>username_1: To bootstrap Drupal 8, you need different code. Drupal 8 doesn't have any [`drupal_bootstrap()`](https://api.drupal.org/api/drupal/includes%21bootstrap.inc/function/drupal_bootstrap/7.x) function, so the code you are using would throw a PHP error.
You can use [authorize.php](https://api.drupal.org/api/drupal/core%21authorize.php/8.6.x) as guideline to write your own script.
```
use Drupal\Core\DrupalKernel;
use Symfony\Component\HttpKernel\Exception\HttpExceptionInterface;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
$autoloader = (require_once 'autoload.php');
try {
$request = Request::createFromGlobals();
$kernel = DrupalKernel::createFromRequest($request, $autoloader, 'prod');
$kernel->prepareLegacyRequest($request);
} catch (HttpExceptionInterface $e) {
$response = new Response('', $e->getStatusCode());
$response
->prepare($request)
->send();
exit;
}
\Drupal::moduleHandler()
->addModule('system', 'core/modules/system');
\Drupal::moduleHandler()
->addModule('user', 'core/modules/user');
\Drupal::moduleHandler()
->load('system');
\Drupal::moduleHandler()
->load('user');
$account = \Drupal::service('authentication')
->authenticate($request);
if ($account) {
\Drupal::currentUser()
->setAccount($account);
if (\Drupal::currentUser()->isAuthenticated()) {
// The user is logged-in.
}
}
```
Upvotes: 1 <issue_comment>username_2: I fixed this by using a completely different approach. I wrote a module which sets a cookie at the moment the user logs in to Drupal (I use hook\_user\_login for this). When the user logs out I delete that cookie (I use hook\_user\_logout for this). This is the code of my test.module:
```
/**
* @param $account
*/
function intersoc_content_user_login($account)
{
setcookie("user", "loggedin", time() + (86400 * 30),"/");
}
/**
* @param $account
*/
function intersoc_content_user_logout($account)
{
if (isset($_COOKIE['user']))
{
unset($_COOKIE['user']);
setcookie('user', '', time() - 3600, '/'); //Clearing cookie
}
}
```
Then in my custom script in the site root I check if the cookie is set. When the cookie exists => The user is logged in. If the cookie doesn't exist then the user isn't logged in. The isLoggedIn() function below:
```
/**
* @return bool which indicates if the user is logged in or not
*/
private function isLoggedIn()
{
if(isset($_COOKIE["user"]))
{
return TRUE;
}
else
{
return FALSE;
}
}
```
It isn't the most beautiful solution, but it works!!!
Upvotes: 1 [selected_answer] |
2018/03/20 | 4,369 | 12,955 | <issue_start>username_0: I'm writing a small helper tool for combining two text files into one.
These files store big 2D arrays of float values. Here is a sample of them:
```
File 1
-0,1296169 -0,1286087 -0,1276232 ...
-0,1288124 -0,1278683 -0,1269373 ...
-0,1280221 -0,1271375 -0,12626 ...
...
File 2
-0,1181779 -0,1200798 -0,1219472 ...
-0,1198357 -0,1216468 -0,1234369 ...
-0,1214746 -0,1232006 -0,1249159 ...
...
both may have hundreds of rows and columns ...
```
Values can also be in scientific form (e.g. 1.234e-003).
My goal is to read the two files simultaneously, value by value, and write the output, changing the decimal delimiter from comma to point and converting values from scientific form to standard form in the process.
This version of the program combines only prepared files (delimiter changed to point, values represented in standard form, and values moved to one value per line), but such preparation is impractical if a file has more than a million values.
Here is what i have for now:
```
package main

import (
    "bufio"
    "fmt"
    "io"
    "os"
)
func main() {
file_dB, err := os.Open("d:/dB.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
file_dL, err := os.Open("d:/dL.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
file_out, err := os.Create("d:/out.txt") // also rewrite existing !
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
dB := bufio.NewReader(file_dB)
dL := bufio.NewReader(file_dL)
err = nil
i := 1
for {
line1, _, err := dB.ReadLine()
if len(line1) > 0 && line1[len(line1)-1] == '\n' {
line1 = line1[:len(line1)-1]
}
line2, _, err := dL.ReadLine()
if len(line2) > 0 && line2[len(line2)-1] == '\n' {
line2 = line2[:len(line2)-1]
}
if len(line1) == 0 || len(line2) == 0 || err == io.EOF {
fmt.Println("Total lines done: ", i)
break
} else if err != nil {
fmt.Printf("Error while reading files: %v\n", err)
os.Exit(1)
}
i++
str := string(line1) + ";" + string(line2) + "\n"
if _, err := file_out.WriteString(str); err != nil {
panic(err)
}
}
}
```
How can I use regexp to make this program read unprepared files (first listing) value by value and produce output like:
```
-0.129617;-0.118178
-0.128609;-0.120080
-0.127623;-0.121947
...
```
Input files are always formed the same way:
- the decimal separator is a comma
- there is one space after each value (even the last one in a row)
- each line ends with a newline

Previously I used an expression like `([-?])([0-9]{1})([,]{1})([0-9]{1,12})( {1})` with the Notepad++ replace function to split a line of values into one value per line (the replacement that combined the groups was `$1$2.$4\r\n`), but it breaks whenever a 'scientific form' value occurs.
So is there any way to read the files value by value, without splitting each line into slices/substrings and working over them?<issue_comment>username_1: Something like this. Note the limitation: it assumes the same number of values per line. Be careful, it would blow up with an error if this assumption is wrong :)
```golang
package main
import (
"bufio"
"fmt"
"os"
"strconv"
"strings"
)
func main() {
file_dB, err := os.Open("dB.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
return
}
defer file_dB.Close()
file_dL, err := os.Open("dL.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
return
}
defer file_dL.Close()
file_out, err := os.Create("out.txt") // also rewrite existing !
if err != nil {
fmt.Printf("error opening file: %v\n", err)
return
}
defer file_out.Close()
dB := bufio.NewReader(file_dB)
dL := bufio.NewReader(file_dL)
lc := 0
for {
lc++
line1, _, err := dB.ReadLine()
vals1 := strings.Split(string(line1), " ")
if err != nil {
fmt.Println(lc, err)
return
}
line2, _, err := dL.ReadLine()
vals2 := strings.Split(string(line2), " ")
if err != nil {
fmt.Println(lc, err)
return
}
// Limitation: assumes line1 and line2 have same number of values per line
for i := range vals1 {
dot1 := strings.Replace(vals1[i], ",", ".", 1)
v1, err := strconv.ParseFloat(dot1, 64)
if err != nil {
fmt.Println(lc, err)
continue
}
dot2 := strings.Replace(vals2[i], ",", ".", 1)
v2, err := strconv.ParseFloat(dot2, 64)
if err != nil {
fmt.Println(lc, err)
continue
}
_, err = fmt.Fprintf(file_out, "%v; %v\n", v1, v2)
if err != nil {
fmt.Println(lc, err)
return
}
}
}
}
```
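One more note that may save some work: once the comma is replaced by a period, `strconv.ParseFloat` already accepts scientific-notation input (e.g. `1.234e-003`) as well as the plain form, so no separate regexp is needed for that conversion. A minimal check (the sample values here are just for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVal normalizes the comma decimal separator and parses the value;
// ParseFloat understands scientific notation out of the box.
func parseVal(s string) (float64, error) {
	return strconv.ParseFloat(strings.Replace(strings.TrimSpace(s), ",", ".", 1), 64)
}

func main() {
	for _, s := range []string{"-0,1296169", "1,234e-003"} {
		v, err := parseVal(s)
		fmt.Println(v, err)
	}
}
```

So the per-value loop only has to replace the separator and call `ParseFloat`; both notations come out as the same `float64`.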
Upvotes: 0 <issue_comment>username_2: For example,
```
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"strconv"
"strings"
)
var comma, period = []byte{','}, []byte{'.'}
func readNext(r io.Reader) func() (float64, error) {
s := bufio.NewScanner(r)
var fields []string
return func() (float64, error) {
if len(fields) == 0 {
err := io.EOF
for s.Scan() {
line := bytes.Replace(s.Bytes(), comma, period, -1)
fields = strings.Fields(string(line))
if len(fields) > 0 {
err = nil
break
}
}
if err := s.Err(); err != nil {
return 0, err
}
if err == io.EOF {
return 0, err
}
}
n, err := strconv.ParseFloat(fields[0], 64)
fields = fields[1:]
if err != nil {
return 0, err
}
return n, nil
}
}
func main() {
in1Name := `in1.data`
in2Name := `in2.data`
outName := `out.data`
in1, err := os.Open(in1Name)
if err != nil {
fmt.Fprint(os.Stderr, err)
return
}
defer in1.Close()
in2, err := os.Open(in2Name)
if err != nil {
fmt.Fprint(os.Stderr, err)
return
}
defer in2.Close()
out, err := os.Create(outName)
if err != nil {
fmt.Fprint(os.Stderr, err)
return
}
defer out.Close()
outw := bufio.NewWriter(out)
defer outw.Flush()
next1 := readNext(in1)
next2 := readNext(in2)
for {
n1, err1 := next1()
n2, err2 := next2()
if err1 == io.EOF && err2 == io.EOF {
break
}
if err1 != nil || err2 != nil {
fmt.Fprint(os.Stderr, err1, err2)
return
}
_, err := fmt.Fprintf(outw, "%g;%g\n", n1, n2)
if err != nil {
fmt.Fprint(os.Stderr, err)
return
}
}
}
```
Playground: <https://play.golang.org/p/I_sT_EPFI_W>
Output:
```
$ go run data.go
$ cat in1.data
-0,1296169 -0,1286087 -0,1276232
-0,1288124 -0,1278683 -0,1269373
-0,1280221 -0,1271375 -0,12626
$ cat in2.data
-0,1296169 -0,1286087 -0,1276232
-0,1288124 -0,1278683 -0,1269373
-0,1280221 -0,1271375 -0,12626
$ cat out.data
-0.1296169;-0.1296169
-0.1286087;-0.1286087
-0.1276232;-0.1276232
-0.1288124;-0.1288124
-0.1278683;-0.1278683
-0.1269373;-0.1269373
-0.1280221;-0.1280221
-0.1271375;-0.1271375
-0.12626;-0.12626
$
```
Upvotes: 0 <issue_comment>username_3: Thanks for the help; with other people's points of view I've found my own solution.
**What does this tool do?** Generally, it combines two text files into one.
**Where have I used it?** Creating a "Generic ASCII" text file for a "Country specific coordinate system tool". The input text files are ASCII exports of GRID files from GIS applications (values in arc degrees expected). Later this file may be used to fix local coordinate shifts when working with precise GPS/GNSS receivers.
Here is what I've "developed":
```
package main
import (
"bufio"
"fmt"
"os"
"regexp"
"strconv"
"strings"
)
func main() {
file_dB, err := os.Open("d:/dB.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_dB.Close()
file_dL, err := os.Open("d:/dL.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_dL.Close()
file_out, err := os.Create("d:/out.txt") // also rewrite existing !
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_out.Close()
dB := bufio.NewReader(file_dB)
dL := bufio.NewReader(file_dL)
err = nil
xcorn_float := 0.0
ycorn_float := 0.0
cellsize_float := 0.0
ncols := regexp.MustCompile("[0-9]+")
nrows := regexp.MustCompile("[0-9]+")
xcorn := regexp.MustCompile("[0-9]*,[0-9]*")
ycorn := regexp.MustCompile("[0-9]*,[0-9]*")
cellsize := regexp.MustCompile("[0-9]*,[0-9]*")
    nodataval := regexp.MustCompile("-?\\d+")
tmp := 0.0
// n cols --------------------
ncols_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
ncols_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if ncols.FindString(ncols_dB) != ncols.FindString(ncols_dL) {
panic(err)
}
ncols_dB = ncols.FindString(ncols_dB)
// n rows --------------------
nrows_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
nrows_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if nrows.FindString(nrows_dB) != nrows.FindString(nrows_dL) {
panic(err)
}
nrows_dB = nrows.FindString(nrows_dB)
// X --------------------
xcorn_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
xcorn_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if xcorn.FindString(xcorn_dB) != xcorn.FindString(xcorn_dL) {
panic(err)
}
    xcorn_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(xcorn_dB), ",", ".", 1), 64)
xcorn_float *= 3600.0
// Y --------------------
ycorn_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
ycorn_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if ycorn.FindString(ycorn_dB) != ycorn.FindString(ycorn_dL) {
panic(err)
}
    ycorn_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(ycorn_dB), ",", ".", 1), 64)
ycorn_float *= 3600.0
// cell size --------------------
cellsize_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
cellsize_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if cellsize.FindString(cellsize_dB) != cellsize.FindString(cellsize_dL) {
panic(err)
}
    cellsize_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(cellsize_dB), ",", ".", 1), 64)
cellsize_float *= 3600.0
// nodata value --------------------
nodataval_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
nodataval_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if nodataval.FindString(nodataval_dB) != nodataval.FindString(nodataval_dL) {
panic(err)
}
nodataval_dB = nodataval.FindString(nodataval_dB)
fmt.Print(nodataval_dB)
//making header
if _, err := file_out.WriteString("name\n3;0;2\n1;2;" + nrows_dB + ";" + ncols_dB + "\n" + strconv.FormatFloat(xcorn_float, 'f', -1, 32) + ";" + strconv.FormatFloat(ycorn_float, 'f', -1, 32) + ";" + strconv.FormatFloat(cellsize_float, 'f', -1, 32) + ";" + strconv.FormatFloat(cellsize_float, 'f', -1, 32) + "\n1\n"); err != nil {
panic(err)
}
// valuses --------------------
for {
line1, err := dB.ReadString(' ')
if err != nil {
break
}
if tmp, err = strconv.ParseFloat(strings.TrimSpace(strings.Replace(line1, ",", ".", 1)), 64); err == nil {
line1 = strconv.FormatFloat(tmp, 'f', 8, 64)
}
line2, err := dL.ReadString(' ')
if err != nil {
break
}
if tmp, err = strconv.ParseFloat(strings.TrimSpace(strings.Replace(line2, ",", ".", 1)), 64); err == nil {
line2 = strconv.FormatFloat(tmp, 'f', 8, 64)
}
if err != nil {
panic(err)
}
str := string(line1) + ";" + string(line2) + "\n"
if _, err := file_out.WriteString(str); err != nil {
panic(err)
}
}
}
```
If you have any recommendations, feel free to leave a comment!
Upvotes: 1 |
2018/03/20 | 11,702 | 31,002 | <issue_start>username_0: I'm very new to Java and Android Studio (about five days under my belt from just going at it), but I've run into a snag: I need help figuring out how to get my app to recognize the file path of a photo.
The program should open and display an ImageView in the MainActivity. When you click it, the camera opens and lets you take a photo. After the photo is taken, the app backs out into the MainActivity, displays the full-resolution photo in the ImageView, and uploads it to a website running some PHP code that listens for POSTs and writes them to the online storage folder.
Everything works fine except that when it tries to upload the photo it says no file exists. I've checked the storage on the device and the photo is there. Maybe it's because I had to use a FileProvider to change the URI from file:// to content:// in order to save the image. The PHP script works, so I need help solving the "can't find the file" issue. Thank you for your help in advance.
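From the FileNotFoundException in the logcat at the bottom, it looks to me like `new FileInputStream(filename)` is being handed the `content://` URI string, and a content URI is not a filesystem path. A plain-Java check (with the path string copied from my logcat) shows that `java.io.File` can't resolve such a string:

```java
import java.io.File;

public class ContentUriCheck {
    public static void main(String[] args) {
        // A content:// URI is resolved by Android's ContentResolver,
        // not by the filesystem, so java.io.File never finds it.
        File f = new File("content://com.example.photo.fileprovider/"
                + "external_files/Android/data/com.example.photo/files/Pictures/9999.jpg");
        System.out.println(f.exists()); // false (no such path on disk)
    }
}
```

If I understand it right, the fix would be either to open the stream with `getContentResolver().openInputStream(photoUri)` instead of `new FileInputStream(filename)`, or to pass the real path of the `File` I created (`file.getAbsolutePath()`) to the upload task instead of `photoUri.toString()`; I haven't confirmed which is better.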
The project code is below.
Device: Galaxy Note 8
AndroidManifest.xml
```
<?xml version="1.0" encoding="utf-8"?>
```
MainActivity.java
```
package com.example.photo;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.net.Uri;
import android.os.AsyncTask;
import android.os.Environment;
import android.provider.MediaStore;
import android.support.annotation.NonNull;
import android.support.v4.content.FileProvider;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.ImageView;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
public class MainActivity extends AppCompatActivity {
public interface ApplicationConstant {
String UPLOAD_IMAGE_URL = "http://www.example.com/upload.php";
String TAG = "DEBUG1";
}
String Barcode="9999";
int permsRequestCode;
int RC_PERMISSIONS;
final int CAMERA_REQUEST = 1;
private Uri photoUri;
ImageView mImg;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mImg = (ImageView) findViewById(R.id.imageView);
mImg.setOnClickListener(new View.OnClickListener() {
//@Override
public void onClick(View v) {
Log.i(ApplicationConstant.TAG," " + "click");
showCamera();
}
});
}
private class uploadFileToServerTask extends AsyncTask<String, Void, Object> {
@Override
protected String doInBackground(String... args) {
try {
String lineEnd = "\r\n";
String twoHyphens = "--";
String boundary = "*****";
int bytesRead, bytesAvailable, bufferSize;
byte[] buffer;
@SuppressWarnings("PointlessArithmeticExpression")
int maxBufferSize = 1 * 1024 * 1024;
java.net.URL url = new URL(ApplicationConstant.UPLOAD_IMAGE_URL);
Log.i(ApplicationConstant.TAG, "url " + url);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
// Allow Inputs & Outputs.
connection.setDoInput(true);
connection.setDoOutput(true);
connection.setUseCaches(false);
// Set HTTP method to POST.
connection.setRequestMethod("POST");
connection.setRequestProperty("Connection", "Keep-Alive");
connection.setRequestProperty("Content-Type", "multipart/form-data;boundary=" + boundary);
FileInputStream fileInputStream;
DataOutputStream outputStream;
{
outputStream = new DataOutputStream(connection.getOutputStream());
outputStream.writeBytes(twoHyphens + boundary + lineEnd);
String filename = args[0];
outputStream.writeBytes("Content-Disposition: form-data; name=\"file\";filename=\"" + filename + "\"" + lineEnd);
outputStream.writeBytes(lineEnd);
Log.i(ApplicationConstant.TAG, "filename: " + filename);
fileInputStream = new FileInputStream(filename);
bytesAvailable = fileInputStream.available();
bufferSize = Math.min(bytesAvailable, maxBufferSize);
buffer = new byte[bufferSize];
// Read file
bytesRead = fileInputStream.read(buffer, 0, bufferSize);
while (bytesRead > 0) {
outputStream.write(buffer, 0, bufferSize);
bytesAvailable = fileInputStream.available();
bufferSize = Math.min(bytesAvailable, maxBufferSize);
bytesRead = fileInputStream.read(buffer, 0, bufferSize);
}
outputStream.writeBytes(lineEnd);
outputStream.writeBytes(twoHyphens + boundary + twoHyphens + lineEnd);
}
int serverResponseCode = connection.getResponseCode();
String serverResponseMessage = connection.getResponseMessage();
Log.i(ApplicationConstant.TAG, "serverResponseCode: " + serverResponseCode);
Log.i(ApplicationConstant.TAG, "serverResponseMessage: " + serverResponseMessage);
fileInputStream.close();
outputStream.flush();
outputStream.close();
if (serverResponseCode == 200) {
return "true";
}
} catch (Exception e) {
e.printStackTrace();
}
return "false";
}
@Override
protected void onPostExecute(Object result) {
}
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == CAMERA_REQUEST) {
if (resultCode == RESULT_OK) {
if (photoUri != null) {
mImg.setImageURI(photoUri);
Log.i(ApplicationConstant.TAG,"photo_Uri: " + photoUri);
new uploadFileToServerTask().execute(photoUri.toString());
}
}
}
}
private void showCamera() {
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (intent.resolveActivity(this.getPackageManager()) != null) {
File file = null;
try {
file = createImageFile();
} catch (IOException e) {
e.printStackTrace();
}
photoUri = null;
if (file != null) {
photoUri = FileProvider.getUriForFile(MainActivity.this,
getString(R.string.file_provider_authority),
file);
//photoUri = Uri.fromFile(file);
intent.putExtra(MediaStore.EXTRA_OUTPUT, photoUri);
startActivityForResult(intent, CAMERA_REQUEST);
}
}
}
private File createImageFile() throws IOException {
// Create an image file name
File storageDir = new File(this.getExternalFilesDir(Environment.DIRECTORY_PICTURES), Barcode + ".jpg");
// File storageDir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES);
return storageDir;
}
}
```
values/strings.xml
```
<resources>
    <string name="app_name">Photo</string>
    <string name="file_provider_authority">com.example.photo.fileprovider</string>
</resources>
```
xml/file_provider_path.xml
```
<?xml version="1.0" encoding="utf-8"?>
```
upload.php
```
<?php
$target_dir = "";
$target_file = $target_dir . basename($_FILES["file"]["name"]);
$uploadOk = 1;
$imageFileType = strtolower(pathinfo($target_file,PATHINFO_EXTENSION));
// Check if image file is a actual image or fake image
if(isset($_POST["submit"])) {
$check = getimagesize($_FILES["file"]["tmp_name"]);
if($check !== false) {
echo "File is an image - " . $check["mime"] . ".";
$uploadOk = 1;
} else {
echo "File is not an image.";
$uploadOk = 0;
}
}
// Check if file already exists
if (file_exists($target_file)) {
echo "Sorry, file already exists.";
$uploadOk = 0;
}
// Check file size
if ($_FILES["file"]["size"] > 500000) {
echo "Sorry, your file is too large.";
$uploadOk = 0;
}
// Allow certain file formats
if($imageFileType != "jpg" && $imageFileType != "png" && $imageFileType != "jpeg"
&& $imageFileType != "gif" ) {
echo "Sorry, only JPG, JPEG, PNG & GIF files are allowed.";
$uploadOk = 0;
}
// Check if $uploadOk is set to 0 by an error
if ($uploadOk == 0) {
echo "Sorry, your file was not uploaded.";
// if everything is ok, try to upload file
} else {
if (move_uploaded_file($_FILES["file"]["tmp_name"], $target_file)) {
echo "The file ". basename( $_FILES["file"]["name"]). " has been uploaded.";
} else {
echo "Sorry, there was an error uploading your file.";
}
}
?>
```
Logcat
```
03-20 02:33:39.234 32312-32312/? E/Zygote: v2
03-20 02:33:39.235 32312-32312/? I/libpersona: KNOX_SDCARD checking this for 10276
03-20 02:33:39.235 32312-32312/? I/libpersona: KNOX_SDCARD not a persona
03-20 02:33:39.235 32312-32312/? E/Zygote: accessInfo : 0
03-20 02:33:39.236 32312-32312/? W/SELinux: SELinux selinux_android_compute_policy_index : Policy Index[2], Con:u:r:zygote:s0 RAM:SEPF_SECMOBILE_7.1.1_0004, [-1 -1 -1 -1 0 1]
03-20 02:33:39.236 32312-32312/? I/SELinux: SELinux: seapp_context_lookup: seinfo=untrusted, level=s0:c512,c768, pkgname=com.example.photo
03-20 02:33:39.241 32312-32312/? I/art: Late-enabling -Xcheck:jni
03-20 02:33:39.452 32312-32312/com.example.photo I/InstantRun: starting instant run server: is main process
03-20 02:33:39.510 32312-32312/com.example.photo V/ActivityThread: performLaunchActivity: mActivityCurrentConfig={0 1.0 themeSeq = 0 showBtnBg = 0 310mcc260mnc [en_US,ja_JP] ldltr sw411dp w411dp h773dp 560dpi nrml long port ?dc finger -keyb/v/h -nav/h mkbd/h desktop/d s.112}
03-20 02:33:39.522 32312-32312/com.example.photo W/art: Before Android 4.1, method android.graphics.PorterDuffColorFilter android.support.graphics.drawable.VectorDrawableCompat.updateTintFilter(android.graphics.PorterDuffColorFilter, android.content.res.ColorStateList, android.graphics.PorterDuff$Mode) would have incorrectly overridden the package-private method in android.graphics.drawable.Drawable
03-20 02:33:39.611 32312-32312/com.example.photo D/TextView: setTypeface with style : 0
03-20 02:33:39.612 32312-32312/com.example.photo D/TextView: setTypeface with style : 0
03-20 02:33:39.649 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: ThreadedRenderer.create() translucent=false
03-20 02:33:39.653 32312-32312/com.example.photo D/InputTransport: Input channel constructed: fd=87
03-20 02:33:39.654 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: setView = DecorView@94b80fc[MainActivity] touchMode=true
03-20 02:33:39.663 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: dispatchAttachedToWindow
03-20 02:33:39.688 32312-32312/com.example.photo V/Surface: sf_framedrop debug : 0x4f4c, game : false, logging : 0
03-20 02:33:39.688 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: Relayout returned: oldFrame=[0,0][0,0] newFrame=[0,0][1440,2960] result=0x27 surface={isValid=true 548279813120} surfaceGenerationChanged=true
03-20 02:33:39.688 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.initialize() mSurface={isValid=true 548279813120} hwInitialized=true
03-20 02:33:39.691 32312-32379/com.example.photo D/libEGL: loaded /vendor/lib64/egl/libEGL_adreno.so
03-20 02:33:39.703 32312-32379/com.example.photo D/libEGL: loaded /vendor/lib64/egl/libGLESv1_CM_adreno.so
03-20 02:33:39.709 32312-32312/com.example.photo W/art: Before Android 4.1, method int android.support.v7.widget.ListViewCompat.lookForSelectablePosition(int, boolean) would have incorrectly overridden the package-private method in android.widget.ListView
03-20 02:33:39.711 32312-32379/com.example.photo D/libEGL: loaded /vendor/lib64/egl/libGLESv2_adreno.so
03-20 02:33:39.715 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: MSG_RESIZED_REPORT: frame=Rect(0, 0 - 1440, 2960) ci=Rect(0, 84 - 0, 168) vi=Rect(0, 84 - 0, 168) or=1
03-20 02:33:39.715 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: MSG_WINDOW_FOCUS_CHANGED 1
03-20 02:33:39.715 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.initializeIfNeeded()#2 mSurface={isValid=true 548279813120}
03-20 02:33:39.716 32312-32312/com.example.photo V/InputMethodManager: Starting input: tba=android.view.inputmethod.EditorInfo@72b6a0b nm : com.example.photo ic=null
03-20 02:33:39.716 32312-32312/com.example.photo I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus
03-20 02:33:39.721 32312-32324/com.example.photo D/InputTransport: Input channel constructed: fd=95
03-20 02:33:39.725 32312-32379/com.example.photo I/Adreno: QUALCOMM build : 33f9106, Ia8660bee96
Build Date : 08/09/17
OpenGL ES Shader Compiler Version: XE031.14.00.01
Local Branch :
Remote Branch : refs/tags/AU_LINUX_ANDROID_LA.UM.5.7.C2.07.01.01.292.070
Remote Branch : NONE
Reconstruct Branch : NOTHING
03-20 02:33:39.726 32312-32379/com.example.photo I/Adreno: PFP: 0x005ff104, ME: 0x005ff063
03-20 02:33:39.727 32312-32379/com.example.photo I/OpenGLRenderer: Initialized EGL, version 1.4
03-20 02:33:39.727 32312-32379/com.example.photo D/OpenGLRenderer: Swap behavior 1
03-20 02:33:39.745 32312-32312/com.example.photo V/InputMethodManager: Starting input: tba=android.view.inputmethod.EditorInfo@b83b500 nm : com.example.photo ic=null
03-20 02:33:42.813 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: ViewPostImeInputStage processPointer 0
03-20 02:33:42.888 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: ViewPostImeInputStage processPointer 1
03-20 02:33:42.891 32312-32312/com.example.photo I/DEBUG1: click
03-20 02:33:42.915 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: MSG_WINDOW_FOCUS_CHANGED 0
03-20 02:33:43.165 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.destroy()#1
03-20 02:33:43.173 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: Relayout returned: oldFrame=[0,0][1440,2960] newFrame=[0,0][1440,2960] result=0x5 surface={isValid=false 0} surfaceGenerationChanged=true
03-20 02:33:43.173 32312-32312/com.example.photo D/InputTransport: Input channel destroyed: fd=95
03-20 02:33:49.502 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.destroy()#1
03-20 02:33:49.508 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: Relayout returned: oldFrame=[0,0][1440,2960] newFrame=[0,0][1440,2960] result=0x1 surface={isValid=false 0} surfaceGenerationChanged=false
03-20 02:33:49.531 32312-32312/com.example.photo V/Surface: sf_framedrop debug : 0x4f4c, game : false, logging : 0
03-20 02:33:49.532 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: Relayout returned: oldFrame=[0,0][1440,2960] newFrame=[0,0][1440,2960] result=0x7 surface={isValid=true 548279813120} surfaceGenerationChanged=true
03-20 02:33:49.532 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.initialize() mSurface={isValid=true 548279813120} hwInitialized=true
03-20 02:33:49.545 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: MSG_WINDOW_FOCUS_CHANGED 1
03-20 02:33:49.545 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.initializeIfNeeded()#2 mSurface={isValid=true 548279813120}
03-20 02:33:49.545 32312-32312/com.example.photo V/InputMethodManager: Starting input: tba=android.view.inputmethod.EditorInfo@4c32418 nm : com.example.photo ic=null
03-20 02:33:49.545 32312-32312/com.example.photo I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus
03-20 02:33:49.547 32312-32312/com.example.photo D/InputTransport: Input channel constructed: fd=91
03-20 02:33:49.797 32312-32312/com.example.photo I/DEBUG1: photo_Uri: content://com.example.photo.fileprovider/external_files/Android/data/com.example.photo/files/Pictures/9999.jpg
03-20 02:33:49.799 32312-2026/com.example.photo I/DEBUG1: url http://www.example.com/upload.php
03-20 02:33:49.802 32312-2026/com.example.photo D/NetworkSecurityConfig: No Network Security Config specified, using platform default
03-20 02:33:49.803 32312-2026/com.example.photo I/System.out: (HTTPLog)-Static: isSBSettingEnabled false
03-20 02:33:49.803 32312-2026/com.example.photo I/System.out: (HTTPLog)-Static: isSBSettingEnabled false
03-20 02:33:49.903 32312-2026/com.example.photo D/TcpOptimizer: TcpOptimizer-ON
03-20 02:33:49.983 32312-2026/com.example.photo I/DEBUG1: filename: content://com.example.photo.fileprovider/external_files/Android/data/com.example.photo/files/Pictures/9999.jpg
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: java.io.FileNotFoundException: content:/com.example.photo.fileprovider/external_files/Android/data/com.example.photo/files/Pictures/9999.jpg (No such file or directory)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.io.FileInputStream.open(Native Method)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.io.FileInputStream.(FileInputStream.java:146)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.io.FileInputStream.(FileInputStream.java:99)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at com.example.photo.MainActivity$uploadFileToServerTask.doInBackground(MainActivity.java:94)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at com.example.photo.MainActivity$uploadFileToServerTask.doInBackground(MainActivity.java:55)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at android.os.AsyncTask$2.call(AsyncTask.java:305)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.util.concurrent.FutureTask.run(FutureTask.java:237)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:243)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
03-20 02:33:49.983 32312-2026/com.example.photo W/System.err: at java.lang.Thread.run(Thread.java:762)
03-20 02:33:52.113 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: mHardwareRenderer.destroy()#4
03-20 02:33:52.113 32312-32312/com.example.photo D/ViewRootImpl@4c4ddef[MainActivity]: dispatchDetachedFromWindow
03-20 02:33:52.117 32312-32312/com.example.photo D/InputTransport: Input channel destroyed: fd=87
03-20 02:33:52.117 32312-32312/com.example.photo W/SemDesktopModeManager: unregisterListener: Listener is null
03-20 02:33:52.120 32312-32312/com.example.photo V/ActivityThread: performLaunchActivity: mActivityCurrentConfig={0 1.0 themeSeq = 0 showBtnBg = 0 310mcc260mnc [en_US,ja_JP] ldltr sw411dp w797dp h387dp 560dpi nrml long land ?dc finger -keyb/v/h -nav/h mkbd/h desktop/d s.113}
03-20 02:33:52.134 32312-32312/com.example.photo D/TextView: setTypeface with style : 0
03-20 02:33:52.134 32312-32312/com.example.photo D/TextView: setTypeface with style : 0
03-20 02:33:52.141 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: ThreadedRenderer.create() translucent=false
03-20 02:33:52.179 32312-32312/com.example.photo D/InputTransport: Input channel constructed: fd=93
03-20 02:33:52.179 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: setView = DecorView@8545f1d[MainActivity] touchMode=true
03-20 02:33:52.183 32312-32312/com.example.photo E/ViewRootImpl: sendUserActionEvent() returned.
03-20 02:33:52.183 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: dispatchAttachedToWindow
03-20 02:33:52.210 32312-32312/com.example.photo V/Surface: sf_framedrop debug : 0x4f4c, game : false, logging : 0
03-20 02:33:52.211 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: Relayout returned: oldFrame=[0,0][0,0] newFrame=[0,0][2960,1440] result=0x27 surface={isValid=true 548279813120} surfaceGenerationChanged=true
03-20 02:33:52.211 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: mHardwareRenderer.initialize() mSurface={isValid=true 548279813120} hwInitialized=true
03-20 02:33:52.220 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: MSG_RESIZED_REPORT: frame=Rect(0, 0 - 2960, 1440) ci=Rect(168, 84 - 0, 0) vi=Rect(168, 84 - 0, 0) or=2
03-20 02:33:52.231 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: MSG_WINDOW_FOCUS_CHANGED 1
03-20 02:33:52.231 32312-32312/com.example.photo D/ViewRootImpl@b05cbf4[MainActivity]: mHardwareRenderer.initializeIfNeeded()#2 mSurface={isValid=true 548279813120}
03-20 02:33:52.232 32312-32312/com.example.photo V/InputMethodManager: Starting input: tba=android.view.inputmethod.EditorInfo@e558592 nm : com.example.photo ic=null
03-20 02:33:52.232 32312-32312/com.example.photo I/InputMethodManager: startInputInner - mService.startInputOrWindowGainedFocus
03-20 02:33:52.236 32312-32312/com.example.photo D/InputTransport: Input channel constructed: fd=95
03-20 02:33:52.236 32312-32312/com.example.photo D/InputTransport: Input channel destroyed: fd=91
```
return
}
_, err := fmt.Fprintf(outw, "%g;%g\n", n1, n2)
if err != nil {
fmt.Fprint(os.Stderr, err)
return
}
}
}
```
Playground: <https://play.golang.org/p/I_sT_EPFI_W>
Output:
```
$ go run data.go
$ cat in1.data
-0,1296169 -0,1286087 -0,1276232
-0,1288124 -0,1278683 -0,1269373
-0,1280221 -0,1271375 -0,12626
$ cat in2.data
-0,1296169 -0,1286087 -0,1276232
-0,1288124 -0,1278683 -0,1269373
-0,1280221 -0,1271375 -0,12626
$ cat out.data
-0.1296169;-0.1296169
-0.1286087;-0.1286087
-0.1276232;-0.1276232
-0.1288124;-0.1288124
-0.1278683;-0.1278683
-0.1269373;-0.1269373
-0.1280221;-0.1280221
-0.1271375;-0.1271375
-0.12626;-0.12626
$
```
Upvotes: 0 <issue_comment>username_3: Thanks for help, with points of view of another peoples i've found my own solution.
**What this tool does?** Generally it combines two text files to one.
**Where i've used it?** Creating "Generic ASCII" text file for "Country specific coordinate system tool". Input text files are ASCII export of GRID files from GIS applications (values in arc degrees expected). Later this file may be used to fix local coordinate shifts when working with precise GPS/GNSS receivers.
Here what i've "developed":
```
package main
import (
"bufio"
"fmt"
"os"
"regexp"
"strconv"
"strings"
)
func main() {
file_dB, err := os.Open("d:/dB.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_dB.Close()
file_dL, err := os.Open("d:/dL.txt")
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_dL.Close()
file_out, err := os.Create("d:/out.txt") // also rewrite existing !
if err != nil {
fmt.Printf("error opening file: %v\n", err)
os.Exit(1)
}
defer file_out.Close()
dB := bufio.NewReader(file_dB)
dL := bufio.NewReader(file_dL)
err = nil
xcorn_float := 0.0
ycorn_float := 0.0
cellsize_float := 0.0
ncols := regexp.MustCompile("[0-9]+")
nrows := regexp.MustCompile("[0-9]+")
xcorn := regexp.MustCompile("[0-9]*,[0-9]*")
ycorn := regexp.MustCompile("[0-9]*,[0-9]*")
cellsize := regexp.MustCompile("[0-9]*,[0-9]*")
nodataval := regexp.MustCompile("-?\\d+")
tmp := 0.0
// n cols --------------------
ncols_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
ncols_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if ncols.FindString(ncols_dB) != ncols.FindString(ncols_dL) {
panic(err)
}
ncols_dB = ncols.FindString(ncols_dB)
// n rows --------------------
nrows_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
nrows_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if nrows.FindString(nrows_dB) != nrows.FindString(nrows_dL) {
panic(err)
}
nrows_dB = nrows.FindString(nrows_dB)
// X --------------------
xcorn_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
xcorn_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if xcorn.FindString(xcorn_dB) != xcorn.FindString(xcorn_dL) {
panic(err)
}
xcorn_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(xcorn_dB), ",", ".", 1), 64)
xcorn_float *= 3600.0
// Y --------------------
ycorn_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
ycorn_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if ycorn.FindString(ycorn_dB) != ycorn.FindString(ycorn_dL) {
panic(err)
}
ycorn_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(ycorn_dB), ",", ".", 1), 64)
ycorn_float *= 3600.0
// cell size --------------------
cellsize_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
cellsize_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if cellsize.FindString(cellsize_dB) != cellsize.FindString(cellsize_dL) {
panic(err)
}
cellsize_float, err = strconv.ParseFloat(strings.Replace(cellsize.FindString(cellsize_dB), ",", ".", 1), 64)
cellsize_float *= 3600.0
// nodata value --------------------
nodataval_dB, err := dB.ReadString('\n')
if err != nil {
panic(err)
}
nodataval_dL, err := dL.ReadString('\n')
if err != nil {
panic(err)
}
if nodataval.FindString(nodataval_dB) != nodataval.FindString(nodataval_dL) {
panic(err)
}
nodataval_dB = nodataval.FindString(nodataval_dB)
fmt.Print(nodataval_dB)
//making header
if _, err := file_out.WriteString("name\n3;0;2\n1;2;" + nrows_dB + ";" + ncols_dB + "\n" + strconv.FormatFloat(xcorn_float, 'f', -1, 32) + ";" + strconv.FormatFloat(ycorn_float, 'f', -1, 32) + ";" + strconv.FormatFloat(cellsize_float, 'f', -1, 32) + ";" + strconv.FormatFloat(cellsize_float, 'f', -1, 32) + "\n1\n"); err != nil {
panic(err)
}
// valuses --------------------
for {
line1, err := dB.ReadString(' ')
if err != nil {
break
}
if tmp, err = strconv.ParseFloat(strings.TrimSpace(strings.Replace(line1, ",", ".", 1)), 64); err == nil {
line1 = strconv.FormatFloat(tmp, 'f', 8, 64)
}
line2, err := dL.ReadString(' ')
if err != nil {
break
}
if tmp, err = strconv.ParseFloat(strings.TrimSpace(strings.Replace(line2, ",", ".", 1)), 64); err == nil {
line2 = strconv.FormatFloat(tmp, 'f', 8, 64)
}
if err != nil {
panic(err)
}
str := string(line1) + ";" + string(line2) + "\n"
if _, err := file_out.WriteString(str); err != nil {
panic(err)
}
}
}
```
If you have any recommendations, feel free to leave a comment!
Upvotes: 1 |
2018/03/20 | 568 | 1,750 | <issue_start>username_0: I am trying to plot a marker in X range chart. (For example: After Sprint4, I would like to show a circular milestone in DEV category). Refer following JS fiddle.
Demo: <https://jsfiddle.net/amrutaJgtp/jgn9bLak/56/>
```
{
type: "scatter",
x: Date.UTC(2014, 5, 3),
//x2: Date.UTC(2014, 4, 31),
y: 0,
marker: {
enabled: true,
symbol: 'circle',
fillColor: "yellow",
lineWidth: 1
}
},
```
Is there any way to achieve this?<issue_comment>username_1: I would suggest you make a separate series for scatter-points to keep these circular milestones since xRange does not support this point format. For example:
```
series: [{
name: 'milestones',
type: 'scatter',
stickyTracking: false,
marker: {
enabled: true,
symbol: 'circle',
fillColor: "yellow",
lineWidth: 5,
radius: 10
},
data: [{
x: Date.UTC(2014, 5, 3),
y: 0,
}]
},
...
]
```
You could also change marker per point, to have different shapes for different points. In addition to this, you would need to change your tooltip formatter like this:
```
tooltip: {
formatter: function() {
if (this.series.name != 'milestones') {
...
}
}
}
```
**Working example:** <https://jsfiddle.net/jgn9bLak/90/>
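As a sketch of the per-point idea mentioned above (the dates, colours and sizes here are only placeholders, not values from the question), each object in the scatter series `data` array can carry its own `marker` block:

```javascript
// Per-point marker overrides: each milestone point supplies its own
// `marker` options, so different milestones can use different shapes.
var milestoneData = [{
  x: Date.UTC(2014, 5, 3),
  y: 0,
  marker: { symbol: 'circle', fillColor: 'yellow', radius: 10 }
}, {
  x: Date.UTC(2014, 4, 31),
  y: 1,
  marker: { symbol: 'diamond', fillColor: 'orange', radius: 8 }
}];
```

This array would then replace the `data` property of the milestones series shown above.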
Upvotes: 2 <issue_comment>username_2: As @username_1 said, you need to create another series; here is another example:
<https://jsfiddle.net/username_2/oL83jty5/>
```
{
// Second series
name: 'Granted',
color: 'green',
id: 'blue',
marker: {
symbol: 'diamond'
},
dataRaw: [{
y: 1,
xRanges: [
// first value: from; second value: to
[toMs(2006, 10), toMs(2006, 10)]
]
}]
}
```
Upvotes: 0 |
2018/03/20 | 1,563 | 4,682 | <issue_start>username_0: i want it trigger when number is 0. when number is 1 or 2, when number is 3 or 4, when number is 5 or 6. when trigger it will change height some element. for now it always return 70px.
```
if(y == 0){
$('slpage').style.height = '0px';
}else if(y >= 1){
$('slpage').style.height = '70px';
}else if(y <=4){
$('slpage').style.height = '140px';
}else if(y >= 5){
$('slpage').style.height = '210px';
}
```
thanks<issue_comment>username_1: You should consider reordering your statements so that the highest values are checked first and the lowest last.
```
if(y == 0){
$('slpage').style.height = '0px';
}else if(y >= 5){
$('slpage').style.height = '210px';
}else if(y >= 3){
$('slpage').style.height = '140px';
}else if(y >= 1){
$('slpage').style.height = '70px';
}
```
Otherwise, you end up triggering the lowest always before reaching the highest.
And I guess you made a typo in the check in your third condition. Corrected.
>
> switch if the integer dependent. its means it should 0,1,2,3,4,5,6. but my situation is if 0. it trigger. if 1-2. it trigger. if 3-4. it trigger. if 5-6. it trigger.
>
>
>
Don't restrict yourself using switch just because you have to cover 2 cases at a time. `Switch` have that facility too.
```
switch (y) {
case 0:
$('slpage').style.height = '0px';
break;
case 1:
case 2:
$('slpage').style.height = '70px';
break;
case 3:
case 4:
$('slpage').style.height = '140px';
break;
case 5:
case 6:
$('slpage').style.height = '210px';
break;
}
```
Assuming you are using jQuery, your jQuery selector is wrong: you are missing `#` or `.`. Ignore this if you are not.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Switch statement maybe ?
```
switch (y) {
case 0:
$('slpage').style.height = '0px';
break;
case 1:
case 2:
$('slpage').style.height = '70px';
break;
case 3:
case 4:
$('slpage').style.height = '140px';
break;
case 5:
default:
$('slpage').style.height = '210px';
}
```
For more infos you can check: <https://www.w3schools.com/js/js_switch.asp>
Upvotes: 0 <issue_comment>username_3: When you have y = {1,2,3,4,5,6} it satisfies the condition that it is >=1 hence it breaks it at that point. Instead change your logic from highest to lowest
```
if(y >=5){
$('slpage').style.height = '210px';
}else if(y >= 3){
$('slpage').style.height = '140px';
}else if(y >=1){
$('slpage').style.height = '70px';
}else {
$('slpage').style.height = '0px';
}
```
Upvotes: 0 <issue_comment>username_4: You can use [switch case](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Statements/switch) also
```
switch (y) {
case 0:
$('slpage').style.height = '0px';
break;
case 1:
case 2:
$('slpage').style.height = '70px';
break;
case 3:
case 4:
$('slpage').style.height = '140px';
break;
case 5:
case 6:
$('slpage').style.height = '210px';
break;
}
```
```js
function sw(y) {
switch (y) {
case 0:
console.log('0px');
break;
case 1:
case 2:
console.log('70px');
break;
case 3:
case 4:
console.log('140px');
break;
case 5:
case 6:
console.log('210px');
break;
default:
console.log('no matched value');
}
}
sw(0);
sw(1);
sw(2);
sw(3);
sw(4);
sw(5);
sw(6);
sw()
```
Upvotes: 1 <issue_comment>username_5: DRY (Don't Repeat Yourself)
```
var h = 0;
if (y >= 5) h = 210;
else if (y >= 3) h = 140;
else if (y >= 1) h = 70;
$('slpage').style.height = h+'px';
```
assuming
```
function $(id) { return document.getElementById(id); }
```
If you are using jQuery, you need
```
$('.slpage').css({"height": h+'px'});
```
or
```
$('#slpage').css({"height": h+'px'});
```
depending on selector
Upvotes: 0 <issue_comment>username_6: Better to use `switch`:
```
switch (y) {
case 0:
$('slpage').style.height = '0px';
break;
case 1:
case 2:
$('slpage').style.height = '70px';
break;
case 3:
case 4:
$('slpage').style.height = '140px';
break;
case 5:
case 6:
$('slpage').style.height = '210px';
break;
default:
$('slpage').style.height = '0px';
}
```
Upvotes: 0 |
2018/03/20 | 828 | 2,342 | <issue_start>username_0: I am trying to produce a variable number of rectangles (layers) in a ggplot of a zoo object. I would like to do this in a loop since I do not know ahead of time how many rectangles I will need. Here is a toy example.
```
library("zoo")
library("ggplot2")
set.seed(1)
y <- runif(50, min = 1, max = 2)
start <- as.numeric(as.Date("2018-01-01"))
x <- as.Date(start:(start + 49))
x.zoo <- zoo(y, order.by = x)
## Fill areas
bars <- data.frame(start = c(x[5], x[20], x[35]),
end = c(x[10], x[25], x[40]))
```
I can plot these manually with this code:
```
## Plot manually
print(autoplot.zoo(x.zoo, facets = NULL) +
geom_rect(aes(xmin = bars[1,1],
xmax = bars[1,2], ymin = -Inf, ymax = Inf),
fill = "pink", alpha = 0.01) +
geom_rect(aes(xmin = bars[2,1],
xmax = bars[2,2], ymin = -Inf, ymax = Inf),
fill = "pink", alpha = 0.01) +
geom_rect(aes(xmin = bars[3,1],
xmax = bars[3,2], ymin = -Inf, ymax = Inf),
fill = "pink", alpha = 0.01))
```
This gives me this desired image:
[](https://i.stack.imgur.com/5vYgO.png)
I tried using the loop below but it only plots the last bar. How do I do this??
```
## This didn't work but illustrates what I am trying to do
p = autoplot.zoo(x.zoo, facets = NULL)
for(i in 1:3) {
p = p + geom_rect(aes(xmin = bars[i,1],
xmax = bars[i,2], ymin = -Inf, ymax = Inf),
fill = "pink", alpha = 0.01)
}
print(p)
```<issue_comment>username_1: One way to avoid the for loop is to convert `x.zoo` into a `data.frame` and map the data to `geom_line`. This way, you can map the `bars` data to `geom_rect` separately.
```
dat <- data.frame(index = index(x.zoo), data.frame(x.zoo))
ggplot() +
geom_rect(data = bars, aes(xmin = start, xmax = end, ymin =-Inf, ymax = Inf), fill = 'pink', alpha = .5) +
geom_line(data=dat, aes(x = index, y = x.zoo))
```
Upvotes: 1 <issue_comment>username_2: You don't need a loop. `geom_rect` is vectorised
```
autoplot.zoo(x.zoo, facets = NULL) +
geom_rect(aes(xmin = start, xmax = end, ymin = -Inf, ymax = Inf), data = bars, fill = "pink", alpha = 0.4, inherit.aes = FALSE)
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 430 | 1,338 | <issue_start>username_0: Inside my **app.component.html** I have the following code:
```
```
<router-outlet></router-outlet>
```
How can I prevent Angular from reloading my `AppComponent` when I navigate to other routes?
**app-routing.module**
```
const routes: Routes = [
{ path: '', pathMatch: 'full', component: HomeComponent },
{ path: 'success', component: SuccessComponent },
{ path: 'error', component: ErrorComponent },
];
```
2018/03/20 | 326 | 1,218 | <issue_start>username_0: I am trying out Rail 5.2.0.rc1 Active Storage, using its included JavaScript library to upload PDF docs directly from the client to the cloud.
But on submitting a form I get a browser error in both Firefox & Chrome:
```
Cross-Origin Request Blocked... (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
```
Headers are set thus this.xhr.setRequestHeader... in the activestorage.js and dont appear to be configurable: <https://github.com/rails/rails/blob/master/activestorage/app/javascript/activestorage/blob_record.js>
any suggestions anyone?<issue_comment>username_1: Try setting the CORS permission on the S3 bucket with the following:
```
xml version="1.0" encoding="UTF-8"?
\*
GET
3000
Authorization
\*
PUT
POST
3000
\*
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: Note that in the new S3 console, the CORS configuration must be JSON. the following is working correctly:
```json
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"POST",
"GET",
"PUT"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
}
]
```
Upvotes: 0 |
2018/03/20 | 629 | 2,103 | <issue_start>username_0: As i understand from documentation, Azure AD B2C creates a new local account for every user that comes from a social login such as GMail/Facebook while signin first time (Correct me, if i'm wrong). However i want to intercept this and link the user to an already existing (user's own) local account without creating a new local account, through custom policies.<issue_comment>username_1: [The Wingtip sample](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/blob/master/wingtipgamesb2c) contains an example of this flow.
See [the "B2C\_1A\_link" relying party file](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/blob/master/wingtipgamesb2c/Policies/b2ctechready.onmicrosoft.com_B2C_1A_link.xml) and [the "Link" user journey](https://github.com/Azure-Samples/active-directory-b2c-advanced-policies/blob/89b0c2e3f78605521e16d443a160fdbb0e5acca9/wingtipgamesb2c/Policies/b2ctechready.onmicrosoft.com_B2C_1A_base.xml#L3596) for reference.
Note this user journey prompts the end user to log in with a local account before they log in with the social account.
Upvotes: 4 [selected_answer]<issue_comment>username_2: A detailed sample is given [here](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/tree/master/scenarios/source/aadb2c-ief-account-linkage).
Updating "user identities" will link the social account with the local account.
This can be achieved by having a user journey similar to the below.
```
false
\*\*
\*\*
\*\*
\*\*
```
Then create a 'LinkExternalAccount.xml' policy similar to the below.
```
PolicyProfile
```
Once we run our 'Linkexternalaccount.xml' it will redirect to our local account login and after successful login, it will ask for IDP selection and based on user selection 'User identities' attribute will be updated. We can check the same by querying against the user. A sample user identity looks like below,
```
"userIdentities": [
  {
    "issuer": "google.com",
    "issuerUserId": "MTA5MjA5ODQwNzAyNjc3NTEzMzM5"
  }
]
```
Upvotes: 2 |
2018/03/20 | 1,672 | 5,738 | <issue_start>username_0: I have a list of songs setup with Subject and Observable (shown with `| async` in view), and now I want to delete a song off the list, do some `filter()` and call `next()` on the Subject.
**How and where do I filter?** Right now I am doing `getValue()` on Subject and passing that to `next()` on, well, Subject. This just seems wrong and circularish.
I also tried subscribing to the Subject and getting the data that way, filtering it and calling `next()` inside `subscribe()`, but I got a RangeError.
I could filter the Observable by storing all deleted id's. The Subject's list then becomes out of sync by having deleted songs on there and also every observer would have to have the deleted-id's-list which seems ludicrous. I'm rapidly growing old and mental. Please help me internet :(
```
export class ArtistComponent implements OnInit {
private repertoire$;
private repertoireSubject;
constructor(
private route: ActivatedRoute,
private service: ArtistService
) {
this.getRepertoire().subscribe(
songs => this.repertoireSubject.next(songs)
);
}
getRepertoire() {
return this.route.paramMap
.switchMap((params: ParamMap) =>
this.service.fetchRepertoire(params.get('id')));
}
//THIS IS WHERE I'M HAVING TROUBLE
delete(id): void {
this.repertoireSubject.next(
this.repertoireSubject.getValue().filter(song => (song.id !== id))
);
// TODO remove song from repertoire API
}
ngOnInit() {
this.repertoireSubject = new BehaviorSubject(null);
this.repertoire$ = this.repertoireSubject.asObservable();
}
}
```<issue_comment>username_1: If you want to keep everything in streams then you could take a page from the Redux playbook and do something like this:
```js
const actions = new Rx.Subject();
const ActionType = {
SET: '[Song] SET',
DELETE: '[Song] DELETE'
};
const songs = [
{ id: 1, name: 'First' },
{ id: 2, name: 'Second' },
{ id: 3, name: 'Third' },
{ id: 4, name: 'Fourth' },
{ id: 5, name: 'Fifth' }
];
actions
.do(x => { console.log(x.type, x.payload); })
.scan((state, action) => {
switch(action.type) {
case ActionType.SET:
return action.payload;
case ActionType.DELETE:
return state.filter(x => x.id !== action.payload);
}
return state;
}, [])
.subscribe(x => { console.log('State:', x); });
window.setTimeout(() => {
actions.next({ type: ActionType.SET, payload: songs });
}, 1000);
window.setTimeout(() => {
actions.next({ type: ActionType.DELETE, payload: 2 });
}, 2000);
window.setTimeout(() => {
actions.next({ type: ActionType.DELETE, payload: 5 });
}, 3000);
```
Or something like this:
```js
const deletes = new Rx.Subject();
const songs = Rx.Observable.of([
{ id: 1, name: 'First' },
{ id: 2, name: 'Second' },
{ id: 3, name: 'Third' },
{ id: 4, name: 'Fourth' },
{ id: 5, name: 'Fifth' }
]);
window.setTimeout(() => {
deletes.next(2);
}, 1000);
window.setTimeout(() => {
deletes.next(5);
}, 2000);
songs.switchMap(state => {
return deletes.scan((state, id) => {
console.log('Delete: ', id);
return state.filter(x => x.id !== id);
}, state)
.startWith(state);
}).subscribe(x => { console.log('State:', x); });
```
Upvotes: 0 <issue_comment>username_2: If you stop relying on the async pipe and use a variable to handle your songs, it gets way easier :
```js
import { filter } from 'rxjs/operators';
export class ArtistComponent implements OnInit {
private songs: any;
constructor(
private route: ActivatedRoute,
private service: ArtistService
) {
this.getRepertoire().subscribe(songs => this.songs = songs);
}
getRepertoire() {
return this.route.paramMap
.switchMap((params: ParamMap) =>
this.service.fetchRepertoire(params.get('id')));
}
delete(id): void {
this.songs = this.songs.filter(song => song.id !== id);
}
}
```
This way, you can simply filter it like a plain array of objects.
Upvotes: 0 <issue_comment>username_3: I recommend you create a new attribute on your component where you store the latest state (here, the array of songs).
It is always better to structure your code with an internal property that represents your state (or store) and another attribute whose role is to sync the rest of your application (via observables/events).
Another tip is to strongly type your code with models; it will be easier to debug and maintain.
Then you just have to update the state according to your logic and call next on your Subject:
```
export interface SongModel {
id: number;
title: string;
artiste: string;
}
export class ArtistComponent implements OnInit {
private repertoire$: Observable<SongModel[]>;
private repertoireSubject: BehaviorSubject<SongModel[]>;
// Array of songs, should be the same type as repertoireSubject.
private songs: SongModel[];
constructor(
private route: ActivatedRoute,
private service: ArtistService
) {
//We push all actual references.
this.getRepertoire().subscribe(
songs => {
this.songs = songs;
this.repertoireSubject.next(this.songs);
}
);
}
ngOnInit() {
//Because is suject of array, you should init by empty array.
this.repertoireSubject = new BehaviorSubject([]);
this.repertoire$ = this.repertoireSubject.asObservable();
}
getRepertoire() {
return this.route.paramMap
.switchMap((params: ParamMap) =>
this.service.fetchRepertoire(params.get('id')));
}
//THIS IS WHERE I'M HAVING TROUBLE
delete(id: number): void {
// Update your array referencial.
this.songs = this.songs.filter(songs => songs.id !== id);
// Notify rest of your application.
this.repertoireSubject.next(this.songs);
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,518 | 4,773 | <issue_start>username_0: My issue is simple but I'm really out of ideas;
My raw data looks like:
```
xxx,1593,7
\N
yyy,1094,4
ddd,1015,0,2
zzz,1576,5
\N
aaa,1037,6
```
I run the following code:
```
list_data = data.split('\n')
for line in list_data:
if len(line.split(',')) >= 3 :
flag = True
print( line.split(',')[1])
else :
print(line)
```
The output I get:
```
1593
\N
1094
1015
1576
\N
1037
```
I need to replace each '\N' by the number of the line just before. So my output should look like :
```
1593
1593
1094
1015
1576
1576
1037
```
If you have any Idea please help.
Thank you!<issue_comment>username_1: If you want to keep everything in streams then you could take a page from the Redux playbook and do something like this:
```js
const actions = new Rx.Subject();
const ActionType = {
SET: '[Song] SET',
DELETE: '[Song] DELETE'
};
const songs = [
{ id: 1, name: 'First' },
{ id: 2, name: 'Second' },
{ id: 3, name: 'Third' },
{ id: 4, name: 'Fourth' },
{ id: 5, name: 'Fifth' }
];
actions
.do(x => { console.log(x.type, x.payload); })
.scan((state, action) => {
switch(action.type) {
case ActionType.SET:
return action.payload;
case ActionType.DELETE:
return state.filter(x => x.id !== action.payload);
}
return state;
}, [])
.subscribe(x => { console.log('State:', x); });
window.setTimeout(() => {
actions.next({ type: ActionType.SET, payload: songs });
}, 1000);
window.setTimeout(() => {
actions.next({ type: ActionType.DELETE, payload: 2 });
}, 2000);
window.setTimeout(() => {
actions.next({ type: ActionType.DELETE, payload: 5 });
}, 3000);
```
Or something like this:
```js
const deletes = new Rx.Subject();
const songs = Rx.Observable.of([
{ id: 1, name: 'First' },
{ id: 2, name: 'Second' },
{ id: 3, name: 'Third' },
{ id: 4, name: 'Fourth' },
{ id: 5, name: 'Fifth' }
]);
window.setTimeout(() => {
deletes.next(2);
}, 1000);
window.setTimeout(() => {
deletes.next(5);
}, 2000);
songs.switchMap(state => {
return deletes.scan((state, id) => {
console.log('Delete: ', id);
return state.filter(x => x.id !== id);
}, state)
.startWith(state);
}).subscribe(x => { console.log('State:', x); });
```
Upvotes: 0 <issue_comment>username_2: If you stop relying on the async pipe and use a variable to handle your songs, it gets way easier :
```js
import { filter } from 'rxjs/operators';
export class ArtistComponent implements OnInit {
private songs: any;
constructor(
private route: ActivatedRoute,
private service: ArtistService
) {
this.getRepertoire().subscribe(songs => this.songs = songs);
}
getRepertoire() {
return this.route.paramMap
.switchMap((params: ParamMap) =>
this.service.fetchRepertoire(params.get('id')));
}
delete(id): void {
this.songs = this.songs.filter(song => song.id !== id);
}
}
```
This way, you can simply filter like it is a simple array of object.
Upvotes: 0 <issue_comment>username_3: i recommand you to create new attribute on your component, where you will last store state. (here understands array of songs).
Is always better conceptualize your code by, internal property who represent your state (or store) and another attribute who have the role to sync rest of your application (by observable / event).
Another tips is to strong type your code by model. Will be easier to debug and maintain.
Then you just have to update it according to your logic and next on your Subject
```
export interface SongModel {
id: number;
title: string;
artiste: string;
}
export class ArtistComponent implements OnInit {
private repertoire$ : Observable;
private repertoireSubject: BehaviorSubject;
//Array of song, should be same type than repertoireSubject.
private songs: SongModel[];
constructor(
private route: ActivatedRoute,
private service: ArtistService
) {
//We push all actual references.
this.getRepertoire().subscribe(
songs => {
this.songs = songs;
this.repertoireSubject.next(this.songs);
}
);
}
ngOnInit() {
//Because is suject of array, you should init by empty array.
this.repertoireSubject = new BehaviorSubject([]);
this.repertoire$ = this.repertoireSubject.asObservable();
}
getRepertoire() {
return this.route.paramMap
.switchMap((params: ParamMap) =>
this.service.fetchRepertoire(params.get('id')));
}
//THIS IS WHERE I'M HAVING TROUBLE
delete(id: number): void {
// Update your array referencial.
this.songs = this.songs.filter(songs => songs.id !== id);
// Notify rest of your application.
this.repertoireSubject.next(this.songs);
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 679 | 1,952 | <issue_start>username_0: I've made a horizontal list using flexbox that fills its container evenly.
But when i try to change the background of the list items during hover etc. you can see there is empty space between each element. Is there any way to get rid of this while still making the list items fill the entire width of the container?
```css
.wrapper {
margin-top: 30px;
border: 1px solid lightblue;
width: 100%;
}
ul {
display: flex;
justify-content: space-around;
flex-wrap: wrap;
}
li {
color: black;
cursor: pointer;
list-style: none
}
li:hover {
background-color: lightblue;
color: white;
}
```
```html
* 123
* ABC
* 789
* XYZ
```
For example here the items are evenly spaced, but there's lots of space between each item.
What i want is something like this when i hover over a list item:
[](https://i.stack.imgur.com/OdYP8.png)
Any ideas how to do this?<issue_comment>username_1: Remove all external padding/margin, then use `flex:1` and padding within `li`:
```css
.wrapper {
margin-top: 30px;
border: 1px solid lightblue;
width: 100%;
}
ul {
padding:0;
margin:0;
display: flex;
flex-wrap: wrap;
}
li {
color: black;
flex:1;
text-align:center;
padding:10px 0;
cursor: pointer;
list-style: none
}
li:hover {
background-color: lightblue;
color: white;
}
```
```html
* 123
* ABC
* 789
* XYZ
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```css
.wrapper {
margin-top: 30px;
border: 1px solid lightblue;
width: 100%;
}
ul {
display: flex;
list-style: none;
padding: 0;
margin: 0;
}
li {
width: 100%;
color: black;
cursor: pointer;
list-style: none;
text-align: center;
padding: 15px 0;
}
li:hover {
background-color: lightblue;
color: white;
}
```
```html
* 123
* ABC
* 789
* XYZ
```
Upvotes: 0 |
2018/03/20 | 910 | 2,275 | <issue_start>username_0: I am trying to learn R and I have a data frame which contains 68 continuous and categorical variables. There are two variables -> x and lnx, on which I need help. Corresponding to a large number of 0's & NA's in x, lnx shows NA. Now, I want to write a code through which I can take log(x+1) in order to replace those NA's in lnx to 0, where corresponding x is also 0 (if x == 0, then I want only lnx == 0, if x == NA, I want lnx == NA). Data frame looks something like this -
```
a b c d e f x lnx
AB1001 1.00 3.00 67.00 13.90 2.63 1776.7 7.48
AB1002 0.00 2.00 72.00 38.70 3.66 0.00 NA
AB1003 1.00 3.00 48.00 4.15 1.42 1917 7.56
AB1004 0.00 1.00 70.00 34.80 3.55 NA NA
AB1005 1.00 1.00 34.00 3.45 1.24 3165.45 8.06
AB1006 1.00 1.00 14.00 7.30 1.99 NA NA
AB1007 0.00 3.00 53.00 11.20 2.42 0.00 NA
```
I tried writing the following code -
```
data.frame$lnx[is.na(data.frame$lnx)] <- log(data.frame$x +1)
```
but I get the following warning message and the output is wrong:
number of items to replace is not a multiple of replacement length. Can someone guide me please.
Thanks.
2018/03/20 | 595 | 2,123 | <issue_start>username_0: Trying to test angular services with Jest and got this error:
```
[$injector:nomod] Module 'superag' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
```
How do I mock my module 'superag' and make it available to `mathService`?
Do I have to import the `app.js` file with the module declaration in every test I write?
P.S.: I've tried `beforeEach(module('superag'))` too, without success.
**package.json**
```
{
  "jest": {
    "collectCoverageFrom": [
      "**/*.{js}",
      "!**/node_modules/**"
    ]
  },
  "devDependencies": {
    "angular-mocks": "^1.6.9",
    "jest-cli": "^23.0.0-alpha.0"
  },
  "dependencies": {
    "angular": "^1.6.9"
  }
}
```
**math.service.js**
```
function MathService(){
  var addTwoNumbers = function(x, y){
    return x + y;
  };

  return {
    addTwoNumbers
  };
}
angular.module('superag').factory('mathservice', MathService);
```
**math.service.test.js**
```
require('angular');
require('angular-mocks');
require('./math.service.js');
describe('Math service - addTwoNumbers', () => {
  beforeEach(
    angular.mock.module('superag')
  );

  var _mathservice;

  beforeEach(inject((mathservice) => {
    _mathservice = mathservice;
  }));

  it('1 + 1 should equal 2', () => {
    var actual = _mathservice.addTwoNumbers(1, 1);
    expect(actual).toEqual(2);
  });
});
```<issue_comment>username_1: You need to load the module under test using the module function provided in angular-mocks, so it is available in your tests, per [docs](https://docs.angularjs.org/guide/unit-testing).
```
beforeEach(module('superag'))
```
Then you can inject in your service.
Upvotes: 1 <issue_comment>username_2: This error occurs when you declare a dependency on a module that isn't defined anywhere or hasn't been loaded in the current browser context.
When you receive this error, check that the name of the module in question is correct and that the file in which this module is defined has been loaded (either via a `script` tag or, as in this Jest setup, via `require`).
Upvotes: 3 |
2018/03/20 | 957 | 3,225 | <issue_start>username_0: ```
long seed = 0;
Random rand = new Random(seed);
int rand100 = 0;
for(int i = 0; i < 100; i++)
rand100 = rand.nextInt();
System.out.println(rand100);
```
I wrote this code to get the 100th random integer value for a given seed. I want to know if there is a way to get the 100th random integer value of a given seed without calling `nextInt()` 100 times.<issue_comment>username_1: Yes. As long as the `seed` is constant, executing this 100 times will yield the same result **every** time. As such, you can just do
```
int rand100 = -1331702554;
```
Upvotes: 2 <issue_comment>username_2: If I understood you correctly, you are looking for some seeded method like
```
int[] giveMeInts(int amount, long seed);
```
There exists something very similar, the `Stream` methods of `Random` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/Random.html)):
```
long seed = ...
int amount = 100;
Random rnd = new Random(seed);
IntStream values = rnd.ints(amount);
```
You could collect the stream values in collections like `List` or an `int[]` array:
```
List<Integer> values = rnd.ints(amount).boxed().collect(Collectors.toList());
int[] values = rnd.ints(amount).toArray();
```
The methods will use the **seed** of the `Random` object, so if fed with the same seed they will always produce the same sequence of values.
Upvotes: 0 <issue_comment>username_2: >
> I want to know if there is a way to get `100`-th random integer value of given seed **without** calling `nextInt()` `100` times.
>
>
>
**No**, there is no way to directly get the `100`-th random number of the sequence without first generating the other `99` values. That's simply because of how the generation works in Java: each value depends on the previous one.
If you want to go into details, take a look at the [source code](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8u40-b25/java/util/Random.java#Random). The internal seed changes with every call of the `next` method, using the previous seed:
```
nextseed = (oldseed * multiplier + addend) & mask;
```
So in order to get the seed for the `100`-th value, you need to know the seed for the `99`-th value, which needs the seed for the `98`-th value and so on.
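To make that dependence concrete, the same recurrence can be replayed outside Java — a minimal Python sketch of the algorithm documented for `java.util.Random` (a 48-bit LCG; the constants below are the ones given in the Javadoc):

```python
MULTIPLIER = 0x5DEECE66D
ADDEND = 0xB
MASK = (1 << 48) - 1  # java.util.Random keeps 48 bits of state

def java_random_ints(seed, n):
    """Reproduce the first n values of new java.util.Random(seed).nextInt()."""
    state = (seed ^ MULTIPLIER) & MASK  # the constructor scrambles the initial seed
    for _ in range(n):
        state = (state * MULTIPLIER + ADDEND) & MASK  # the recurrence quoted above
        value = state >> 16  # next(32) returns the top 32 bits of the state
        yield value - (1 << 32) if value >= (1 << 31) else value  # as signed 32-bit int

print(list(java_random_ints(0, 1)))  # [-1155484576], the well-known first value for seed 0
```

Every output requires the full chain of earlier states, which is exactly why there is no shortcut to the 100th value.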
---
However, you can easily get the `100`-th value with a more compact statement like
```
long seed = ...
int amount = 100;
Random rnd = new Random(seed);
// Generate sequence of 100 random values, discard 99 and get the last
int val = rnd.ints(amount).skip(amount - 1).findFirst().orElse(-1);
```
Keep in mind that this still computes all previous values, as explained. It just discards them.
---
After you have computed that value for the first time, you could just hardcode it into your program. Let's suppose you have tested it and it yields `123`. Then, if the seed does not change, the value will always be `123`. So you could just do
```
int val = 123;
```
The sequences remain the same across multiple instances of the JVM, so the value will always be valid for this seed. In fact, the Javadoc specifies the exact algorithm `Random` must use, so the sequence for a given seed is stable across Java versions.
Upvotes: 3 [selected_answer] |
2018/03/20 | 953 | 3,294 | <issue_start>username_0: I want to source a script as an asynchronous background process using system() and Rscript, but it doesn't seem to run the script. The line I'm using is below:
system("Rscript -e 'source(\"/Users/Federico/Documents/R/win-library/3.4/taskscheduleR/extdata/PriceTesting.R\")'", wait=FALSE)
In the sourced script I have it write a simple csv, and the file does not get written, which leads me to believe it's not running the script at all.
Am I doing something wrong?
2018/03/20 | 930 | 3,199 | <issue_start>username_0: I want to count the total number of rows in a file.
Please explain your code if possible.
```
String fileAbsolutePath = "gs://sourav_bucket_dataflow/" + fileName;
PCollection<String> data = p.apply("Reading Data From File", TextIO.read().from(fileAbsolutePath));
PCollection<Long> count = data.apply(Count.globally());
```
Now i want to get the value.
2018/03/20 | 784 | 2,993 | <issue_start>username_0: I'm trying to catch the first time someone logs in so I can redirect him/her to a different page than usual.
I'm working with groups. An admin can make a group with multiple users (with generated usernames and passwords). So it can be that a user is already created before the user logs in (there is no registration form for a single user).
This is the code that I want:
```
def index(request):
    if request.user:
        user = request.user
        if user.last_login is None:
            return redirect('profile')
        else:
            return redirect('home')
    else:
        return redirect('/login')
```
I've read about checking `user.last_login`, but for this case that doesn't work because the check happens AFTER the user logs in, which means that `user.last_login` is never `None`.
Can someone help me see when a user logs in for the first time?<issue_comment>username_1: The `last_login` value is set via a signal which fires at login, but before your code executes, so you never see `None`, as you've discovered.
One way to achieve what you want could be to register your own signal receiver to set a `first_login` attribute somewhere, e.g. in the user's session, before updating the `last_login` value.
```py
from django.contrib.auth.signals import user_logged_in
from django.contrib.auth.models import update_last_login
def update_first_login(sender, user, **kwargs):
    if user.last_login is None:
        # First time this user has logged in
        kwargs['request'].session['first_login'] = True
    # Update the last_login value as normal
    update_last_login(sender, user, **kwargs)
user_logged_in.disconnect(update_last_login)
user_logged_in.connect(update_first_login)
```
Then in your view, you'd just need to check the session for the `first_login` attribute.
```py
def index(request):
    if request.user:
        if request.session.get('first_login'):
            return redirect('profile')
        else:
            return redirect('home')
    else:
        return redirect('/login')
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I had the same problem at one point, but I found an easy way around it: check `last_login` *before* calling `auth.login()`, since logging the user in is what updates that value.
Check my code:
```
if user.last_login is None:
    # First login: last_login is still None because auth.login() has not run yet
    auth.login(request, user)
    return redirect(alertPassword)
else:
    auth.login(request, user)
    return redirect(moveToLogin)

def moveToLogin(request):
    return render(request, "home.html")

def alertPassword(request):
    # Pass a first_login flag to the template
    return render(request, "home.html", {"first_login": True})

# Then go to your template and add this:
# {% if first_login %}
#     put anything you want to be done here; in my case I wanted to
#     open a new window using script: window.open("popUpWindow")
# {% endif %}
```
Upvotes: 0 |
2018/03/20 | 1,635 | 4,208 | <issue_start>username_0: I want to be able to extract specific characters from a character vector in a data frame and return a new data frame. The information I want to extract is the auditors' remarks on a specific company's income and balance sheet. My problem is that the auditors' remarks are stored in vectors containing the different remarks. For instance:
`vec = c("A C G H D E")`. Since `"A" %in% vec` won't return `TRUE`, I have to use `strsplit` to break up each character vector in the data frame, hence `"A" %in% unlist(strsplit(dat[i, 2], " "))`. This returns `TRUE`.
Here is a MWE:
```
dat <- data.frame(orgnr = c(1, 2, 3, 4), rat = as.character(c("A B C")))
dat$rat <- as.character(dat$rat)
dat[2, 2] <- as.character(c("A F H L H"))
dat[3, 2] <- as.character(c("H X L O"))
dat[4, 2] <- as.character(c("X Y Z A B C"))
```
Now, to extract information about every single letter in the `rat` column, I've tried several approaches, following similar problems such as Roland's answer to a similar question ([How to split a character vector into data frame?](https://stackoverflow.com/questions/22455884/how-to-split-a-character-vector-into-data-frame))
```
DF <- data.frame(do.call(rbind, strsplit(dat$rat, " ", fixed = TRUE)))
DF
X1 X2 X3 X4 X5 X6
1 A B C A B C
2 A F H L H A
3 H X L O H X
4 X Y Z A B C
```
This returns the following warning message: `Warning message:
In (function (..., deparse.level = 1) :
number of columns of result is not a multiple of vector length (arg 2)`
It would be a desirable approach since it's fast, but I can't use `DF` since it recycles.
**Is there a way to insert `NA` instead of the recycling because of the different length of the vectors?**
So far I've found a solution to the problem by using for-loops in combination with `ifelse`-statements. However, with 3 mill obs. this approach takes years!
```
dat$A <- 0
for(i in seq(1, nrow(dat), 1)) {
  print(i)
  dat[i, 3] <- ifelse("A" %in% unlist(strsplit(dat[i, 2], " ")), 1, 0)
}
dat$B <- 0
for(i in seq(1, nrow(dat), 1)) {
  print(i)
  dat[i, 4] <- ifelse("B" %in% unlist(strsplit(dat[i, 2], " ")), 1, 0)
}
```
This gives the results I want:
```
dat
orgnr rat A B
1 1 A B C 1 1
2 2 A F H L H 1 0
3 3 H X L O 0 0
4 4 X Y Z A B C 1 1
```
I've searched through most of the relevant questions I could find here on StackOverflow. This one is really close to my problem: [How to convert a list consisting of vector of different lengths to a usable data frame in R?](https://stackoverflow.com/questions/15201305/how-to-convert-a-list-consisting-of-vector-of-different-lengths-to-a-usable-data), but I don't know how to implement `strsplit` with that approach.<issue_comment>username_1: We can use a for-loop with `grepl` to achieve this task. The `+ 0` converts the columns from `TRUE`/`FALSE` to 1/0.
```
for (col in c("A", "B")){
dat[[col]] <- grepl(col, dat$rat) + 0
}
dat
# orgnr rat A B
# 1 1 A B C 1 1
# 2 2 A F H L H 1 0
# 3 3 H X L O 0 0
# 4 4 X Y Z A B C 1 1
```
If performance is an issue, try this `data.table` approach.
```
library(data.table)
# Convert to data.table
setDT(dat)
# Create a helper function
dummy_fun <- function(col, vec){
grepl(col, vec) + 0
}
# Apply the function to A and B
dat[, c("A", "B") := lapply(c("A", "B"), dummy_fun, vec = rat)]
dat
# orgnr rat A B
# 1: 1 A B C 1 1
# 2: 2 A F H L H 1 0
# 3: 3 H X L O 0 0
# 4: 4 X Y Z A B C 1 1
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: using Base R:
```
a=strsplit(dat$rat," ")
b=data.frame(x=rep(dat$orgnr,lengths(a)),y=unlist(a),z=1)
cbind(dat,as.data.frame.matrix(xtabs(z~x+y,b)))
orgnr rat A B C F H L O X Y Z
1 1 A B C 1 1 1 0 0 0 0 0 0 0
2 2 A F H L H 1 0 0 1 2 1 0 0 0 0
3 3 H X L O 0 0 0 0 1 1 1 1 0 0
4 4 X Y Z A B C 1 1 1 0 0 0 0 1 1 1
```
From here you can just call those columns that you want:
```
d=as.data.frame.matrix(xtabs(z~x+y,b))
cbind(dat,d[c("A","B")])
orgnr rat A B
1 1 A B C 1 1
2 2 A F H L H 1 0
3 3 H X L O 0 0
4 4 X Y Z A B C 1 1
```
Upvotes: 0 |
2018/03/20 | 523 | 2,117 | <issue_start>username_0: Is it possible to know the hostname of the source of an incoming request to a cloud firestore document? I would like to write a database rule of the form `allow write: if request.resource.data.source_host_name == some_predefined_value`. This is a web application so I'm trying to find a good way to limit who gets to write to my database without using traditional auth methods.<issue_comment>username_1: That sort of rule is not possible with Cloud Firestore. It also wouldn't be very secure, as it's possible to [spoof source IP addresses](https://en.wikipedia.org/wiki/IP_address_spoofing).
If you want to limit who can access your database, the only supported mechanism is through security rules and Firebase Authentication.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here's my solution to logging a client's IP address when interacting with Firestore. I agree with Doug that this won't guard against IP spoofing. Nonetheless, it's helpful for my purposes.
The trick is to create intermediary API endpoints using Cloud Functions, which then interact with Firestore. Here's an example Google Cloud function.
```js
const admin = require("firebase-admin");
const functions = require("firebase-functions");

admin.initializeApp();

exports.yourEndpoint = functions.https.onRequest((req, res) => {
  // Get the IP address from the headers in the request object
  const ipAddress = getIPAddress(req);

  // Do whatever you need in your firestore DB with ipAddress
  admin.firestore().collection('ipAddresses').add({address: ipAddress})
    .then((writeResult) => {
      return res.status(200).send();
    })
    .catch((error) => {
      return res.status(500).send();
    });
});

// Helper function to extract the IP address from request headers
function getIPAddress(req) {
  return (
    req.headers["fastly-client-ip"] ||
    req.headers["x-forwarded-for"] ||
    "No IP, probably because we are in development mode"
  );
}
```
Note: you won't get an IP address in development mode, because the Firebase emulators don't create the proper headers.
Upvotes: 3 |
2018/03/20 | 861 | 3,137 | <issue_start>username_0: The following is legitimate, because [consolidating data frames in R](https://stackoverflow.com/questions/9807945/consolidating-data-frames-in-r) does not have the answer, nor does [How to make a great R reproducible example?](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example).
I have a dataset split across **multiple csv files** without headers. For a single import, I use:
```
X <- read_delim( ... ,
... ,
col_types = col( X1 = "c" ,
... ,
X100 = "i" )
)
```
To import **all**, I simply **repeat the above**.
I'd like to **shorten the code**, though.
Is it possible to supply the column definitions for `col()` to `read_delim` by defining them only once? I've tried to supply a `c=()` list, but it doesn't work.<issue_comment>username_1: A solution with lapply():
--------------------------
You can set the working directory to a folder containing your files and then create a list of file paths for all of the files that contain ".csv" in that directory. Finally, you can use lapply to apply the `read.csv` function over the list of file paths. **I think you should use `read.csv` because you have .csv files.** You can set your colClasses in the call to lapply and they will be read the same for all of the .csv files you have placed in your working directory.
[Link to lapply() documentation](https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/lapply)
You can try something like this:
```
setwd( "C:/path/to/directory/containing/files/here/")
file.paths <- list.files(pattern = '.csv')
column_classes <- c("character", "numeric", "numeric") # specify for all columns
my.files <- lapply(file.paths, function(x) read.csv(x, colClasses= column_classes))
```
Upvotes: 1 <issue_comment>username_2: If you want to make great code, which it seems you do, you shouldn't repeat yourself. What if you get handed another 100 csv files? You won't want to change your code every time. So, you shouldn't just copy and paste your lines of code if you want to do something multiple times.
[Don't repeat yourself](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself)
I think the best way here is to define a custom function which reads the file with those parameters you have used. Then, get a list of all the files you want to read. This can be typed manually or you can use something like `list.files` to get names of files in a directory. Then, you can use `lapply` or `purrr::map` to apply your custom function to each of those filenames.
```
library(readr)
library(purrr)
read_my_file <- function(filename){
read_delim( ... ,
... ,
col_types = col( X1 = "c" ,
... ,
X100 = "i" )
)
}
filenames <- c("one.csv", "two.csv", "three.csv")
dataframes <- map(filenames, read_my_file)
```
If you want to then concatenate all the dataframes (by rows) into one large one, use `map_dfr` in place of `map`.
Upvotes: -1 |
2018/03/20 | 482 | 1,572 | <issue_start>username_0: I want to add a range of OUTPUT chain ports using firewall-cmd's direct rule method, something like this:
```
firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp -m tcp --dport=80-1000 -j ACCEPT
```
This says `success`; however, it does not seem to work.<issue_comment>username_1: Use a colon for a range, i.e. `--dport 80:1000` (that is iptables' port-range syntax; a comma-separated list such as `22,53,80` needs the `multiport` module and matches only the listed ports).
That said, using direct rules is discouraged (your command returns 'success' because `firewall-cmd` doesn't check the directly entered `iptables` syntax -- it assumes you have the rule correct). Man page says:
`Direct options should be used only as a last resort when it's not possible to use for example --add-service=service or --add-rich-rule='rule'.`
See [Configuring Complex Firewall Rules with the "Rich Language" Syntax](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-configuring_firewalld#Configuring_Complex_Firewall_Rules_with_the_Rich-Language_Syntax).
Upvotes: 1 <issue_comment>username_2: The below command will accept traffic from ports 22,53 and 80 (see [source](https://manage.accuwebhosting.com/knowledgebase/3802/How-to-allow-or-block-the-port-and-IP-Address-using-Firewalld-IP-tables-and-UFW-in-Linux.html)):
```
/sbin/iptables -A INPUT -p tcp --match multiport --dports 22,53,80 -j ACCEPT
```
I prefer this variation with reload required for permanent rules only:
```
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp -m multiport --dports 22,53,80 -j ACCEPT && sudo firewall-cmd --reload
```
Upvotes: 0 |
2018/03/20 | 779 | 2,734 | <issue_start>username_0: Here is the simplest piece of code:
```
Dim testInvoiceDate As DateTime? = If(String.IsNullOrEmpty(Nothing),
Nothing,
New DateTime(2018, 3, 20))
```
Why is the variable `testInvoiceDate` not `Nothing`, but `#1/1/0001 12:00:00 AM#`?!
That is very weird!<issue_comment>username_1: That compiles in VB.NET (as opposed to C#) because here `Nothing` has multiple meanings.
1. `null`
2. default value for that type
In this case the compiler uses the second option, since there is otherwise no implicit conversion between `DateTime` and `Nothing` (in the meaning of `null`).
The default value of `DateTime` (a `Structure`, i.e. a value type) is `#1/1/0001 12:00:00 AM#`.
You could use this to get a `Nullable(Of DateTime)`:
```
Dim testInvoiceDate As DateTime? = If(String.IsNullOrEmpty(Nothing), New Nullable(Of Date), New DateTime(2018, 3, 20))
```
or use an `If`:
```
Dim testInvoiceDate As DateTime? = Nothing
If Not String.IsNullOrEmpty(Nothing) Then testInvoiceDate = New DateTime(2018, 3, 20)
```
Upvotes: 2 <issue_comment>username_2: The `If` operator returns the same data type for both cases.
Because the return type in the `False` case is `DateTime`, the `Nothing` in the `True` case becomes the `DateTime` default value.
Default for `DateTime` is `DateTime.MinValue` which is `#1/1/0001 12:00:00 AM#`.
This will work as expected:
```
Dim testInvoiceDate As DateTime? = If(String.IsNullOrEmpty(Nothing),
Nothing,
New DateTime?(New Date(2018, 3, 20)))
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: `Nothing` in VB.Net is the equivalent of `default(T)` in C#: the default value for the given type.
* For value types, this is essentially the equivalent of 'zero': `0` for `Integer`, `False` for `Boolean`, `DateTime.MinValue` for `DateTime`, ...
* For reference types, it is the `null` value (a reference that refers to, well, nothing).
Assigning `Nothing` to a `DateTime` therefore is the same as assigning it `DateTime.MinValue`.
Upvotes: 2 <issue_comment>username_4: It's because you're using the 3-argument form of `If()`. It will try to return the same type based on parameters 2 and 3, so the Nothing in parameter 2 gets converted to a DateTime (and you get DateTime.MinValue).
If you use the 2-argument form, it applies null-coalescing, i.e. when the first argument (which must be an Object or a nullable type) is `Nothing`, it returns the second argument, otherwise it returns the first argument.
If you use
`Dim foo As DateTime? = If(Nothing, new DateTime(2018, 3, 20))` you will get the expected value.
Upvotes: 2 |
2018/03/20 | 340 | 1,059 | <issue_start>username_0: I have the following query :-
```
select actual_ssd
from tblsales
```
Is it possible to return records from `tblsales` where `actual_ssd` is dynamically 2 months ahead?
So, if I ran the query today, it would return all records between 01/05/2018 and 31/05/2018. And if I ran it on 04/04/2018, it would return all records between 01/06/2018 and 30/06/2018.<issue_comment>username_1: Use `trunc` with arithmetic.
```
where actual_ssd >= trunc(sysdate,'mm')+interval '2' month
and actual_ssd < trunc(sysdate,'mm')+interval '3' month
```
Upvotes: 1 <issue_comment>username_2: Sure thing
```
select *
from tblsales
where actual_ssd >= trunc(add_months(sysdate, 2), 'MM')
and actual_ssd < trunc(add_months(sysdate, 3), 'MM')
```
`trunc()` on a date value performs a rounding down, depending on the format mask; with the `'MM'` argument it rounds down to the first of the current month. `add_months()` does exactly what you'd think: it adds the required number of months to the date argument.
Upvotes: 3 [selected_answer] |
2018/03/20 | 3,657 | 7,157 | <issue_start>username_0: I have a problem with creating a dataframe which holds the time interval that a temperature measurement falls in. As of now the dataframe has time as its index and the measurements in another column, and I would like the time converted to 12-hour intervals and the measurement to be the mean of the values in that window.
```
measurement
time
2016-11-04 08:49:25 17.730000
2016-11-04 10:23:52 18.059999
2016-11-04 11:02:09 18.370001
2016-11-04 12:04:20 18.090000
2016-11-04 14:26:43 18.320000
```
So instead of having each timestamp tied to a single measurement, I want the mean of the values over, let's say, 12 hours, like this:
```
measurement
time
2016-11-04 00:00:00 - 2016-11-04 12:00:00 17.730000
2016-11-04 12:00:00 - 2016-11-05 00:00:00 18.059999
2016-11-05 00:00:00 - 2016-11-05 12:00:00 18.370001
2016-11-05 12:00:00 - 2016-11-06 00:00:00 18.090000
2016-11-06 00:00:00 - 2016-11-06 12:00:00 18.320000
```
Is there an easy way to do this with pandas?
Later I would like to convert the measurements into intervals as well, so that the data becomes boolean, like this:
```
17.0-18.0 18.0-19.0 19.0-20
time
2016-11-04 00:00:00 - 2016-11-04 12:00:00 1 0 0
2016-11-04 12:00:00 - 2016-11-05 00:00:00 0 1 0
2016-11-05 00:00:00 - 2016-11-05 12:00:00 0 1 0
2016-11-05 12:00:00 - 2016-11-06 00:00:00 0 1 0
2016-11-06 00:00:00 - 2016-11-06 12:00:00 0 1 0
```
**EDIT:**
I used a solution first posted by Coldspeed
```
df = pd.DataFrame({'timestamp':time.values, 'readings':readings.values})
df = df.groupby(pd.Grouper(key='timestamp', freq='12H'))['readings'].mean()
v = pd.cut(df, bins=[17,18,19,20,21,22,23,24,25,26,27,28], labels=['17-18','18-19','19-20','20-21','21-22','22-23','23-24','24-25','25-26','26-27','27-28'])
```
I know that the bins and labels could have been generated with a for loop, but this is just a quick fix.
The groupby call groups the values by 'timestamp' at a 12-hour frequency and takes the mean of the readings in each window.
Then the cut function is used to categorize the means into their bins.
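As an aside, the hand-typed `bins` and `labels` from the snippet above can indeed be generated programmatically — a small sketch (the names match the snippet, but this is not part of the original code):

```python
# Build bins 17..28 and the matching labels '17-18' .. '27-28'
bins = list(range(17, 29))
labels = [f"{lo}-{lo + 1}" for lo in bins[:-1]]
print(len(bins), len(labels))  # 12 11
```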
result:
```
17-18 18-19 19-20 20-21 21-22 22-23 23-24 24-25 \
timestamp
2016-11-04 00:00:00 0 1 0 0 0 0 0 0
2016-11-04 12:00:00 0 1 0 0 0 0 0 0
2016-11-05 00:00:00 0 0 0 0 0 0 0 0
2016-11-05 12:00:00 1 0 0 0 0 0 0 0
2016-11-06 00:00:00 1 0 0 0 0 0 0 0
2016-11-06 12:00:00 0 0 0 0 0 0 0 0
2016-11-07 00:00:00 0 1 0 0 0 0 0 0
2016-11-07 12:00:00 1 0 0 0 0 0 0 0
2016-11-08 00:00:00 0 0 0 0 0 0 0 0
2016-11-08 12:00:00 0 0 0 0 0 0 0 0
2016-11-09 00:00:00 1 0 0 0 0 0 0 0
2016-11-09 12:00:00 1 0 0 0 0 0 0 0
2016-11-10 00:00:00 0 1 0 0 0 0 0 0
2016-11-10 12:00:00 0 0 0 0 0 0 0 0
2016-11-11 00:00:00 0 0 0 0 0 0 0 0
2016-11-11 12:00:00 0 0 0 0 0 0 0 0
2016-11-12 00:00:00 0 0 0 0 0 0 0 0
2016-11-12 12:00:00 0 0 0 0 0 0 0 0
2016-11-13 00:00:00 0 0 0 0 0 0 0 0
2016-11-13 12:00:00 0 0 0 0 0 0 0 0
2016-11-14 00:00:00 0 0 0 0 0 0 0 0
2016-11-14 12:00:00 0 1 0 0 0 0 0 0
2016-11-15 00:00:00 0 0 0 1 0 0 0 0
2016-11-15 12:00:00 0 0 0 0 0 1 0 0
2016-11-16 00:00:00 0 0 0 0 0 0 1 0
2016-11-16 12:00:00 0 0 0 0 0 0 0 0
2016-11-17 00:00:00 0 0 0 0 0 0 0 0
```<issue_comment>username_1: Use `pd.cut` + `pd.get_dummies`:
```
v = pd.cut(df.measurement, bins=[17, 18, 19, 20], labels=['17-18', '18-19', '19-20'])
pd.get_dummies(v)
17-18 18-19 19-20
0 1 0 0
1 0 1 0
2 0 1 0
3 0 1 0
4 0 1 0
```
Upvotes: 1 <issue_comment>username_2: You can use `pd.cut()` + `pd.get_dummies()`:
```
df["measurement"] = pd.cut(df["measurement"], bins=[17.0,18.0,19.0,20.0])
dummies = pd.get_dummies(df["measurement"])
```
Upvotes: 0 <issue_comment>username_3: For your first question: you can use `pandas.TimeGrouper` to group by every 12 h (or any other frequency) and then take the mean of the groups.
`df.groupby([pd.TimeGrouper(freq='12H')]).mean()`
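Note that `pd.TimeGrouper` was deprecated and later removed from pandas; `df.resample('12h')` (or `pd.Grouper(freq='12h')`) is the modern equivalent. A minimal self-contained sketch with hypothetical 3-hourly readings:

```python
import pandas as pd

# Hypothetical readings every 3 hours (stand-in for the question's data).
idx = pd.date_range('2016-11-04', periods=8, freq='3h')
df = pd.DataFrame({'measurement': [17.5, 17.9, 18.2, 18.0,
                                   18.4, 18.3, 18.1, 17.8]}, index=idx)

# 12-hour means; resample('12h') is equivalent to grouping with
# pd.Grouper(freq='12h').
halfday = df.resample('12h').mean()

# Turn the means into one-hot interval columns (the second question).
dummies = pd.get_dummies(pd.cut(halfday['measurement'],
                                bins=[17, 18, 19, 20],
                                labels=['17-18', '18-19', '19-20']))
print(dummies)
```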
Upvotes: 0 <issue_comment>username_4: IIUC you want to resample by 12 hour chunks, then create dummies.
`pd.cut` is a perfectly acceptable way to cut the resultant data into bins.
However, I use `np.searchsorted` to accomplish the task.
```
bins = np.array([17, 18, 19, 20])
labels = np.array(['<17', '17-18', '18-19', '19-20', '>20'])
resampled = df.resample('12H').measurement.mean()
pd.get_dummies(pd.Series(labels[bins.searchsorted(resampled.values)], resampled.index))
17-18 18-19 19-20 >20
2018-03-20 00:00:00 0 1 0 0
2018-03-20 12:00:00 1 0 0 0
2018-03-21 00:00:00 0 1 0 0
2018-03-21 12:00:00 0 0 0 1
2018-03-22 00:00:00 0 0 1 0
2018-03-22 12:00:00 0 0 0 1
```
---
**Setup**
```
np.random.seed(int(np.pi * 1E6))
tidx = pd.date_range(pd.Timestamp('now'), freq='3H', periods=20)
df = pd.DataFrame(dict(measurement=np.random.rand(len(tidx)) * 6 + 17), tidx)
df
measurement
2018-03-20 06:58:30.484383 17.960744
2018-03-20 09:58:30.484383 18.572100
2018-03-20 12:58:30.484383 17.646766
2018-03-20 15:58:30.484383 19.025463
2018-03-20 18:58:30.484383 17.521399
2018-03-20 21:58:30.484383 17.318663
2018-03-21 00:58:30.484383 19.388553
2018-03-21 03:58:30.484383 19.520969
2018-03-21 06:58:30.484383 19.060640
2018-03-21 09:58:30.484383 17.106034
2018-03-21 12:58:30.484383 22.887546
2018-03-21 15:58:30.484383 18.437271
2018-03-21 18:58:30.484383 18.426362
2018-03-21 21:58:30.484383 20.558928
2018-03-22 00:58:30.484383 22.555121
2018-03-22 03:58:30.484383 17.139489
2018-03-22 06:58:30.484383 17.209499
2018-03-22 09:58:30.484383 19.466367
2018-03-22 12:58:30.484383 21.765692
2018-03-22 15:58:30.484383 19.680785
```
Upvotes: 1 |
2018/03/20 | 1,185 | 4,041 | <issue_start>username_0: I'm creating a website which handles payment from stripe using wp simple pay plugin.
I created a webhook that will let me know whether the payment is successful or not. If it succeeds, I create an order with data in it, but Postman always returns a 500 Internal Server Error and I cannot see the error from it.
If I remove the `wc_create_order()` call and return the $address, it works perfectly. I suspect that I'm doing something wrong in my code.
Here's the code I created:
```
add_action('woocommerce_checkout_process', 'pinion_add_order');
function pinion_add_order($m, $a) {
global $woocommerce;
$address = array(
'first_name' => 'Project Paid ',
'last_name' => $m
);
$order = wc_create_order();
$order->add_product(($a == '100000' ? get_product('2858') : get_product('2859')), 1);
$order->set_address($address, 'billing');
$order->set_address($address, 'shipping');
// $order->set_total($amount);
$order->calculate_totals();
$order->update_status("Completed", 'Imported order', TRUE);
return $order;
}
```
Any help would be appreciated. Thanks<issue_comment>username_1: Although *LoicTheAztek* is correct and there is probably a better way of doing this, the answer to your question is that you haven't saved the Order object.
So before your return statement, try adding `$order->save();`
This will actually save the values to the database.
If that doesn't work, you may also need to add some additional properties, such as a payment method using `$order->set_payment_method($string);`
Hope that helps.
Upvotes: 2 <issue_comment>username_2: I hope the below code help you.
```
function pinion_add_order() {
global $current_user;
$a = '100000';
$order = wc_create_order();
$order->add_product(($a == '100000' ? get_product('2858') : get_product('2859')), 1);
//$order->add_product( $_product, $item_quantity );
$order->calculate_totals();
$fname = get_user_meta( $current_user->ID, 'first_name', true );
$lname = get_user_meta( $current_user->ID, 'last_name', true );
$email = $current_user->user_email;
$address_1 = get_user_meta( $current_user->ID, 'billing_address_1', true );
$address_2 = get_user_meta( $current_user->ID, 'billing_address_2', true );
$city = get_user_meta( $current_user->ID, 'billing_city', true );
$postcode = get_user_meta( $current_user->ID, 'billing_postcode', true );
$country = get_user_meta( $current_user->ID, 'billing_country', true );
$state = get_user_meta( $current_user->ID, 'billing_state', true );
$billing_address = array(
'first_name' => $fname,
'last_name' => $lname,
'email' => $email,
'address_1' => $address_1,
'address_2' => $address_2,
'city' => $city,
'state' => $state,
'postcode' => $postcode,
'country' => $country,
);
$address = array(
'first_name' => $fname,
'last_name' => $lname,
'email' => $email,
'address_1' => $address_1,
'address_2' => $address_2,
'city' => $city,
'state' => $state,
'postcode' => $postcode,
'country' => $country,
);
$shipping_cost = 5;
$shipping_method = 'Fedex';
$order->add_shipping($shipping_cost);
$order->set_address($billing_address,'billing');
$order->set_address($address,'shipping');
$order->set_payment_method('check');//
$order->shipping_method_title = $shipping_method;
$order->calculate_totals();
$order->update_status('on-hold');
$order->save();
}
add_action('woocommerce_checkout_process', 'pinion_add_order'); //it does not take any parameters.
```
For more help please see the file and class structure in file : `\wp-content\plugins\woocommerce\includes\class-wc-checkout.php`
Upvotes: 1 |
2018/03/20 | 872 | 2,486 | <issue_start>username_0: I want a hidden div to slide down from the top of the screen when the user clicks on "Services".
When the div does slide down, I want the height to be `height: 35vh;`.
I am unsure of what I am doing wrong. Does anyone see the issue?
```js
$('#serviceClick').click( function () {
$('serviceNav').addClass('active').slideDown();
console.log('The click is working');
});
```
```css
#serviceNav {
width: 100%;
top: -35vh;
z-index: -1;
position: absolute;
background-color: rgba(0,0,0,0);
}
#serviceNav.active {
height: 35vh;
top: 0;
width: 100%;
background: red;
z-index: 500;
}
```
```html
- SERVICES
```<issue_comment>username_1: You forgot to add **#** in front of the ID
```js
$('#serviceClick').click( function () {
$('#serviceNav').addClass('active').slideDown();
console.log('The click is working');
});
```
```css
#serviceNav {
width: 100%;
top: -35vh;
z-index: -1;
position: absolute;
background-color: rgba(0,0,0,0);
}
#serviceNav.active {
height: 35vh;
top: 0;
width: 100%;
background: red;
z-index: 500;
}
```
```html
- SERVICES
```
Upvotes: 1 <issue_comment>username_2: Firstly - You need to reference the ID - `$('#serviceNav')`.
Secondly - Why are you sliding down and adding a class which moves it anyway?
Thirdly - You can't animate the `height` element, you should use `max-height`.
```
$('#serviceClick').click( function () {
$('#serviceNav').addClass('active');
console.log('The click is working');
});
#serviceNav {
width: 100%;
top: -35vh;
z-index: -1;
position: absolute;
background-color: rgba(0,0,0,0);
height: 35vh;
display: none;
}
#serviceNav.active {
top: 0;
width: 100%;
background: red;
z-index: 500;
}
```
Edit - example: <https://jsfiddle.net/664chupo/>
Upvotes: 2 [selected_answer]<issue_comment>username_3: Here is a working code and codepen.
HTML
----
```
- SERVICES
text
```
CSS
---
```
#serviceNav {
width: 100%;
top: -35vh;
z-index: -1;
position: absolute;
background-color: rgba(0,0,0,0);
}
#serviceNav {
height: 0vh;
top: 0;
width: 100%;
background: red;
z-index: 500;
display:none;
transition: all 2s;
}
```
JS
--
```
$('#serviceClick').click( function () {
$('#serviceNav').show();
$('#serviceNav').css('height','35vh');
});
```
Codepen url: [codepen](https://codepen.io/smitraval27/pen/Ldxojx)
Upvotes: 0 |
2018/03/20 | 721 | 1,884 | <issue_start>username_0: In tensorflow, I would like to sum columns of a 2D tensor according to multiple sets of indices.
For example:
Summing the columns of the following tensor
```
[[1 2 3 4 5]
[5 4 3 2 1]]
```
according to the 2 sets of indices (first set to sum columns 0 1 2 and second set to sum columns 3 4)
```
[[0,1,2],[3,4]]
```
should give 2 columns
```
[[6 9]
[12 3]]
```
Remarks:
1. All columns' indices will appear in one and only one set of indices.
2. This has to be done in Tensorflow, so that gradient can flow through this operation.
Do you have any idea how to perform that operation? I suspect I need to use tf.slice and probably tf.while\_loop.<issue_comment>username_1: I know of a crude way of solving this in NumPy, if you don't mind using NumPy.
```
import numpy as np
mat = np.array([[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]])
grid1 = np.ix_([0], [0, 1, 2])
item1 = np.sum(mat[grid1])
grid2 = np.ix_([1], [0, 1, 2])
item2 = np.sum(mat[grid2])
grid3 = np.ix_([0], [3, 4])
item3 = np.sum(mat[grid3])
grid4 = np.ix_([1], [3, 4])
item4 = np.sum(mat[grid4])
result = np.array([[item1, item3], [item2, item4]])
```
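The same grouping can also be written as a single NumPy expression (the hypothetical `groups` list below mirrors the question's index sets):

```python
import numpy as np

mat = np.array([[1, 2, 3, 4, 5],
                [5, 4, 3, 2, 1]])
groups = [[0, 1, 2], [3, 4]]  # the question's column index sets

# Sum the selected columns within each group, then stack the group sums.
result = np.column_stack([mat[:, idx].sum(axis=1) for idx in groups])
print(result)
# [[ 6  9]
#  [12  3]]
```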
Upvotes: -1 <issue_comment>username_2: You can do that with [`tf.segment_sum`](https://www.tensorflow.org/api_docs/python/tf/segment_sum):
```
import tensorflow as tf
nums = [[1, 2, 3, 4, 5],
[5, 4, 3, 2, 1]]
column_idx = [[0, 1, 2], [3, 4]]
with tf.Session() as sess:
# Data as TF tensor
data = tf.constant(nums)
# Make segment ids
segments = tf.concat([tf.tile([i], [len(lst)]) for i, lst in enumerate(column_idx)], axis=0)
# Select columns
data_cols = tf.gather(tf.transpose(data), tf.concat(column_idx, axis=0))
col_sum = tf.transpose(tf.segment_sum(data_cols, segments))
print(sess.run(col_sum))
```
Output:
```
[[ 6 9]
[12 3]]
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 522 | 1,825 | <issue_start>username_0: I have the insert below running in PL/SQL code:
```
PROCEDURE insertA(param in varchar2)
IS
v_col VARCHAR2(32000) := 'colA, colB, colC';
  v_ins VARCHAR2(32000) := 'INSERT INTO '||getU||'.'||My_TABLE||' ('||v_col||') '
                        || '(SELECT ''A'',
                                    valueB as B,
                                    valueC as C
                               from table2
                              where colX = '''||param||''')';
BEGIN
EXECUTE IMMEDIATE v_ins;
END;
```
But when `select 'A', valueB, valueC from table2` returns 0 rows, I get an error like:
ORA-01403 No data was found from the objects.
**How to proceed anyway?**
Or just skip the insert, as this insertA is executed from WPF by C#.<issue_comment>username_1: You can use Exception when data is not found.
```
EXCEPTION
WHEN NO_DATA_FOUND THEN
```
I'm new to PL/SQL so I can't help too much, but I really hope this will help you.
Upvotes: 1 <issue_comment>username_2: To expand on @AdrianNicolae's suggestion, rewrite your procedure as:
```
PROCEDURE insertA(param in varchar2)
IS
v_col VARCHAR2(32000) := 'colA, colB, colC';
  v_ins VARCHAR2(32000) := 'INSERT INTO '||getU||'.'||My_TABLE||' ('||v_col||') '
                        || '(SELECT ''A'',
                                    valueB as B,
                                    valueC as C
                               from table2
                              where colX = '''||param||''')';
BEGIN
EXECUTE IMMEDIATE v_ins;
EXCEPTION
WHEN NO_DATA_FOUND THEN
NULL;
END;
```
We're just adding an exception handler for the NO\_DATA\_FOUND (ORA-1403) exception, which in this case catches the exception but doesn't do anything with it, in effect allowing the code to proceed onwards when this exception is encountered.
Best of luck.
Upvotes: 3 [selected_answer] |
2018/03/20 | 700 | 1,917 | <issue_start>username_0: I would like to calculate this mathematical summation:
```
Σ(n=1 to 36) (C(36,n))*((-1)**(n+1))*0,23*(0,29**(n-1))
```
where C(36, n) means the number of combinations (binomial coefficient) of these two numbers. I already have a function for that:
```
def C(n,k):
C = factorial(n)/(factorial(k)*factorial(n-k))
return C
```
How can I make this summation?
I started learning Python this semester, and I'm happy to use it for the homework.
Thank you.<issue_comment>username_1: You could use a loop to sum all the values into a single variable.
```
val = 0
for n in range(1,37):
    intrm_val = C(36,n)*((-1)**(n+1))*0.23*(0.29**(n-1))
val = val + intrm_val
```
Do the same by using list comprehension like this:
```
intrm_val_list = [C(36,n)*((-1)**(n+1))*0.23*(0.29**(n-1)) for n in range(1,37)]
val = sum(intrm_val_list) # sum is built-in function of python.
```
Lambda is similar to inline functions.
```
def foo(a):
return a*a
```
The above function can easily be rewritten as `lambda a : a*a`. Isn't this amazing!? We can rewrite the expression from our list comprehension as a lambda like this:
```
lambda n : C(36,n)*((-1)**(n+1))*0.23*(0.29**(n-1))
```
`map` is again a built-in function in python for applying a single function to multiple elements of an iterable( list in our case) simultaneously.
e.g. `map(func1, list_of_values)`.
`range` produces a sequence of numbers within the given range, excluding the end value.
Combining all of the above information we get the answer given by another good fellow @username_2.
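A tiny self-contained illustration of the loop / comprehension / `map`+`lambda` equivalence described above:

```python
# Loop, comprehension, and map+lambda all compute the same squares.
nums = [1, 2, 3, 4]

squared_loop = []
for a in nums:
    squared_loop.append(a * a)

squared_comp = [a * a for a in nums]
squared_map = list(map(lambda a: a * a, nums))

assert squared_loop == squared_comp == squared_map == [1, 4, 9, 16]
```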
Upvotes: 0 <issue_comment>username_2: How about `sum(map(lambda n: C(36,n)*((-1)**(n+1))*0.23*(0.29**(n-1)), range(1, 36+1)))`?
Upvotes: 1 <issue_comment>username_3: Did you aim for something like this:
```
def C(n, k):
    return factorial(n)/(factorial(k)*factorial(n-k))
sum([C(36,i)*((-1)**(i+1))*0.23*(0.29**(i-1)) for i in range(1,37)])
```
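As a footnote, here is a fully self-contained version (assuming the question's European decimal commas `0,23`/`0,29` were meant as `0.23`/`0.29`), checked against the closed form that the binomial theorem gives:

```python
from math import factorial

def C(n, k):
    # Binomial coefficient "n choose k"; integer division keeps it exact.
    return factorial(n) // (factorial(k) * factorial(n - k))

total = sum(C(36, n) * (-1) ** (n + 1) * 0.23 * 0.29 ** (n - 1)
            for n in range(1, 37))

# Binomial theorem: sum_{n=1}^{36} C(36,n) * (-0.29)^n = (1 - 0.29)^36 - 1,
# so total = 0.23/0.29 * (1 - 0.71**36).
closed_form = 0.23 / 0.29 * (1 - 0.71 ** 36)
print(total, closed_form)
```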
Upvotes: 1 [selected_answer] |
2018/03/20 | 725 | 2,780 | <issue_start>username_0: I am rather new to programming and have begun designing a text-adventure game in Java. I am currently having difficulty finding out how to make the text-adventure playable on machines that don't have a JRE or an IDE such as BlueJ or Eclipse installed. I would like to share my game with friends but don't want them to have to install the JDK and a JRE to play it.
Thanks for the help in advance.<issue_comment>username_1: Terminology
===========
* JDK
This is the **Java SE Development Kit**, which contains all of the JRE and also tools for building java programs.
* JRE
This is the **Java Runtime Environment**, which only has the bits needed to run java programs
* IDE
This is the **Integrated Development Environment**, which only the developer needs, such as Eclipse, BlueJ or IDEA.
What do users need
==================
To run a JAR or class file the user needs:
* Any dependencies that the program needs
* Either a JDK or a JRE
Using JLink
===========
You can use JLink to make a small version of the JRE, with only the packages which your program needs to run, reducing the install size, this can then be packaged with your game.
**Advantages**
* Smaller than a full JRE
* Faster to load
**Disadvantages**
* If lots of apps have their own JRE then space may be wasted
* You will need to update the JREs individually if an update happens
Upvotes: 0 <issue_comment>username_2: 1) **Using Java [Web Start](https://docs.oracle.com/javase/tutorial/deployment/webstart/index.html)**
Java Web Start software provides the power to launch full-featured applications with a single click. Users can download and launch applications, such as a complete spreadsheet program or an Internet chat client, without going through lengthy installation procedures.
2) **The [Java Packager Tool](https://docs.oracle.com/javase/8/docs/technotes/guides/deploy/packager.html)** (For Java 9)
The [Java Packager](https://docs.oracle.com/javase/8/docs/technotes/guides/deploy/packager.html) tool can be used to compile, package, sign, and deploy Java and JavaFX applications from the command line. It can be used as an alternative to an Ant task or building the applications in an IDE
3) **Wrapping Java to EXE**
Multi-platform tools can generate native installers for multiple platforms —
Windows, OS X, Linux
* [Advanced Installer for Java](https://www.advancedinstaller.com/java.html)
* [install4j](https://www.ej-technologies.com/products/install4j/overview.html)
* [InstallAnywhere](https://www.flexera.com/producer/products/software-installation/installanywhere/)
* [JWrapper](https://www.jwrapper.com)
* [IzPack](http://izpack.org)
* [InstallShield](https://www.flexera.com/producer/products/software-installation/installshield-software-installer/)
Upvotes: 1 |
2018/03/20 | 847 | 3,174 | <issue_start>username_0: I'm using argparse.ArgumentParser.parse\_known\_args to perform some heuristics on the command-line given to an external utility, while only specifying the relevant parts of its syntax in my Python code. However when known and unknown arguments are given in short form and are joined together in (as in `ls -lh`), it doesn't detect them.
Example:
```
import argparse
parser = argparse.ArgumentParser(prog='PROG')
parser.add_argument('-x', action='store_true')
parser.add_argument('-y', action='store_true')
parser.parse_known_args(['-xy', '-z']) # OK, gives: (Namespace(x=True, y=True), ['-z']
parser.parse_known_args(['-xyz']) # Fails with: PROG: error: argument -y: ignored explicit argument 'z'
```
Is there a way to get it to make better heuristics in this case?
2018/03/20 | 580 | 1,874 | <issue_start>username_0: I'm attempting to write a program that fills an 8x8 checkerboard with two alternating colors, but I am struggling in creating a method `public void fillCheckerboard(Color[][] board)` to test it.
I'm trying to print out an array that shows 'X' for black and 'O' for white.<issue_comment>username_1: You can use the modulo 2 of the sum of x and y to get this checker pattern. Here is some example code which populates a two-dimensional array to do so. Any questions, just let me know.
```
public class HelloWorld {
public static void main(String []args){
String[][] board = new String[8][8];
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
board[i][j] = (i + j) % 2 == 0 ? "X" : "O";
}
}
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
System.out.print(board[i][j]+" ");
}
System.out.println();
}
}
}
```
It will give you an output like you would most likely want:
```
X O X O X O X O
O X O X O X O X
X O X O X O X O
O X O X O X O X
X O X O X O X O
O X O X O X O X
X O X O X O X O
O X O X O X O X
```
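The `(i + j) % 2` parity rule is language-agnostic; a quick Python sketch (just an illustration of the rule, not part of the Java solution) reproduces the same alternating pattern:

```python
# (i + j) % 2 flips between 0 and 1 along every row and every column,
# which is exactly what makes the pattern alternate.
board = [["X" if (i + j) % 2 == 0 else "O" for j in range(8)]
         for i in range(8)]

for row in board:
    print(" ".join(row))
```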
Upvotes: 3 [selected_answer]<issue_comment>username_2: Something like this should work:
```
private static class Color {
private final char c;
public Color(char c) {
this.c = c;
}
@Override
public String toString() {
return "" + c;
}
}
public static final Color WHITE = new Color('X');
public static final Color BLACK = new Color(' ');
public void fillCheckerboard(Color[][] board) {
Color c = BLACK;
for (int i = 0; i < board.length; i++) {
for (int j = 0; j < board[i].length; j++) {
board[i][j] = c;
c = c == BLACK ? WHITE : BLACK;
}
}
}
```
Upvotes: 1 |
2018/03/20 | 1,008 | 4,087 | <issue_start>username_0: It's possible to install Application Insights via the extensions section in Azure App Services, but it's also possible to just install the packages via NuGet and define the `APPINSIGHTS_INSTRUMENTATIONKEY` application setting. You can also do both.
What is the difference?
Edit:
I have found what the differences are between installing the extension or the NuGet packages:
>
> You can configure monitoring by instrumenting the app in either of two ways:
>
>
> Run-time - You can select a performance monitoring extension when your web app is already live. It isn't necessary to rebuild or re-install your app. You get a standard set of packages that monitor response times, success rates, exceptions, dependencies, and so on.
>
>
> Build time - You can install a package in your app in development. This option is more versatile. In addition to the same standard packages, you can write code to customize the telemetry or to send your own telemetry. You can log specific activities or record events according to the semantics of your app domain.
>
>
>
Source: <https://learn.microsoft.com/en-us/azure/application-insights/app-insights-azure-web-apps#run-time-or-build-time>
But what if you do both? Will there be anything beneficial about it?<issue_comment>username_1: >
> But what if you do both? Will there be anything beneficial about it?
>
>
>
As you know, we can install the packages via NuGet to use Application Insights. This way, we can add custom telemetry data in our code and monitor the telemetry data with the Application Insights tools in Visual Studio. This is very convenient. You could also refer to this [article](https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-custom-events-metrics) to add custom telemetry data.
Code in the MVC project:
```
public ActionResult Index()
{
Trace.TraceInformation("my trace info Home/Index");
var telemetry = new Microsoft.ApplicationInsights.TelemetryClient();
RequestTelemetry requestTelemetry = new RequestTelemetry();
telemetry.TrackTrace("Home/Index Main");
telemetry.TrackPageView("Home/Index");
return View();
}
```
The telemetry data in Application insight tool:
[](https://i.stack.imgur.com/Jorx9.png)
With Application Insights in **App Service**, you can only see limited data and trends over the past 24 hours. This is very convenient for viewing the main telemetry data in the App Service directly, but if you want more details, it is not a good choice.
[](https://i.stack.imgur.com/qhcEj.png)
The most comprehensive monitoring data and services are in the **Application Insights service**. You can click '**View more in application insights**' in the App Service monitor extension to get there, or you can go to the Application Insights service directly.
[](https://i.stack.imgur.com/FVDt1.png)
**Time Range** in the Application Insights service (including custom time ranges).
[](https://i.stack.imgur.com/k5jGe.png)
Upvotes: 1 <issue_comment>username_2: >
> But what if you do both? Will there be anything beneficial about it?
>
>
>
* The extension detects that your app has already brought Application Insights with it and won't do anything, except dropping a profiler, which helps collect full SQL Statement in Dependencies. Without the profiler full SQL Statement won't be collected but everything else should just work fine.
(If you are using 2.3.0 or earlier of the SDK, or if your application targets an old .NET Framework like 4.0, then the profiler does better correlation of dependencies as well.)
In short, starting with 2.4.0 of the SDK, the only advantage of installing the extension on top of the NuGet installation is to get full SQL statements in Dependency Telemetry.
Upvotes: 4 [selected_answer] |
2018/03/20 | 339 | 1,130 | <issue_start>username_0: I want to send text to a field and submit it. The HTML code is shown below. How should I do it?
This is my HTML code:
```
<textarea class="_bilrf"></textarea>
```
I want to send text to the box with this code:
```
driver.execute_script("arguments[0].value = arguments[1]", driver.find_element_by_css_selector("textarea._bilrf"), "nice!")
```
How can I press enter to send my text?
Could you help me, please?<issue_comment>username_1: Try using `send_keys()` (<https://seleniumhq.github.io/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webelement.html#selenium.webdriver.remote.webelement.WebElement.send_keys>) and special keys module (<https://seleniumhq.github.io/selenium/docs/api/py/webdriver/selenium.webdriver.common.keys.html#module-selenium.webdriver.common.keys>)
```
from selenium.webdriver.common.keys import Keys
...
driver.find_element_by_css_selector("textarea._bilrf").send_keys("nice!", Keys.ENTER)
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: ```
from selenium.webdriver.common.keys import Keys
a = driver.find_element_by_css_selector("textarea._bilrf")
a.send_keys("nice!")
a.submit()
```
Upvotes: 0 |
2018/03/20 | 1,894 | 6,857 | <issue_start>username_0: ```
ClassicEditor
.create( editorElement, {
ckfinder: {
uploadUrl: 'my_server_url'
}
} )
.then( ... )
.catch( ... );
```
What should be my server response? I am using Java in the backend.
Whatever my response is, it throws a dialog box 'cannot upload file'.<issue_comment>username_1: The `ckfinder.uploadUrl` property configures the [`CKFinderUploadAdapter`](https://docs.ckeditor.com/ckeditor5/latest/api/module_adapter-ckfinder_uploadadapter-CKFinderUploadAdapter.html) plugin. This plugin is responsible for communication with the [CKFinder's server-side connector](https://ckeditor.com/ckeditor-4/ckfinder/).
So, in other words, your server should run the CKFinder's server-side connector. This is a proprietary software, so I won't go deeper into how it works.
If you wish to learn about all ways to configure image upload, please read [How to enable image upload support in CKEditor 5?](https://stackoverflow.com/questions/46765197/how-to-enable-image-upload-support-in-ckeditor-5).
Upvotes: 1 <issue_comment>username_2: Success response :
```
{
"uploaded": true,
"url": "http://1172.16.58.3/uploaded-image.jpeg"
}
```
Failure response :
```
{
"uploaded": false,
"error": {
"message": "could not upload this image"
}
}
```
Upvotes: 5 <issue_comment>username_3: You can configure CKEditor to upload files
```
ClassicEditor.create( document.querySelector( '#editor' ), {
cloudServices: {
tokenUrl: 'https://example.com/cs-token-endpoint',
uploadUrl: 'https://your-organization-id.cke-cs.com/easyimage/upload/'
}
} )
.then( ... )
.catch( ... );
```
For more details visit this link: [<https://docs.ckeditor.com/ckeditor5/latest/features/image-upload.html>](https://docs.ckeditor.com/ckeditor5/latest/features/image-upload.html)
Upvotes: 0 <issue_comment>username_4: ```
class UploadAdapter {
constructor( loader ) {
this.loader = loader;
this.upload = this.upload.bind(this)
this.abort = this.abort.bind(this)
}
upload() {
const data = new FormData();
data.append('typeOption', 'upload_image');
data.append('file', this.loader.file);
return axios({
url: `${API}forums`,
method: 'post',
data,
headers: {
'Authorization': tokenCopyPaste()
},
withCredentials: true
}).then(res => {
console.log(res)
var resData = res.data;
resData.default = resData.url;
return resData;
}).catch(error => {
console.log(error)
return Promise.reject(error)
});
}
abort() {
// Reject promise returned from upload() method.
}
}
<CKEditor
    editor={ ClassicEditor }
    onInit={ editor => {
editor.ui.view.editable.element.style.height = '200px';
editor.plugins.get( 'FileRepository' ).createUploadAdapter = function( loader ) {
return new UploadAdapter( loader );
};
} }
onChange={ ( event, editor ) => {
console.log(editor.getData())
} }
/>
```
Upvotes: 2 <issue_comment>username_5: This is my code for CKEditor 5 and the Phalcon framework. `#products_desc` points to the textarea id.
```
var myEditor;
ClassicEditor
.create( document.querySelector( '#products_desc' ) ,
{
ckfinder: {
uploadUrl: 'Ckfinder/upload'
}
}
)
.then( editor => {
console.log( 'Editor was initialized', editor );
myEditor = editor;
} )
.catch( err => {
console.error( err.stack );
} );
```
and my php controller:
```
<?php
use Phalcon\Mvc\Controller;
class CkfinderController extends Controller
{
public function uploadAction()
{
try {
if ($this->request->hasFiles() == true) {
$errors = []; // Store all foreseen and unforseen errors here
$fileExtensions = ['jpeg','jpg','png','gif','svg'];
$uploadDirectory = "../public/Uploads/";
$Y=date("Y");
$M=date("m");
foreach ($this->request->getUploadedFiles() as $file) {
if (in_array($file->getExtension(),$fileExtensions)) {
if($file->getSize()<2000000)
{
if (!file_exists($uploadDirectory.$Y)) {
mkdir($uploadDirectory.$Y, 0777, true);
}
if (!file_exists($uploadDirectory.$Y.'/'.$M)) {
mkdir($uploadDirectory.$Y.'/'.$M, 0777, true);
}
$namenew=md5($file->getName().time()).'.'.$file->getExtension();
$uploadDirectory .=$Y.'/'.$M.'/';
$file->moveTo($uploadDirectory.$namenew);
}
else{
$errors[] = "This file is more than 2MB. Sorry, it has to be less than or equal to 2MB";
}
}
else{$errors[] = "This file extension is not allowed. Please upload a JPEG ,svg,gif,,jpg,PNG file";}
if(empty($errors))
{
echo '{
"uploaded": true,
"url": "http://localhost/cms/public/Uploads/'.$Y.'/'.$M.'/'.$namenew.'"}';
}
else{
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image1"
}}';}
}
}
else{
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image1"
}}';}
}
catch (\Exception $e) {
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image0"
}}';
}
}
}
?>
```
Upvotes: 2 <issue_comment>username_6: How I do it in React; it should be similar. I have a custom uploader for this.
`UploadAdapter.js`
```
import axios from "axios"

// Custom Upload Adapter
export class UploadAdapter {
constructor(loader) {
this.loader = loader
}
async upload() {
return this.loader.file.then((file) => {
const data = new FormData()
data.append("file", file)
const genericError = `Couldn't upload file: ${file.name}.`
return axios({
data,
method: "POST",
url: "API_UPLOAD_URL",
headers: {
"Content-Type": "multipart/form-data",
},
onUploadProgress: (progressEvent) => {
this.loader.uploadTotal = progressEvent.total
this.loader.uploaded = progressEvent.loaded
const uploadPercentage = parseInt(
Math.round((progressEvent.loaded / progressEvent.total) * 100)
)
},
})
.then(({ data }) => ({ default: data.url }))
.catch(({ error }) => Promise.reject(error?.message ?? genericError))
})
}
abort() {
return Promise.reject()
}
}
// CKEditor FileRepository
export function uploadAdapterPlugin(editor) {
editor.plugins.get("FileRepository").createUploadAdapter = (loader) =>
new UploadAdapter(loader)
}
```
Using the above:
```
const CustomEditor = () => (
{
editor.ui.view.editable.element.style.height = "200px"
uploadAdapterPlugin(editor)
}}
onChange={(event, editor) => {
console.log(editor.getData())
}}
/>
)
```
Upvotes: 1 |
2018/03/20 | 2,722 | 9,339 | <issue_start>username_0: So, I try to create my own neural network. Something really simple.
My input is the MNIST database of handwritten digits.
Input: 28\*28 neurons (Images).
Output: 10 neurons (0/1/2/3/4/5/6/7/8/9).
So my network is as follow: 28\*28 -> 15 -> 10.
The problem remains in my estimated output. Indeed, it seems I have a gradient explosion.
The output given by my network is here: <https://pastebin.com/EFpBGAZd>
As you can see, the first estimated output is wrong, so my network adjusts the weights through backpropagation. But it doesn't seem to update the weights correctly: the estimated output stays too high compared to the second-highest value.
So the first estimated output keeps being the best estimated output for the following training samples (13 in my example).
My backpropagation code:
```
VOID BP(NETWORK &Network, double Target[OUTPUT_NEURONS]) {
double DeltaETotalOut = 0;
double DeltaOutNet = 0;
double DeltaErrorNet = 0;
double DeltaETotalWeight = 0;
double Error = 0;
double ErrorTotal = 0;
double OutputUpdatedWeights[OUTPUT_NEURONS*HIDDEN_NEURONS] = { 0 };
unsigned int _indexOutput = 0;
double fNetworkError = 0;
//Calculate Error
for (int i = 0; i < OUTPUT_NEURONS; i++) {
fNetworkError += 0.5*pow(Target[i] - Network.OLayer.Cell[i].Output, 2);
}
Network.Error = fNetworkError;
//Output Neurons
for (int i = 0; i < OUTPUT_NEURONS; i++) {
DeltaETotalOut = -(Target[i] - Network.OLayer.Cell[i].Output);
DeltaOutNet = ActivateSigmoidPrime(Network.OLayer.Cell[i].Output);
for (int j = 0; j < HIDDEN_NEURONS; j++) {
OutputUpdatedWeights[_indexOutput] = Network.OLayer.Cell[i].Weight[j] - 0.5 * DeltaOutNet*DeltaETotalOut* Network.HLayer.Cell[j].Output;
_indexOutput++;
}
}
//Hidden Neurons
for (int i = 0; i < HIDDEN_NEURONS; i++) {
ErrorTotal = 0;
for (int k = 0; k < OUTPUT_NEURONS; k++) {
DeltaETotalOut = -(Target[k] - Network.OLayer.Cell[k].Output);
DeltaOutNet = ActivateSigmoidPrime(Network.OLayer.Cell[k].Output);
DeltaErrorNet = DeltaETotalOut * DeltaOutNet;
Error = DeltaErrorNet * Network.OLayer.Cell[k].Weight[i];
ErrorTotal += Error;
}
DeltaOutNet = ActivateSigmoidPrime(Network.HLayer.Cell[i].Output);
for (int j = 0; j < INPUT_NEURONS; j++) {
DeltaETotalWeight = ErrorTotal * DeltaOutNet*Network.ILayer.Image[j];
Network.HLayer.Cell[i].Weight[j] -= 0.5 * DeltaETotalWeight;
}
}
//Update Weights
_indexOutput = 0;
for (int i = 0; i < OUTPUT_NEURONS; i++) {
for (int j = 0; j < HIDDEN_NEURONS; j++) {
Network.OLayer.Cell[i].Weight[j] = OutputUpdatedWeights[_indexOutput];
_indexOutput++;
}
}}
```
How can I solve this issue?
I haven't done anything with the hidden layer or biases yet; could that be the cause?
Thanks<issue_comment>username_1: The `ckfinder.uploadUrl` property configures the [`CKFinderUploadAdapter`](https://docs.ckeditor.com/ckeditor5/latest/api/module_adapter-ckfinder_uploadadapter-CKFinderUploadAdapter.html) plugin. This plugin is responsible for communication with the [CKFinder's server-side connector](https://ckeditor.com/ckeditor-4/ckfinder/).
So, in other words, your server should run CKFinder's server-side connector. This is proprietary software, so I won't go deeper into how it works.
If you wish to learn about all ways to configure image upload, please read [How to enable image upload support in CKEditor 5?](https://stackoverflow.com/questions/46765197/how-to-enable-image-upload-support-in-ckeditor-5).
Upvotes: 1 <issue_comment>username_2: Success response :
```
{
"uploaded": true,
"url": "http://127.0.0.1/uploaded-image.jpeg"
}
```
Failure response :
```
{
"uploaded": false,
"error": {
"message": "could not upload this image"
}
}
```
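Whatever server stack you choose, the connector only needs to return one of those two JSON shapes. A small Python sketch of building them (the helper name and the default error message are mine, not part of any CKEditor API):

```python
import json

def upload_response(url=None, error=None):
    # Success shape: {"uploaded": true, "url": ...}
    if url is not None:
        return json.dumps({"uploaded": True, "url": url})
    # Failure shape: {"uploaded": false, "error": {"message": ...}}
    return json.dumps({"uploaded": False,
                       "error": {"message": error or "could not upload this image"}})

ok = upload_response(url="http://127.0.0.1/uploaded-image.jpeg")
bad = upload_response(error="file too large")
print(ok)
print(bad)
```

Per the shapes above, the editor reads the `url` field on success and the `error.message` on failure.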
Upvotes: 5 <issue_comment>username_3: You can configure CKEditor to upload files
ClassicEditor.create( document.querySelector( '#editor' ), {
```
cloudServices: {
tokenUrl: 'https://example.com/cs-token-endpoint',
uploadUrl: 'https://your-organization-id.cke-cs.com/easyimage/upload/'
}
} )
.then( ... )
.catch( ... );
```
For the more details visit this link : [<https://docs.ckeditor.com/ckeditor5/latest/features/image-upload.html>](https://docs.ckeditor.com/ckeditor5/latest/features/image-upload.html)
Upvotes: 0 <issue_comment>username_4: ```
class UploadAdapter {
constructor( loader ) {
this.loader = loader;
this.upload = this.upload.bind(this)
this.abort = this.abort.bind(this)
}
upload() {
const data = new FormData();
data.append('typeOption', 'upload_image');
data.append('file', this.loader.file);
return axios({
url: `${API}forums`,
method: 'post',
data,
headers: {
'Authorization': tokenCopyPaste()
},
withCredentials: true
}).then(res => {
console.log(res)
var resData = res.data;
resData.default = resData.url;
return resData;
}).catch(error => {
console.log(error)
return Promise.reject(error)
});
}
abort() {
// Reject promise returned from upload() method.
}
}
{
editor.ui.view.editable.element.style.height = '200px';
editor.plugins.get( 'FileRepository' ).createUploadAdapter = function( loader ) {
return new UploadAdapter( loader );
};
} }
onChange={ ( event, editor ) => {
console.log(editor.getData())
} }
/>
```
Upvotes: 2 <issue_comment>username_5: this is my code for Ckeditor 5 and Phalcon framework.#products\_desc point to textarea id.
```
var myEditor;
ClassicEditor
.create( document.querySelector( '#products\_desc' ) ,
{
ckfinder: {
uploadUrl: 'Ckfinder/upload'
}
}
)
.then( editor => {
console.log( 'Editor was initialized', editor );
myEditor = editor;
} )
.catch( err => {
console.error( err.stack );
} );
```
and my php controller:
```
<?php
use Phalcon\Mvc\Controller;
class CkfinderController extends Controller
{
public function uploadAction()
{
try {
if ($this->request->hasFiles() == true) {
$errors = []; // Store all foreseen and unforseen errors here
$fileExtensions = ['jpeg','jpg','png','gif','svg'];
$uploadDirectory = "../public/Uploads/";
$Y=date("Y");
$M=date("m");
foreach ($this->request->getUploadedFiles() as $file) {
if (in_array($file->getExtension(),$fileExtensions)) {
if($file->getSize()<2000000)
{
if (!file_exists($uploadDirectory.$Y)) {
mkdir($uploadDirectory.$Y, 0777, true);
}
if (!file_exists($uploadDirectory.$Y.'/'.$M)) {
mkdir($uploadDirectory.$Y.'/'.$M, 0777, true);
}
$namenew=md5($file->getName().time()).'.'.$file->getExtension();
$uploadDirectory .=$Y.'/'.$M.'/';
$file->moveTo($uploadDirectory.$namenew);
}
else{
$errors[] = "This file is more than 2MB. Sorry, it has to be less than or equal to 2MB";
}
}
else{$errors[] = "This file extension is not allowed. Please upload a JPEG ,svg,gif,,jpg,PNG file";}
if(empty($errors))
{
echo '{
"uploaded": true,
"url": "http://localhost/cms/public/Uploads/'.$Y.'/'.$M.'/'.$namenew.'"}';
}
else{
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image1"
}';}
}
}
else{
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image1"
}';}
}
catch (\Exception $e) {
echo '{
"uploaded": false,
"error": {
"message": "could not upload this image0"
}';
}
}
}
?>
```
Upvotes: 2
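As a footnote to the neural-network question that opened this thread: for a sigmoid output unit, the update being computed is `w -= lr * -(target - out) * out * (1 - out) * h`. A minimal Python sketch of that single step (toy sizes; the function names are mine, not from the question's code):

```python
import math

def sigmoid(x):
    # Logistic activation, as in the question's network
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime_from_output(out):
    # Derivative expressed via the unit's output, like ActivateSigmoidPrime
    return out * (1.0 - out)

def output_layer_step(weights, hidden_out, out, target, lr=0.5):
    # Gradient of 0.5 * (target - out)**2 w.r.t. each incoming weight
    delta = -(target - out) * sigmoid_prime_from_output(out)
    return [w - lr * delta * h for w, h in zip(weights, hidden_out)]

out = sigmoid(0.0)  # 0.5
new_w = output_layer_step([0.1, 0.2], [1.0, 0.5], out, target=1.0)
print(new_w)  # weights move up, towards the target
```

Note that `out * (1 - out)` goes to zero for a saturated unit, so a unit stuck near 1 barely updates; that may be related to the stuck-winner symptom described in the question.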
2018/03/20 | 2,205 | 6,247 | <issue_start>username_0: How do I use an if statement to only run what's inside the brackets when a character is entered, but send the user to the else branch when an integer or nothing is entered?
```
#include <stdio.h>  /* printf, scanf */
#include <stdlib.h> /* system */
int main(void)
{
// use boolean for printing if user fails(true) or passes(false) test
int score = 0; //score starts at 0, and 10 points are added if correct.
char answer1;
char answer2;
char answer3;
char answer4;
char answer5;
char answer6;
char answer7;
char answer8;
char answer9;
char answer10;
system("clear");
printf("\n1. What is the capital of Russia?\n");
printf("a)Washington DC \nb)Moscow\nc)Copenhagen\nd)Stockholm\n");
scanf("%s", &answer1);
system("clear");
if (answer1 == 'b') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer1 != 'b') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
else {
printf("\nPlease enter in a valid response\n");
}
printf("\n2. Who was the first president of the United States?\n");
printf("a)Washington\nb)Lincoln\nc)Obama\nd)Adams\n");
    scanf("%s", &answer2);
system("clear");
if (answer2 == 'a') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer2 != 'a') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n3. How many states are there in the United States?\n");
printf("a)25\nb)87\nc)42\nd)50\n");
scanf("%s", &answer3);
system("clear");
if (answer3 == 'd') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer3 != 'd') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n4. What is the square root of 144?\n");
printf("a)12\nb)14\nc)7\nd)24\n");
scanf("%s", &answer4);
system("clear");
if (answer4 == 'a') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer4 != 'a') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n5. What is the most basic unit of human life?\n");
printf("a)Skin\nb)Mitochondra\nc)Cell\nd)ATP\n");
scanf("%s", &answer5);
system("clear");
if (answer5 == 'c') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer5 != 'c') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n6. What programming language is this written in?\n");
printf("a)Objective-C\nb)C++\nc)C#\nd)C\n");
scanf("%s", &answer6);
system("clear");
if (answer6 == 'd') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer6 != 'd') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n7. What is apple's newest programming language?\n");
printf("a)Swift\nb)Java\nc)Python\nd)Objective-C\n");
scanf("%s", &answer7);
system("clear");
if (answer7 == 'a') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer7 != 'a') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n8. What is the most portable way to store data?\n");
printf("a)Ram\nb)Flash Drive\nc)Solid-state drive\nd)Hard Drive\n");
scanf("%s", &answer8);
system("clear");
if (answer8 == 'b') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer8 != 'b') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n9. What is the name of IBM's AI?\n");
printf("a)Arnold\nb)Jonathan\nc)Watson\nd)Pablo\n");
scanf("%s", &answer9);
system("clear");
if (answer9 == 'c') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer9 != 'c') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
printf("\n10. What is the universal sign for peace?\n");
printf("a)Index finger and thumb raised\nb)Index finger raised\nc)Middle finger raised\nd)Index and middle finger raised\n");
scanf("%s", &answer10);
system("clear");
if (answer10 == 'd') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer10 != 'd') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
if (score > 70) {
printf("\nCongratulations! You have passed!\n");
}
else {
printf("\nUnfortunately, you have failed. Please try again\n");
}
}
```
This is my code, I tried putting an else statement below the if statements saying to enter in a valid response, but it doesn't work. Please help.
Edit: I fixed it
```
q1:
printf("\n1. What is the capital of Russia?\n");
printf("a)Washington DC \nb)Moscow\nc)Copenhagen\nd)Stockholm\n");
scanf(" %c", &answer1);
system("clear");
if (answer1 == 'a' || answer1 == 'b' || answer1 == 'c' || answer1 == 'd') {
if (answer1 == 'b') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
goto q2;
}
if (answer1 != 'b') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
}
else {
printf("Please enter a letter a-b!");
goto q1;
}
```<issue_comment>username_1: Assuming:
```
char answer1;
```
then
```
scanf("%s", &answer1);
```
tries to read a string and place it in `answer1`. This will always overwrite the memory following `answer1`, because the string is null-terminated even if you enter only one letter. As a result, your program's behaviour becomes undefined. Instead (see comment of P.P.), use:
```
scanf(" %c", &answer1);
```
Upvotes: 0 <issue_comment>username_2: First check to see if the user entered something valid...
```
if (answer1 != '\0') {
if (answer1 == 'b') {
printf("Correct!\n");
(score += 10); //if answer is correct, score adds 10
printf("Your score is: %i\n",score);
}
if (answer1 != 'b') {
printf("Wrong!\n");
printf("Your score is: %i\n",score);
}
}
else {
printf("please enter valid answer!\n");
}
```
Upvotes: 1 |
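Language aside, both answers boil down to: validate first, then score. A compact Python sketch of that logic (the function name and return convention are mine):

```python
def score_answer(answer, correct, valid=("a", "b", "c", "d")):
    # None signals invalid input, so the caller should re-prompt;
    # otherwise return the points earned for this question.
    if answer not in valid:
        return None
    return 10 if answer == correct else 0

print(score_answer("b", "b"))  # 10
print(score_answer("a", "b"))  # 0
print(score_answer("7", "b"))  # None -> ask again
```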
2018/03/20 | 688 | 2,493 | <issue_start>username_0: I'm trying to get all the properties in a user's watchlist(s), where each list carries a user id.
The relationship is set up as follows.
Each watchlist is related to a user id. Each Property has a watchlist id.
I need all properties, in all the watchlists belonging to that user.
The watchlist gets the user\_id automatically upon creation.
Here are my models
Watchlist
```
public function properties(){
return $this->hasMany('App\WatchedProperties');
}
public function user(){
return $this->belongsTo('App\User');
}
```
WatchedProperties
```
public function watchlist(){
return $this->belongsTo('App\Watchlist');
}
```
Query
Gets all books in every list disregarding user ids and list ids
```
$Watchlists = WatchedBooks::all();
```
Currently gets all books regardless of userid.
I need all books in all of the user's lists.
A user could have multiple lists
List A
List B
So something like
All books from all lists where the list id is related to user id.
This is what the Watchlist DB looks like
[WatchlistDB](https://i.stack.imgur.com/ss1XZ.png)
This is what the WatchedBooks DB looks like
[Books in watchlist](https://i.stack.imgur.com/qYLoY.png)<issue_comment>username_1: whereHas allows you to query in relationships. More information on this can be found at <https://laravel.com/docs/5.6/eloquent-relationships#querying-relationship-existence>
```
$Watchlists = WatchedBooks::whereHas('watchlist', function($query) use ($user_id){
$query->where('user_id', $user_id)
})->get();
```
The script above gets all WatchedBooks that are associated with a Watchlist that is owned by a user ($user\_id).
Upvotes: 0 <issue_comment>username_2: Laravel has a beautiful solution for this: you can add a `->hasManyThrough` relation to the user model. You can find more information about this type of relation in [the Laravel documentation about Eloquent relationships](https://laravel.com/docs/5.5/eloquent-relationships#has-many-through "Eloquent: Relationships - Laravel - The PHP Framework For Web Artisans").
The user model will look like this:
```
class User extends Model {
[...]
public function watchedBooks()
return $this->hasManyThrough('App\WatchedBook', 'App\Watchlist');
}
[...]
}
```
Then you can get all the `WatchedBooks`, associated with the user, using
```
$user->watchedBooks;
```
or
```
$user->watchedBooks()->yourBuilderQueryHere()->get();
```
Upvotes: 2 [selected_answer] |
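Under the hood, `hasManyThrough` is a single join across the intermediate table. A sqlite3 sketch of the equivalent query (the minimal schema here is my own, loosely mirroring the question's tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE watchlists (id INTEGER PRIMARY KEY, user_id INTEGER);
    CREATE TABLE watched_books (id INTEGER PRIMARY KEY, watchlist_id INTEGER, title TEXT);
    INSERT INTO watchlists VALUES (1, 42), (2, 42), (3, 99);
    INSERT INTO watched_books VALUES (1, 1, 'Book A'), (2, 2, 'Book B'), (3, 3, 'Book C');
""")

# All books in all lists belonging to user 42, reached through the watchlists table
rows = conn.execute("""
    SELECT wb.title
    FROM watched_books AS wb
    JOIN watchlists AS wl ON wl.id = wb.watchlist_id
    WHERE wl.user_id = ?
    ORDER BY wb.id
""", (42,)).fetchall()
print(rows)  # [('Book A',), ('Book B',)]
```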
2018/03/20 | 767 | 2,390 | <issue_start>username_0: I have tried adding a where clause in my join statement, at the ON,
However, I am receiving syntax errors; I am not sure where to put it, as I need it to pull the data from a table called systemlookup.
```
DECLARE @OptionalModules TABLE (moduleid INT, name VarChar(200))
INSERT INTO @OptionalModules
SELECT CAST (LookupReference AS INT)
FROM dbo.systemlookup
left join @xml.nodes('//Modules/*') as organisation(license) on
organisation.license.value('local-name(.)', 'varchar(50)') =
case LookupReference
when '1' then 'a'
when '2' then 'b'
when '6' then 'c'
when '8' then 'd'
when '9' then 'e'
when '10' then 'f'
when '11' then 'g'
when '12' then 'h'
when '13' then 'i'
when '14' then 'j'
when '15' then 'k'
when '16' then 'l'
when '17' then 'm'
when '18' then 'n'
when '20' then 'o'
when '21' then 'p'
when '22' then 'q'
when '23' then 'r'
when '24' then 's'
when '25' then 't'
when '26' then 'u'
when '27' then 'v'
when '28' then 'w'
when '29' then 'x'
when '31' then 'y'
when '32' then 'z'
when '33' then 'aa'
when '16016' then 'bb'
end
```
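For what it's worth, a CASE expression is syntactically valid inside a JOIN's ON clause. A sqlite3 sketch of the question's mapping idea with a made-up minimal schema (SQL Server accepts the same `ON ... = CASE ... END` shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE systemlookup (LookupReference TEXT);
    CREATE TABLE modules (name TEXT);
    INSERT INTO systemlookup VALUES ('1'), ('2'), ('6');
    INSERT INTO modules VALUES ('a'), ('c');
""")

# The CASE maps each lookup reference onto the module name it should match
rows = conn.execute("""
    SELECT s.LookupReference
    FROM systemlookup AS s
    JOIN modules AS m
      ON m.name = CASE s.LookupReference
                    WHEN '1' THEN 'a'
                    WHEN '2' THEN 'b'
                    WHEN '6' THEN 'c'
                  END
    ORDER BY s.LookupReference
""").fetchall()
print(rows)  # [('1',), ('6',)] -- '2' maps to 'b', which has no module row
```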
2018/03/20 | 481 | 1,667 | <issue_start>username_0: I have CSS:
```
@media (max-width: 480px) {
#creating_products {
max-height: 400px;
overflow-y: scroll;
}
}
```
HTML:
```
{section name=foo start=1 loop=11 step=1}
* ........
{/section}
```
When I tested this code on my Android, part of the "creating\_products" div was hidden, but my scroll bar did not appear. How can I fix this problem?<issue_comment>username_1: whereHas allows you to query in relationships. More information on this can be found at <https://laravel.com/docs/5.6/eloquent-relationships#querying-relationship-existence>
```
$Watchlists = WatchedBooks::whereHas('watchlist', function($query) use ($user_id){
$query->where('user_id', $user_id)
})->get();
```
The script above gets all WatchedBooks that are associated with a Watchlist that is owned by a user ($user\_id).
Upvotes: 0 <issue_comment>username_2: Laravel has a beautiful solution for this: you can add a `->hasManyThrough` relation to the user model. You can find more information about this type of relation in [the Laravel documentation about Eloquent relationships](https://laravel.com/docs/5.5/eloquent-relationships#has-many-through "Eloquent: Relationships - Laravel - The PHP Framework For Web Artisans").
The user model will look like this:
```
class User extends Model {
[...]
public function watchedBooks()
return $this->hasManyThrough('App\WatchedBook', 'App\Watchlist');
}
[...]
}
```
Then you can get all the `WatchedBooks`, associated with the user, using
```
$user->watchedBooks;
```
or
```
$user->watchedBooks()->yourBuilderQueryHere()->get();
```
Upvotes: 2 [selected_answer] |
2018/03/20 | 514 | 2,099 | <issue_start>username_0: I'm trying to switch my local Pycharm + Django to docker based dev env. I run on mac and use Docker-Compose (few dockers: my django app, some db and nginx).
Everything runs fine: code changes are immediately reflected in Docker, and the correct packages are available while coding in PyCharm. Once docker-compose is started, a list of running containers is shown in PyCharm's Docker plugin window; for each container I can see its log, properties, port and volume bindings.
The interpreter seems to be configured properly, with Docker Compose (app at [{my path}/docker-compose.yaml]) as the project interpreter and a path mapping from ->/code to the correct folder in Docker.
The problem is that I can't debug it. When I select Docker-Compose in the debug dialog, the only option available is Run, not Debug.
[](https://i.stack.imgur.com/ieI9S.png)
It doesn't look to me like a Docker/Compose issue, but rather like the PyCharm plugin not allowing debug runs with a docker-compose run/debug configuration.
Any idea how to debug it with PyCharm?<issue_comment>username_1: You are supposed to use the standard Python run configuration, not the Docker-specific one. The latter is used to build containers, start docker-compose services and so on.
PyCharm will auto-start your services, mount your code, and execute it inside a container with a pure Python run configuration, as long as a Docker-based interpreter is selected.
Perhaps the docs can help: <https://www.jetbrains.com/help/pycharm/using-docker-compose-as-a-remote-interpreter.html>
Upvotes: 5 [selected_answer]<issue_comment>username_2: There is currently a problem running dockerized django with an entry point, so to give an idea of the problem if someone else lands here with that! if you are facing this problem create a different docker-compose.dev.yml file just for debugging without an entry point. This is the work around for now
Upvotes: 2 <issue_comment>username_3: Following is a video I created showing different ways to run docker in Pycharm and debug it using Pycharm breakpoints.
<https://youtu.be/NMFAkrZTciM>
Upvotes: -1 |
2018/03/20 | 389 | 1,263 | <issue_start>username_0: I have two arrays, one of them is sparse. I'd like to perform what is essentially an *outer join* on them.
```
const a = ['a', 'b']
a[3] = 'c'
// a is Array(4) [ "a", "b", <1 empty slot>, "c" ]
R.zipSparse([1, 2, 3, 4], a)
// expected output: [[1, 'a'], [2, 'b'], [3, undefined], [4, 'c']]
```
Any suggestions on how to approach this functionally in Ramda or plain Javascript welcome.<issue_comment>username_1: ```
function zipSparse(arr, join) {
return arr.map((i, index) => [i, join[index]])
}
```
To explain what is going on here: we use the map iterator to loop over each item in `arr` and pair it with the item at the same index in the second array, `join`.
>
> <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map>
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is actually the behavior of `R.zip`.
More specifically, `R.zip` combines things up to the smaller of the reported lengths of the inputs. Those both have length `4`, so it will return a four-element array in response. Index `3` will contain `[3, undefined]`.
Of course this is quite easy to write yourself, without Ramda, as the answer from @JoeWarner and the comment from @dsfq show.
Upvotes: 3 |
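For comparison, Python's `itertools.zip_longest` does the same pairing; since Python lists have no holes, the sparse slot is modeled explicitly as `None`:

```python
from itertools import zip_longest

nums = [1, 2, 3, 4]
a = ['a', 'b', None, 'c']  # index 2 stands in for the empty slot

pairs = [list(p) for p in zip_longest(nums, a)]
print(pairs)  # [[1, 'a'], [2, 'b'], [3, None], [4, 'c']]
```

Unlike plain `zip`, `zip_longest` also pads with `None` when the lists differ in length instead of truncating.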
2018/03/20 | 649 | 2,159 | <issue_start>username_0: I've tried
```
'field1'+'field2'+'field3' as combination
```
and tried:
```
((((field_1||'_')||field_2_yes)||'_')||field_3_yes) AS combination
```
tried:
```
field1 || field2 as combination
```
I've also tried CONCAT and COALESCE, but can't seem to get them to work.
Also, I am querying SQL Server through Excel, if that helps.
Expected output is a column named `combination` with the `field1field2field3` data as the rows within the combination column. I am not trying to create the literal string `field1field2field3`, but to combine the data from these fields into a string and display it in a new column.
The issue is with the data types as to why these fields are not combining.<issue_comment>username_1: Could you please try without apostrophes?:
```
select combination = (field1 + field2)
from table
```
In case you need to put a space between the values, if field1 and field2 is varchar, then you can do:
```
select combination = (field1 + ' ' + field2)
from table
```
If field1 and field2 is not varchar, then you can cast like below and use in your main query:
```
cast(field1 as varchar(50))
```
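The cast-then-concatenate advice, shown end to end with Python's built-in sqlite3 (SQLite concatenates with `||` instead of `+`, but the CAST step is the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field1 INTEGER, field2 TEXT, field3 TEXT)")
conn.execute("INSERT INTO t VALUES (7, 'foo', 'bar')")

# CAST turns the numeric column into text so it can join the text columns
row = conn.execute(
    "SELECT CAST(field1 AS TEXT) || field2 || field3 AS combination FROM t"
).fetchone()
print(row[0])  # 7foobar
```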
Upvotes: 3 [selected_answer]<issue_comment>username_2: So do you want to display two fields as one field?
Field1 = 'hello'
field2 = 'world'
```
select field1 + field2 as concatfield from table
select concat(field1, field2) as concatfield from table
```
This will give you output like that:
```
helloworld
```
If you want to add space then try this:
```
select field1 + ' ' + field2 as concatfield from table
select concat(field1, ' ', field2) as concatfield from table
```
This will give you output like that:
```
hello world
```
And then exactly the same if you want to update a field in the DB:
```
update table
set concatfield = field1 + ' ' + field2 --or concat(field1, ' ', field2)
--where 1=1
```
Upvotes: 2 <issue_comment>username_3: It is pretty basic
```
declare @Field varchar(10) = 'value'
select @Field + @Field + @Field as comb
select '@Field' + '@Field' + '@Field' as comp
comb
------------------------------
valuevaluevalue
comp
------------------
@Field@Field@Field
```
Upvotes: 1 |
2018/03/20 | 767 | 2,620 | <issue_start>username_0: Hi I am building a Django application and during this I am doing test uploads into my database, then deleting the data. I understand that when I do this, the next insert will 'pick up where it left off' when inserting the auto-increment primary key int.
I.e:
I insert 3 rows into my table 'Sample':
```
auto_id | sample
1 | JSUDH172
2 | QIUWJ185
3 | PNMSY111
```
When i delete these, and enter them in again, it will start at 4:
```
auto_id | sample
4 | JSUDH172
5 | QIUWJ185
6 | PNMSY111
```
I understand this is built in to stop Django overwriting primary keys, but when the primary keys are no longer there, it (mainly from a superficial point of view) annoys me that it doesn't reset. I am uploading thousands of rows into certain tables during this development, so sometimes it starts hundreds/thousands in.
I know I can always restart the project 'from scratch' after development is finished, but I was wondering whether anyone had any useful workarounds or if people just left it as it is (if they have the same view as me)
thanks<issue_comment>username_1: Could you please try without aposthropes?:
```
select combination = (field1 + field2)
from table
```
In case you need to put a space between the values, if field1 and field2 is varchar, then you can do:
```
select combination = (field1 + ' ' + field2)
from table
```
If field1 and field2 is not varchar, then you can cast like below and use in your main query:
```
cast(field1 as varchar(50))
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: So do you want only to display 2 fields as a one field?
Field1 = 'hello'
field2 = 'world'
```
select field1 + field2 as concatfield from table
select concat(field1, field2) as concatfield from table
```
This will give you output like that:
```
helloworld
```
If you want to add space then try this:
```
select field1 + ' ' + field2 as concatfield from table
select concat(field1, ' ', field2) as concatfield from table
```
This will give you output like that:
```
hello world
```
And then exacly the same if you want to update a field in DB
```
update table
set concatfield = field1 + ' ' + field2 --or concat(field1, ' ', field2)
--where 1=1
```
Upvotes: 2 <issue_comment>username_3: It is pretty basic
```
declare @Field varchar(10) = 'value'
select @Field + @Field + @Field as comb
select '@Field' + '@Field' + '@Field' as comp
comb
------------------------------
valuevaluevalue
comp
------------------
@Field@Field@Field
```
Upvotes: 1 |
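Returning to the question: the counter lives in the database, not in Django. With SQLite (Django's default development backend) the behaviour is easy to reproduce, and for throwaway dev data the sequence can be reset through the sqlite_sequence table; other backends need their own ALTER statement instead. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sample (auto_id INTEGER PRIMARY KEY AUTOINCREMENT, sample TEXT)")
conn.executemany("INSERT INTO sample (sample) VALUES (?)",
                 [("JSUDH172",), ("QIUWJ185",), ("PNMSY111",)])

conn.execute("DELETE FROM sample")
conn.execute("INSERT INTO sample (sample) VALUES ('JSUDH172')")
after_delete = conn.execute("SELECT auto_id FROM sample").fetchone()
print(after_delete)  # (4,) -- the counter kept going after the delete

# Dev-only reset: clear the stored sequence, then ids start from 1 again
conn.execute("DELETE FROM sample")
conn.execute("DELETE FROM sqlite_sequence WHERE name = 'sample'")
conn.execute("INSERT INTO sample (sample) VALUES ('JSUDH172')")
after_reset = conn.execute("SELECT auto_id FROM sample").fetchone()
print(after_reset)  # (1,)
```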
2018/03/20 | 427 | 1,441 | <issue_start>username_0: Suppose I have two list of `T`, I need to remove from list `A` all the elements that are not in list `B`. I did this:
```
A.RemoveAll(item => !B.Contains(item));
```
this is working pretty well, but if list `B` does not contain any elements, this code removes all the items from list `A`, and it shouldn't. Why does that happen?
```
A.RemoveAll(item => B.Any() && !B.Contains(item));
```
Upvotes: -1 <issue_comment>username_2: This happens because list B doesn't contain any items. And you are saying remove everything from A that doesn't exist in B. Effectively saying remove everything because B doesn't have anything
Upvotes: 0 <issue_comment>username_3: Well it happens because if `B` is empty, for **all** elements in `A` the expression `!B.Contains(item)` is `true`.
Try this:
```
A.RemoveAll(item => B.Count > 0 && !B.Contains(item));
```
or better avoid iterating at all if `B` is empty:
```
if (B.Count > 0) A.RemoveAll(item => !B.Contains(item));
```
Upvotes: 0 <issue_comment>username_4: Well, it removes everyhing because the condition is true for all items in list A.
Since you want to remove only items from A if list B is **not** empty:
```
if(B.Any())
A.RemoveAll(item => !B.Contains(item));
```
Upvotes: 1 |
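The same guard, sketched in Python; building a set first also avoids the repeated linear `Contains` scan:

```python
def remove_missing(a, b):
    # Keep only items of a that exist in b, but treat an empty b as "keep everything"
    if not b:
        return list(a)
    keep = set(b)
    return [item for item in a if item in keep]

print(remove_missing([1, 2, 3, 4], [2, 4]))  # [2, 4]
print(remove_missing([1, 2, 3, 4], []))      # [1, 2, 3, 4]
```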
2018/03/20 | 499 | 1,531 | <issue_start>username_0: I am trying to get the last record from my tinyDB, so I want to make a query that looks like that:
`"SELECT * FROM table ORDER BY id DESC LIMIT 1"`
which should give me the last row. though I can't figure out how to do it with TinyDB.<issue_comment>username_1: If you want to order db by time descending for example:
```
od = sorted(db.all(), key=lambda k: k['time'])
print(od[-1])
```
Upvotes: 2 <issue_comment>username_2: How about:
```
table = db.table('table_name')
table.get(doc_id=len(table))
```
See the way of `doc_id` [here](https://tinydb.readthedocs.io/en/latest/usage.html#using-document-ids)
Upvotes: 0 <issue_comment>username_3: Using a `Query` and doing an update :
```
with TinyDB('db.json') as db:
my_table = db.table('a_table_name')
my_query= Query()
    first_of_table_for_this_query = my_table.search(my_query.some_field == some_value)[0]
    last_of_table_for_this_query = my_table.search(my_query.some_field == some_value)[-1]
    # example of use: updating only the first and last matching documents
my_table.update({'some_field': some_value+42}, doc_ids=[
first_of_table_for_this_query,
last_of_table_for_this_query
])
```
Upvotes: 0 <issue_comment>username_4: According to the documentation, the following would return the doc id of the final element in the db in TinyDB 4.7.0:
```
el = db.all()[-1]
record = db.get(doc_id=el.doc_id)
```
[Using Document IDs](https://tinydb.readthedocs.io/en/latest/usage.html#using-document-ids)
Upvotes: 1 |
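One caveat with the `doc_id=len(table)` approach above: after deletions, doc ids are no longer contiguous. A more robust idea is to take the largest doc id. Sketched below with a plain dict standing in for a TinyDB table, so it runs without TinyDB installed; with TinyDB itself, something like `max(d.doc_id for d in table)` should express the same thing:

```python
# Hypothetical (doc_id -> record) mapping standing in for a TinyDB table
# in which doc_ids 2 and 4-6 were deleted.
docs = {1: {"v": "a"}, 3: {"v": "c"}, 7: {"v": "g"}}
last_id = max(docs)   # 7 -- note len(docs) == 3 would point at a gap
print(docs[last_id])  # {'v': 'g'}
```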
2018/03/20 | 622 | 1,672 | <issue_start>username_0: I am currently working on a steganography assignment on how to embed a secret image in a cover image using Wang's algorithm. Basically I just want to change, for example:
**3d matrix**
```
A(:,:,1) = [5 7 8; 0 1 9; 4 3 6];
A(:,:,2) = [1 0 4; 3 5 6; 9 8 7];
A(:,:,3) = [7 9 3; 4 5 9; 1 9 9];
To
Str = '578019436104356987793459199'
```
and also vice-versa if anybody can help out.<issue_comment>username_1: If you want to order db by time descending for example:
```
od = sorted(db.all(), key=lambda k: k['time'])
print(od[-1])
```
Upvotes: 2 <issue_comment>username_2: How about:
```
table = db.table('table_name')
table.get(doc_id=len(table))
```
See the way of `doc_id` [here](https://tinydb.readthedocs.io/en/latest/usage.html#using-document-ids)
Upvotes: 0 <issue_comment>username_3: Using a `Query` and doing an update :
```
with TinyDB('db.json') as db:
my_table = db.table('a_table_name')
my_query= Query()
first_of_table_for_this_query = my_table.search(my_query.some_field == some_value)[1]
last_of_table_for_this_query = my_table.search(my_query.some_field == some_value)[-1]
# exemple of use for updating first inserted and last inserted only
my_table.update({'some_field': some_value+42}, doc_ids=[
first_of_table_for_this_query,
last_of_table_for_this_query
])
```
Upvotes: 0 <issue_comment>username_4: According to the documentation, the following would return the doc id of the final element in the db in TinyDB 4.7.0:
```
el = db.all()[-1]
record = db.get(doc_id=el.doc_id)
```
[Using Document IDs](https://tinydb.readthedocs.io/en/latest/usage.html#using-document-ids)
Upvotes: 1 |
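The conversion the question asks for can be sketched in Python, with nested lists standing in for the 3-D array. Note that the expected string walks each page row by row; in MATLAB, which linearizes column-major, the equivalent traversal would presumably need a `permute(A,[2 1 3])` first:

```python
# Pages of the 3-D matrix from the question, each page as a list of rows.
A = [
    [[5, 7, 8], [0, 1, 9], [4, 3, 6]],  # A(:,:,1)
    [[1, 0, 4], [3, 5, 6], [9, 8, 7]],  # A(:,:,2)
    [[7, 9, 3], [4, 5, 9], [1, 9, 9]],  # A(:,:,3)
]

# Flatten page by page, row by row, into one digit string.
s = "".join(str(d) for page in A for row in page for d in row)
print(s)  # 578019436104356987793459199

# And back: regroup the digit string into 3x3 pages.
digits = [int(c) for c in s]
back = [[digits[p*9 + r*3 : p*9 + r*3 + 3] for r in range(3)] for p in range(3)]
assert back == A
```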
2018/03/20 | 517 | 1,654 | <issue_start>username_0: I am a beginner in Cordova and Phonegap, can anybody guide me how to run Cordova and Phonegap application after making changes without build.phonegap. Because my index.html not run directly on web.
Is there any other way to run and build the application quickly? Whenever I make any small change, I have to upload the zip file to phonegap.build again.
2018/03/20 | 626 | 1,930 | <issue_start>username_0: When I try to compile, in emacs, my file with
```
ocamlbuild -use-ocamlfind main.native
```
I have this error :
```
ocamlbuild -use-ocamlfind main.native
Failure: ocamlfind not found on path, but -no-ocamlfind not used.
```
When I compile it with the same command but in my terminal (zsh), I have no problem. I think there's a problem with emacs not finding `ocamlfind` but I don't understand what this problem may be. (Everything is installed with opam, I have OCamlbuild and OCamlfind in my installed packages, I did `eval $(opam config env)`)
When I do `M-x getenv PATH` I have exactly the same PATH I have in my terminal.
2018/03/20 | 871 | 2,479 | <issue_start>username_0: ```
var str=" hello world! , this is Cecy ";
var r1=/^\s|\s$/g;
console.log(str.match(r1));
console.log(str.replace(r1,''))
```
Here, the output I expect is "hello world!,this is Cecy" — that is, remove the whitespace at the beginning and end of the string, and the whitespace before and after non-word characters.
The output I have right now is "hello world! , this is Cecy"; I don't know how to remove the whitespace before and after "," while keeping the whitespace between "o" and "w" (and between other word characters).
p.s. I feel I could use a group here, but don't know how<issue_comment>username_1: the method that you are looking for is trim()
<https://www.w3schools.com/Jsref/jsref_trim_string.asp>
```
var str = " Hello World! ";
console.log(str.trim())
```
Upvotes: 1 <issue_comment>username_2: You can use the RegEx [`^\s|\s$|(?<=\B)\s|\s(?=\B)`](https://regex101.com/r/AgdBXv/1)
* `^\s` handles the case of a space at the beginning
* `\s$` handles the case of a space at the end
* `(?<=\B)\s` handles the case of a space after a non-word char
* `\s(?=\B)` handles the case of a space before a non-word char
[**Demo.**](https://regex101.com/r/AgdBXv/1)
***EDIT :*** As username_4 pointed out, `\b` is a zero-length assertion, therefore you don't need any lookbehind nor lookahead.
*Here is a shorter, simpler version :* `^\s|\s$|\B\s|\s\B`
```js
var str = " hello world! , this is Cecy ";
console.log(str.replace(/^\s|\s$|\B\s|\s\B/g, ''));
```
Upvotes: 2 <issue_comment>username_3: you can use the following command
```
str.replace(/ /g,'')
```
Upvotes: -1 <issue_comment>username_4: ### Method 1
[See regex in use here](https://regex101.com/r/JUlTed/3)
```
\B\s+|\s+\B
```
* `\B` Matches a location where `\b` doesn't match
* `\s+` Matches one or more whitespace characters
```js
const r = /\B\s+|\s+\B/g
const s = ` hello world! , this is Cecy `
console.log(s.replace(r, ''))
```
---
### Method 2
[See regex in use here](https://regex101.com/r/JUlTed/2).
```
(?!\b\s+\b)\s+
```
* `(?!\b +\b)` Negative lookahead ensuring what follows doesn't match
+ `\b` Assert position as a word boundary
+ `\s+` Match any whitespace character one or more times
+ `\b` Assert position as a word boundary
* `\s+` Match any whitespace character one or more times
```js
const r = /(?!\b\s+\b)\s+/g
const s = ` hello world! , this is Cecy `
console.log(s.replace(r, ''))
```
Upvotes: 4 [selected_answer] |
2018/03/20 | 667 | 2,457 | <issue_start>username_0: I'm trying to insert `values` using MySQL in Node.js. I have written the following code and installed MySQL support via npm, but I cannot INSERT INTO the table due to this problem.
My code;
```
var mysql = require('mysql');
var values=randomValueHex(8);
var sql = "INSERT INTO `activationkeys`(`activationKey`, `productId`)
VALUES ( values ,'3')";
con.query(sql, function (err, result) {
if (err) throw err;
console.log("1 record inserted");
});
```
My Error on terminal:
Error: ER\_PARSE\_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''3')'
How can i solve this problem?<issue_comment>username_1: `values` currently means nothing the way you're using it in the SQL query.
What is `values`? Is it an integer, or a string?
Nevertheless, you need to concatenate the values variable within the string.
```
var sql = "INSERT INTO `activationkeys`(`activationKey`, `productId`) VALUES (" + values + ",'3')";
```
Upvotes: 0 <issue_comment>username_2: Instead of
```
var sql = "INSERT INTO `activationkeys`(`activationKey`, `productId`)
VALUES ( values ,'3')";
```
Please try this
```
var sql = "INSERT INTO `activationkeys`(`activationKey`, `productId`)
VALUES ( " + values + " ,'3')";
```
provided values is a string
Upvotes: 1 <issue_comment>username_3: Why are you using back quote for the column names? We do not need that in column names. You can simply create your dynamic sql query by using `+` operator on the column values like this:
```
var sql = "INSERT INTO activationkeys (activationKey, productId) VALUES ( " + values + " ,'3')";
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: One more correction: the `values` variable has to be written like this: `'" + values + "'`. That is the correct way to embed a string variable. If you instead write `" + values + "`, you may get an error like `Error: ER_BAD_FIELD_ERROR: Unknown column 'xxxxxx' in 'field list'`. My code is:
```
var sql = "INSERT INTO `activationkeys`(`activationKey`, `productId`) VALUES ( '" + values + "' , 3 )";
```
Upvotes: 0 <issue_comment>username_5: This is a simple way to write an SQL query in your JavaScript code, using a template literal.
Try it once:
```
const QUERY = `INSERT INTO users (firstName, lastName) VALUES ("${firstName}", "${lastName}")`;
```
Note: wrap the complete query in back ticks.
Upvotes: 0 |
2018/03/20 | 1,158 | 4,052 | <issue_start>username_0: I have the following vuex getters
```
import { isEmpty } from 'lodash'
// if has token, we assume that user is logged in our system
export const isLogged = ({ token }) => !isEmpty(token)
// get current user data
export const currentUser = ({ user }) => user
export const timeLimit = ({ token_ttl }) => token_ttl
export const getToken = ({ token }) => token
```
I have the following computed Vuex properties in a child component
```
name: "ProfilePic",
computed: {
...mapGetters(['currentUser']),
url() {
return new String('').concat(window.location.protocol, '//', window.location.hostname , ':8000', '/games/create/?search=player_id:').valueOf()
}
},
mounted(){
console.log(this.currentUser)
},
watch: {
currentUser(value, old){
console.re.log('ok', value, old);
new QRCode(document.querySelector(".profile-userpic"), {
text: this.url + value,
width: 128,
height: 128,
colorDark : "#000000",
colorLight : "#ffffff",
correctLevel : QRCode.CorrectLevel.H
})
}
}
```
the parent
```
import ProfilePic from '../../components/general/qrcode.vue'
export default {
name: 'CcDashboard',
methods : {
...mapActions(['checkUserToken', 'setMessage'])
},
computed: {
...mapGetters(['isLogged'])
},
mounted() {
this.checkUserToken().then(tkn => this.$store.dispatch('setMessage', {type: 'success', message: 'Your Game Starts Now!!!!'})).catch(err => this.$store.dispatch('setMessage', {type: 'error', message: ['Your time is up!']}))
},
components: {
'profile-pic': ProfilePic
}
}
```
Store
```
Vue.use(Vuex)
export default new Vuex.Store({
state,
mutations,
actions,
modules,
plugins,
getters,
strict: false, //process.env.NODE_ENV !== 'production',
})
```
I'm using VuexPersist with localforage
```
localforage.config({
name: 'vuevue'
});
const vuexLocalStorage = new VuexPersist({
key: 'vuex', // The key to store the state on in the storage provider.
storage: localforage, // or window.sessionStorage or localForage
// Function that passes the state and returns the state with only the objects you want to store.
// reducer: state => ({ Collect: state.Collect, Auth: state.Auth}),
// Function that passes a mutation and lets you decide if it should update the state in localStorage.
// filter: mutation => (true)
modules: ['Auth','Collect'],
asyncStorage: true
})
export const RESTORE_MUTATION = vuexLocalStorage.RESTORE_MUTATION
// // create a new object and preserv original keys
export default [...app.plugins, vuexLocalStorage.plugin]
```
Executing the `console.log` in `mounted()`, I get:
```
{__ob__: Observer}current_points: 45email: "<EMAIL>"id: 2name: "<NAME>"total_points: 45__ob__: Observerdep: Dep {id: 20, subs: Array(4)}value: {id: 2, name: "<NAME>", email: "<EMAIL>", current_points: 45, total_points: 45, …}
```
However, when running the logic, `this.currentUser.id` returns **undefined** rather than a value (which the logged object clearly has).
Is it that I need to "wait" for it to properly populate from the store? Or do I need to call it from `$store.dispatch()`?<issue_comment>username_1: I guess what you want here is to watch your computed property `itemGetter`, and when it is no longer null/undefined, trigger the method `createProductSet`? <https://v2.vuejs.org/v2/guide/computed.html>
```
computed : {
...mapGetters([
'itemGetter'
])
},
watch : {
itemGetter(newVal, oldVal) {
if (newVal == null) // loose equality also catches undefined
return
this.createProductSet(newVal)
}
},
methods : {
createProductSet(id){
// logic
}
}
```
Upvotes: 2 <issue_comment>username_2: Figured it out and had to do with a bug with one of my packages[enter link description here](https://github.com/championswimmer/vuex-persist/issues/15)
Upvotes: 1 [selected_answer] |
2018/03/20 | 847 | 3,840 | <issue_start>username_0: It seems it's possible to remove packages (or package versions) for Pypi: [How to remove a package from Pypi](https://stackoverflow.com/questions/20403387/how-to-remove-a-package-from-pypi)
This can be a problem if you've completed development of some software and expect to be able to pull the dependencies from Pypi when building.
What are the best practices to protect against this?<issue_comment>username_1: You can create your own local PyPI Mirror(where you update, add, remove packages on your policies) and use your local PyPI mirror for future package installations/update. A full mirrored PyPI will consume about 30GB of storage, so all you need is 30-40GB storage space.
There are many ways to create local pypi mirror. e.g [How to roll my own pypi?](https://stackoverflow.com/questions/1235331/how-to-roll-my-own-pypi)
Upvotes: 1 <issue_comment>username_2: ### Unnecessary\_intro=<< SKIP\_HERE
In fact, this is a much deeper problem than just preventing another leftpad instance.
In general, practices of dependencies management are defined by community norms which are often implicit. Python is especially bad in this sense, because not just it is implicit about these norms, its package management tools also built on premise of no guarantee of dependencies compatibility. Since the emergence of PyPI, neither package installers guaranteed to install compatible version of dependencies. If package A requires packages B==1.0 and C==1.0, and C requires B==0.8, after installing A you might end up having B==0.8, i.e. A dependencies won't be satisfied.
### SKIP\_HERE
0. Choose wisely, use signals.
------------------------------
Both developers and package maintainers are aware of the situation. People try to minimize chance of such disruption by selecting "good" projects, i.e. having a healthy community where one person is unlikely to make such decisions, and capable to revive the project in an unlikely case it happens.
Design and evaluation of these signals is an area of active research. Most used factors are number of package contributors (bus factor), healthy practices (tests, CI, quality of documentation), number of forks on GitHub, stars etc.
1. Copy package code under your source tree
-------------------------------------------
This is the simplest scenario if you do not expect package to change a lot but afraid of package deletion or breaking changes. It also gives you advantage of customization the package for your needs; on the other side, package updates now will take quite a bit of effort.
2. Republish a copy of the package on PyPI
------------------------------------------
I do not remember exact names, but some high profile packages were republished by other developers under different names. In this case all you need is just to copy package files and republish, which is presumably less expensive than maintaining a copy under your source tree. It looks dirty, though, and I would discourage from doing this.
3. A private PyPI mirror
------------------------
A cleaner but more expensive version of #2.
4. Another layer of abstraction
-------------------------------
Some people select few competing alternatives and create an abstraction over them with a capability to use different "backends". The reason for this usually is not to cope with possible package deletion, and depending on complexity of the interfaces it might be quite expensive. An example of such abstraction is Keras, an abstraction for neural networks which provides a consistent interface to both tensorflow and theano backends
5. There are more options
-------------------------
More exotic options include distributing snapshots of virtual environments/containers/VMs, reimplementing the packgage (especially if because of licensing issues) etc.
Upvotes: 3 [selected_answer] |
2018/03/20 | 813 | 3,763 | <issue_start>username_0: I have created an android app on android studio with firebase database.
Now I want to develop an administrative website through which the admin will control all the operations of the app.
I want to know how to connect the app to the website. Also, should the database be connected to the app or the website?<issue_comment>username_1: You can create your own local PyPI Mirror(where you update, add, remove packages on your policies) and use your local PyPI mirror for future package installations/update. A full mirrored PyPI will consume about 30GB of storage, so all you need is 30-40GB storage space.
There are many ways to create local pypi mirror. e.g [How to roll my own pypi?](https://stackoverflow.com/questions/1235331/how-to-roll-my-own-pypi)
Upvotes: 1 <issue_comment>username_2: ### Unnecessary\_intro=<< SKIP\_HERE
In fact, this is a much deeper problem than just preventing another leftpad instance.
In general, practices of dependencies management are defined by community norms which are often implicit. Python is especially bad in this sense, because not just it is implicit about these norms, its package management tools also built on premise of no guarantee of dependencies compatibility. Since the emergence of PyPI, neither package installers guaranteed to install compatible version of dependencies. If package A requires packages B==1.0 and C==1.0, and C requires B==0.8, after installing A you might end up having B==0.8, i.e. A dependencies won't be satisfied.
### SKIP\_HERE
0. Choose wisely, use signals.
------------------------------
Both developers and package maintainers are aware of the situation. People try to minimize chance of such disruption by selecting "good" projects, i.e. having a healthy community where one person is unlikely to make such decisions, and capable to revive the project in an unlikely case it happens.
Design and evaluation of these signals is an area of active research. Most used factors are number of package contributors (bus factor), healthy practices (tests, CI, quality of documentation), number of forks on GitHub, stars etc.
1. Copy package code under your source tree
-------------------------------------------
This is the simplest scenario if you do not expect package to change a lot but afraid of package deletion or breaking changes. It also gives you advantage of customization the package for your needs; on the other side, package updates now will take quite a bit of effort.
2. Republish a copy of the package on PyPI
------------------------------------------
I do not remember exact names, but some high profile packages were republished by other developers under different names. In this case all you need is just to copy package files and republish, which is presumably less expensive than maintaining a copy under your source tree. It looks dirty, though, and I would discourage from doing this.
3. A private PyPI mirror
------------------------
A cleaner but more expensive version of #2.
4. Another layer of abstraction
-------------------------------
Some people select few competing alternatives and create an abstraction over them with a capability to use different "backends". The reason for this usually is not to cope with possible package deletion, and depending on complexity of the interfaces it might be quite expensive. An example of such abstraction is Keras, an abstraction for neural networks which provides a consistent interface to both tensorflow and theano backends
5. There are more options
-------------------------
More exotic options include distributing snapshots of virtual environments/containers/VMs, reimplementing the packgage (especially if because of licensing issues) etc.
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,070 | 2,562 | <issue_start>username_0: I have data in Excel in the following format:
```
Column A Column B
20/03/2018 300
21/03/2018 200
22/03/2018 100
23/03/2018 90
24/03/2018 300
25/03/2018 200
26/03/2018 100
27/03/2018 50
28/03/2018 90
29/03/2018 100
30/03/2018 110
31/03/2018 120
```
I would like to get the first date after which the value of B never drops below 99 again, chronologically. In the example above, that would be the 29th of March.
If I try to get it with:
```
=INDEX(A:A,MATCH(99,B1:B12,-1))
```
the value returned is 22/03/2018 as it is the first occurrence found, searched from top to bottom.
In this case it would be perfect to be able to do a reverse match(e.g. a match that searches from bottom to top of the range) but this option is not available. I have seen that it is possible to do reverse matches with the lookup function but in that case I need to provide a value that is actually in my data set (99 would not work).
The workaround I have found is to add a third column like the following (with the minimum of the upcoming value of B going down) and index match on top it.
```
Column A Column B Column C
20/03/2018 300 50
21/03/2018 200 50
22/03/2018 100 50
23/03/2018 90 50
24/03/2018 300 50
25/03/2018 200 50
26/03/2018 100 50
27/03/2018 50 50
28/03/2018 90 90
29/03/2018 100 100
30/03/2018 110 110
31/03/2018 120 120
```
Is there a way of achieving this without a third column?<issue_comment>username_1: `=INDEX(A1:A12,LARGE(IF(B1:B12<=99,ROW(B1:B12)+1),1))`
This is an array formula (`Ctrl`+`Shift`+`Enter` while still in the formula bar)
Builds an array of the row 1 below results that are less than or equal to 99. Large then returns the largest row number for index.
Upvotes: 1 <issue_comment>username_2: The `AGGREGATE` function is great for problems like these:
```
=AGGREGATE(14,4,(B2:B13<99)*A2:A13,1)+1
```
*What are those numeric arguments?*
>
> * `14` tells the function to replicate a `LARGE` function
> * `4` to ignore no values (this function can ignore error values and other things)
>
>
>
More info [here](https://www.techonthenet.com/excel/formulas/aggregate.php). I checked it works below:
[](https://i.stack.imgur.com/37fs7.png)
If your dates aren't always consecutive, you'll need to add a bit more to the function:
```
=INDEX(A1:A12,MATCH(AGGREGATE(14,6,(B1:B12<99)*A1:A12,1),A1:A12,0)+1)
```
Upvotes: 3 [selected_answer] |
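Both formulas implement the same idea: find the last row where column B dips below 99, then step one row (one day) forward. The logic, sketched in Python with the question's data:

```python
# Question's data: dates 20..31 March and the B column.
dates = ["%02d/03/2018" % d for d in range(20, 32)]
b = [300, 200, 100, 90, 300, 200, 100, 50, 90, 100, 110, 120]

last_bad = max(i for i, v in enumerate(b) if v < 99)  # last index with B < 99
print(dates[last_bad + 1])  # 29/03/2018
```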
2018/03/20 | 1,557 | 4,618 | <issue_start>username_0: I'm trying to learn haskell by solving some online problems and training exercises.
Right now I'm trying to make a function that'd remove adjacent duplicates from a list.
>
> Sample Input
>
>
> "acvvca"
>
>
> "1456776541"
>
>
> "abbac"
>
>
> "aabaabckllm"
>
>
>
>
> ---
>
>
> Expected Output
>
>
> ""
>
>
> ""
>
>
> "c"
>
>
> "ckm"
>
>
>
My first thought was to make a function that'd simply remove the first instances of adjacent duplicates and rebuild the list.
```
module Test where
removeAdjDups :: (Eq a) => [a] -> [a]
removeAdjDups [] = []
removeAdjDups [x] = [x]
removeAdjDups (x : y : ys)
| x == y = removeAdjDups ys
| otherwise = x : removeAdjDups (y : ys)
```
---
```
*Test> removeAdjDups "1233213443"
"122133"
```
---
This func works for first found pairs.
So now I need to apply same function over the result of the function.
Something I think foldl can help with but I don't know how I'd go about implementing it.
Something along the line of
```
removeAdjDups' xs = foldl (\acc x -> removeAdjDups x acc) xs
```
---
Also is this approach the best way to implement the solution or is there a better way I should be thinking of?<issue_comment>username_1: Start in last-first order: first remove duplicates from the tail, then check if head of the input equals to head of the tail result (which, by this moment, won't have any duplicates, so the only possible pair is head of the input vs. head of the tail result):
```
main = mapM_ (print . squeeze) ["acvvca", "1456776541", "abbac", "aabaabckllm"]
squeeze :: Eq a => [a] -> [a]
squeeze (x:xs) = let ys = squeeze xs in case ys of
(y:ys') | x == y -> ys'
_ -> x:ys
squeeze _ = []
```
Outputs
```
""
""
"c"
"ckm"
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't see how `foldl` could be used for this. (Generally, `foldl` pretty much combines the disadvantages of `foldr` and `foldl'`... those, or `foldMap`, are the folds you should normally be using, not `foldl`.)
What you seem to intend is: *repeating* the `removeAdjDups`, until no duplicates are found anymore. The repetition is a job for
```
iterate :: (a -> a) -> a -> [a]
```
like
```
Prelude> iterate removeAdjDups "1233213443"
["1233213443","122133","11","","","","","","","","","","","","","","","","","","","","","","","","","","",""...
```
This is an infinite list of ever reduced lists. Generally, it will *not* converge to the empty list; you'll want to add some termination condition. If you want to remove as many dups as necessary, that's the *fixpoint*; it can be found in a very similar way to how you implemented `removeAdjDups`: compare neighbor elements, just this time in the list of *reductions*.
[username_1's suggestion](https://stackoverflow.com/a/49386171/745903) to handle recursive duplicates is much better though, it avoids unnecessary comparisons and traversing the start of the list over and over.
Upvotes: 2 <issue_comment>username_3: List comprehensions are often overlooked. They are, of course syntactic sugar but some, like me are addicted. First off, strings are lists as they are. This functions could handle any list, too as well as singletons and empty lists. You can us map to process many lists in a list.
```
(\l -> [ x | (x,y) <- zip l $ (tail l) ++ " ", x /= y]) "abcddeeffa"
```
"abcdefa"
Upvotes: 1 <issue_comment>username_4: I don't see how to use `foldl` either. Maybe that's because, if you want to fold something here, you have to use `foldr`.
```
main = mapM_ (print . squeeze) ["acvvca", "1456776541", "abbac", "aabaabckllm"]
-- I like the name in @username_1 answer
squeeze = foldr (\ x xs -> if xs /= "" && x == head(xs) then tail(xs) else x:xs) ""
```
Let's analyze this. The idea is taken from @username_1's answer: go from right to left. If `f` is the lambda function, then by definition of `foldr`:
```
squeeze "abbac" = f 'a' (f 'b' (f 'b' (f 'a' (f 'c' ""))))
```
By definition of `f`, `f 'c' "" = 'c' : "" = "c"` since `xs == ""`. Next char from the right: `f 'a' "c" = 'a' : "c" = "ac"` since `'a' /= head "c" = 'c'`. `f 'b' "ac" = "bac"` for the same reason. But `f 'b' "bac" = tail "bac" = "ac"` because `'b' == head "bac"`. And so forth...
Bonus: by replacing `foldr` with `scanr`, you can see the whole process:
```
Prelude> squeeze' = scanr (\ x xs -> if xs /= "" && x == head(xs) then tail(xs) else x:xs) ""
Prelude> zip "abbac" (squeeze' "abbac")
[('a',"c"),('b',"ac"),('b',"bac"),('a',"ac"),('c',"c")]
```
Upvotes: 0 |
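Operationally, the accepted right-to-left solution behaves like a stack: push each character, and cancel it against the top of the stack when they match. A Python transcription of that idea, checked against the question's samples:

```python
def squeeze(s):
    out = []  # stack of surviving characters
    for ch in s:
        if out and out[-1] == ch:
            out.pop()        # adjacent duplicate: cancel both
        else:
            out.append(ch)
    return "".join(out)

for s in ["acvvca", "1456776541", "abbac", "aabaabckllm"]:
    print(repr(squeeze(s)))  # '' '' 'c' 'ckm'
```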
2018/03/20 | 1,189 | 4,290 | <issue_start>username_0: I have a requirement where I need to create a list of objects from another list based on 2 conditions.
**Either the object should be in session or should be active.**
(This question is vaguely related to: [Create list of object from another using java8 streams](https://stackoverflow.com/questions/35650974/create-list-of-object-from-another-using-java8-streams))
```
class Person
{
private String firstName;
private String lastName;
private String personId;
//Getters and Setters
}
class UserInfo
{
private String uFirstName;
private String uLastName;
private String userId;
//Constructor, Getters and Setters
}
```
Main class:
```
Boolean isActiveUser = ..... [getting from upstream systems]
HttpSession session = request.session();
List sessionUserIds = session.getAttribute("users");
List personList = criteria.list(); //Get list from db.
//CURRENT COCDE
List userList = new ArrayList<>();
for(Person person: personList) {
UserInfo userInfo = new UserInfo(person);
if(sessionUserIds.contains(person.getPersonId())){
userInfo.setLoginTime(""); //Get LoginTime from upstream
userInfo.setIdleTime(""); // Get IdleTime from Upstream
userList.add(userInfo);
} else {
if(isActiveUser) {
userList.add(userInfo);
}
}
}
// Trying to GET IT converted to JAVA-8
List userList = personList.stream()
.filter(p -> ??)
.map( ??
.collect(Collectors.toList());
```
Any help would be appreciated.<issue_comment>username_1: Not a very good idea but just a try.
***if personId becomes userId then you can try as below:***
```
List userList = new ArrayList<>();
personList.stream()
    .map(p -> new UserInfo(p)) // person becomes user here
    .forEach(u -> {
        if (sessionUserIds.contains(u.getUserId())) {
            u.setLoginTime("");
            u.setIdleTime("");
            userList.add(u);
        } else if (isActiveUser) {
            userList.add(u);
        }
    });
```
Upvotes: 0 <issue_comment>username_2: You can create the stream, then check if the user is active or is in the session, then if the user is in the session use the setters and finally collect.
```
List userList = personList.stream()
.map(UserInfo::new)
.filter(userInfo -> isUserActive || sessionUserIds.contains(userInfo.getUserId()))
.map(userInfo -> {
if(sessionUserIds.contains(userInfo.getUserId())) {
userInfo.setLoginTime("");
userInfo.setIdleTime("");
}
return userInfo;
})
.collect(Collectors.toList());
```
Or if you don't mind to use the setters in the filter.
```
List userList = personList.stream()
.map(UserInfo::new)
.filter(userInfo -> {
if (sessionUserIds.contains(userInfo.getUserId())) {
userInfo.setLoginTime("");
userInfo.setIdleTime("");
return true;
}
return isUserActive;
})
.collect(Collectors.toList());
```
Upvotes: 1 <issue_comment>username_3: Do you want to adjust the data for all those users which are in the session user id list, but not for the others? Maybe the following is ok for you (assuming, as you said elsewhere, that `getPersonId()` returns the same as `getUserId()`):
```
List userList;
userList = personList.stream()
.filter(p -> isActiveUser || sessionUserIds.contains(p.getPersonId()))
.map(UserInfo::new)
.collect(Collectors.toList());
userList.stream()
.filter(u -> sessionUserIds.contains(u.getUserId())
.forEach(u -> {
u.setLoginTime(""); // Get LoginTime from upstream
u.setIdleTime(""); // Get IdleTime from Upstream
});
```
Be sure to read the [`foreach` javadoc](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#peek-java.util.function.Consumer-).
Alternatively, using only one stream:
```
List<UserInfo> userList = personList.stream()
        .filter(p -> isActiveUser || sessionUserIds.contains(p.getPersonId()))
        .map(p -> {
            UserInfo u = new UserInfo(p);
            if (sessionUserIds.contains(u.getUserId())) {
                u.setLoginTime(""); // Get LoginTime from upstream
                u.setIdleTime("");  // Get IdleTime from upstream
            }
            return u;
        })
        .collect(Collectors.toList());
```
I recommend filtering before you map. To me it's more readable, though that's certainly not true for everyone. When mentally stepping through the code, I can discard some elements early and concentrate on what matters in the rest of the pipeline.
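To make that ordering concrete, here is a small self-contained sketch; the `Person`/`User` classes below are hypothetical stand-ins for the question's types, not the real ones:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterBeforeMapDemo {
    // Hypothetical stand-ins for the question's Person and UserInfo types.
    static class Person {
        final int id;
        final boolean active;
        Person(int id, boolean active) { this.id = id; this.active = active; }
    }

    static class User {
        final int id;
        User(int id) { this.id = id; }
    }

    // Filter first, so we only construct User objects for people we keep.
    static List<User> activeUsers(List<Person> people) {
        return people.stream()
                .filter(p -> p.active)     // discard inactive people early
                .map(p -> new User(p.id))  // then convert only the survivors
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
                new Person(1, true), new Person(2, false), new Person(3, true));
        System.out.println(activeUsers(people).size()); // prints 2
    }
}
```

Besides readability, filtering first also avoids allocating mapped objects that would be thrown away immediately.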
Upvotes: 2
<issue_start>username_0: In my SQL database the `WorkDay` field is stored as a string, while in the model it is a nullable `DayOfWeek`, i.e. `public DayOfWeek? WorkDay { get; set; }`. Converting the database `WorkDay` field into the model field generates an error like:
>
> Could not translate expression 'Table(StaffSchedule)' into SQL and
> could not treat it as a local expression.
>
>
>
I have also tried splitting this into three separate LINQ steps:
1) Retrieve the data from the StaffSchedule table.
2) Apply the select operation on it.
3) Apply the AddRange operation on the selected data.
```
results.AddRange(dataContext.StaffSchedules
.Where(x => !x.Excluded)
.Where(x => x.DistrictID == districtId && x.District.Active && (x.Position == positionTeacher || x.Position == positionDirector || x.Position == positionAssistant || x.Position == positionAssistantDirector))
.Select(x => new Core.StaffSchedule()
{
ID = x.ID,
Staff = x.Staff.SelectSummary(),
Position = (StaffPosition)Enum.Parse(typeof(StaffPosition), x.Position, true),
Class = refs.Class,
District = x.District.SelectSummary(),
Time = null,
Reoccurring = false,
Inherited = true,
ReoccourringStart = x.ReoccourringStart,
ReoccourringEnd = x.ReoccourringEnd,
WorkDay = x.WorkDay == null ? (DayOfWeek?)null : (DayOfWeek)Enum.Parse(typeof(DayOfWeek), x.WorkDay, true)
}));
```
This is the conversion from the `string` column to the nullable `DayOfWeek` field, which causes the error in my case:
```
WorkDay = x.WorkDay == null ? (DayOfWeek?)null : (DayOfWeek)Enum.Parse(typeof(DayOfWeek), x.WorkDay, true)
```
I have already gone through the link below:
* [How to solve issue "Could not translate expression ...into SQL and could not treat it as a local expression."](https://stackoverflow.com/questions/21590215/how-to-solve-issue-could-not-translate-expression-into-sql-and-could-not-tre)