date | nb_tokens | text_size | content
---|---|---|---|
2018/03/20 | 390 | 1,592 | <issue_start>username_0: I am new to Selenium and I have configured Cucumber with Maven and created one feature file, but the feature file's green icon is not displayed. Have I missed some configuration?
[Cucumber feature file](https://i.stack.imgur.com/198nk.png)<issue_comment>username_1: There is a plugin called **Natural 0.7.6** and it has a Cucumber editor. If you download and install it from the Eclipse Marketplace, then you will see the green icon for the Cucumber feature file.
If you want to check the detailed steps, then please refer this article - [Add Cucumber feature file](http://www.automationtestinghub.com/add-cucumber-feature-file/)
Upvotes: 1 <issue_comment>username_2: That is because your version of IntelliJ is not recognizing the feature extension. To fix that, in IntelliJ, go to File --> Settings --> Editor --> File Types --> select "Cucumber Scenario" and add \*.feature to its Registered Patterns.

The Cucumber green icon will be displayed the next time you create a feature file.
Upvotes: 0 <issue_comment>username_3: After installing the Natural plugin, open the .feature file and right-click on it.
Then choose Open With > Others > from the internal editors select the Cucumber Editor > then, above the OK button, check "use it for all "\*.feature" files".
Then you can see all **.feature** files displayed in green.
Upvotes: 0 <issue_comment>username_4: Inside the feature folder, right-click on the feature file -> Open With -> editor.
Colour will appear if the rest of the settings are correct. For me, it worked.
Upvotes: 0 |
2018/03/20 | 1,011 | 3,063 | <issue_start>username_0: I'm trying to use `openpyxl` to read 1 column out of an Excel file until it hits an empty cell, where it needs to stop. But I can't get it working. This is my code so far:
```
import openpyxl
import os

def main():
    filePath = os.getcwd() + "\file.xlsx"
    wb = openpyxl.load_workbook(filename=filePath, read_only=True)
    sheet = wb["Sheet1"]
    for row in range(sheet.max_row):
        if(sheet.cell(row+1,1).value == None):
            break
        print(sheet.cell(row+1,1).value)

if __name__ == "__main__":
    main()
```
But this results in the following error:
>
> Traceback (most recent call last):
>
> File "someProgram.py", line 27, in main()
> File "someProgram.py", line 15, in main
>
> if(sheet.cell(row+1,1).value == None):
>
> File "C:\Python34\lib\openpyxl\worksheet\worksheet.py", line 349,
>
> in cell coordinate = coordinate.upper().replace('$', '')
>
> AttributeError: 'int' object has no attribute 'upper'
>
>
><issue_comment>username_1: You can use an iterator which will be handier in such situations.
```
wb = load_workbook(pathToYourFile, use_iterators=True)
sheet = wb.worksheets["Sheet1"]
row_count = sheet.max_row
for row in range(sheet.max_row):
    if(sheet.cell(row+1,1).value == None):
        break
    print(sheet.cell(row+1,1).value)
```
Upvotes: -1 <issue_comment>username_2: I see that I am able to print all the values in the column using the same code, but with a slight modification to the line below (basically, escape the '\' character by adding one more '\'):
```
import openpyxl
import os

def main():
    filePath = os.getcwd() + "\\file.xlsx"
    print(filePath)
    print(os.getcwd)
    wb = openpyxl.load_workbook(filename=filePath, read_only=True)
    sheet = wb["Sheet1"]
    for row in range(sheet.max_row):
        if(sheet.cell(row+1,1).value == None):
            break
        print(sheet.cell(row+1,1).value)

if __name__ == "__main__":
    main()
```
Upvotes: 0 <issue_comment>username_3: The problem is with this line:
```
if(sheet.cell(row+1,1).value == None):
```
When called with a single positional argument, `sheet.cell` expects a `str` cell name such as `A1`, not an `int`.
When you pass `int` values, you need to specify them via the `row` and `column` keyword arguments, such as:
```
sheet.cell(row=row+1, column=1).value
```
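For reference, a minimal sketch of the corrected loop (the same file layout as in the question is assumed):
```
import openpyxl
import os

def main():
    file_path = os.path.join(os.getcwd(), "file.xlsx")
    wb = openpyxl.load_workbook(filename=file_path, read_only=True)
    sheet = wb["Sheet1"]
    for row in range(1, sheet.max_row + 1):
        value = sheet.cell(row=row, column=1).value  # row/column as keyword arguments
        if value is None:  # stop at the first empty cell
            break
        print(value)

if __name__ == "__main__":
    main()
```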
Upvotes: 2 [selected_answer]<issue_comment>username_4: I've created a small xlsx file myself with only one column, filled with integers in the following order: 5, 4, 3, 2, 10 and 11. The code below seems to work, for me at least:
```
column_index = 0  # 0 = A, 1 = B, ...
sheet_name = "Sheet1"
sheet = wb[sheet_name]
for r in sheet.rows:
    value = r[column_index].value
    print("value", value)
    if value is None:
        break
```
Output:
```
value 5
value 4
value 3
value 2
value 10
value 11
```
If I delete one intermediate value in that column, say 2, then the loop stops, like so:
```
value 5
value 4
value 3
value None
```
I hope this helps with your problem.
Upvotes: 0 |
2018/03/20 | 1,247 | 3,145 | <issue_start>username_0: I want to wrap after the second td of my table.
Here is my code:
```
| | | | |
| --- | --- | --- | --- |
| Row 1 col 1 | Row 1 col 2 | Row 1 col 3 | Row 1 col 4 |
| Row 2 col 1 | Row 2 col 2 | Row 2 col 3 | Row 2 col 4 |
| Row 3 col 1 | Row 3 col 2 | Row 3 col 3 | Row 3 col 4 |
| Row 4 col 1 | Row 4 col 2 | Row 4 col 3 | Row 4 col 4 |
```
I want to see my code like this:
```
| | |
| --- | --- |
| Row 1 col 1 | Row 1 col 2 |
| Row 1 col 3 | Row 1 col 4 |
| Row 2 col 1 | Row 2 col 2 |
| Row 2 col 3 | Row 2 col 4 |
| Row 3 col 1 | Row 3 col 2 |
| Row 3 col 3 | Row 3 col 4 |
| Row 4 col 1 | Row 4 col 2 |
| Row 4 col 3 | Row 4 col 4 |
```
I tried this way:
```
$(".test tr td:eq(2)").wrap('|
');
```
But it's not working properly.<issue_comment>username_1: One option is to loop through your `<td>`s and wrap them by 2, then using `.html()` update the table `tbody`.
```js
$(function() {
  var td = $(".test td");
  var html = "";
  for (var x = 0; x < td.length; x += 2) {
    html += "<tr>";
    html += $(td[x]).prop('outerHTML');
    html += $(td[x + 1]).prop('outerHTML');
    html += "</tr>";
  }
  $(".test tbody").html(html);
});
```
```html
| | | | |
| --- | --- | --- | --- |
| Row 1 col 1 | Row 1 col 2 | Row 1 col 3 | Row 1 col 4 |
| Row 2 col 1 | Row 2 col 2 | Row 2 col 3 | Row 2 col 4 |
| Row 3 col 1 | Row 3 col 2 | Row 3 col 3 | Row 3 col 4 |
| Row 4 col 1 | Row 4 col 2 | Row 4 col 3 | Row 4 col 4 |
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: So our final goal is to add `</tr><tr>` after the second `td` ends. If you try code like `$(".test tr td:eq(1)").after('</tr><tr>');`, it won't work. It will add the element like below.
[](https://i.stack.imgur.com/6FBA4.jpg)
That is because with jQuery you can't directly add a closing tag on its own like that. You can find more articles explaining why if you google it. Now, coming to our problem, here is how we can fix the issue.
Thanks to <NAME> for giving the solution for the same kind of situation with `ul` and `li` elements [here](https://stackoverflow.com/a/9886743/3607064). In jQuery, you can use [nextAll()](http://api.jquery.com/nextAll/) with [andSelf()](http://api.jquery.com/andSelf/) to get the elements you want to move, then create a new `<tr>` and use [append()](http://api.jquery.com/append/) to relocate the elements. I refactored his solution to fix our issue like below.
```js
$("table.test tr td:nth-child(3)").each(function(){
$("|").insertAfter($(this).parent())
.append($(this).nextAll().andSelf());
});
```
```css
body {
font-family: Verdana, Arial, sans-serif;
font-size: 12px;
}
```
```html
| | | | |
| --- | --- | --- | --- |
| Row 1 col 1 | Row 1 col 2 | Row 1 col 3 | Row 1 col 4 |
| Row 2 col 1 | Row 2 col 2 | Row 2 col 3 | Row 2 col 4 |
| Row 3 col 1 | Row 3 col 2 | Row 3 col 3 | Row 3 col 4 |
| Row 4 col 1 | Row 4 col 2 | Row 4 col 3 | Row 4 col 4 |
```
[**Here**](http://jsfiddle.net/stLW4/219/) is the JSFiddle version, if you want to workout some other stuff.
Upvotes: 1 |
2018/03/20 | 587 | 2,324 | <issue_start>username_0: ```
class CustomManager(models.Manager):
    def get_query_set(self):
        queryset = super(CustomManager, self).get_query_set()
        return queryset.filter(
            models.Q(expiration_date__gte=datetime.date.today()) |
            models.Q(
                expiration_date__gte=datetime.date.today() - datetime.timedelta(days=40),
                is_invoice_emailed=True
            )
        )

class Subscription(models.Model):
    ....
    objects = CustomManager()
    default = models.Manager()
```
When I access `Subscription.objects.all()`, it returns all the records in the db without filtering. But if I use the query below:
```
queryset = Subscription.objects.all()
queryset.filter(
    models.Q(expiration_date__gte=datetime.date.today()) |
    models.Q(
        expiration_date__gte=datetime.date.today() - datetime.timedelta(days=40),
        is_invoice_emailed=True
    )
)
```
It is returning filtered results. Why?
I'm using django==1.11.11, Python 2.7 and PostgreSQL as the db.
Please help. Thanks.<issue_comment>username_1: The `all()` method on a manager just delegates to `get_queryset()`, and it is designed to return all the objects in the db.
If you use a filter method like `filter()` or `exclude()`, you already have the QuerySet, and it returns only the objects that match the filter condition.
You can learn about queryset on
[django queryset documentation here](https://docs.djangoproject.com/en/2.0/topics/db/queries/)
Upvotes: 0 <issue_comment>username_2: Your `get_query_set()` method should be spelled `get_queryset()`.
You may also use QuerySet directly instead of Manager:
```
class CustomQuerySet(models.QuerySet):
    def get_result(self):
        return self.filter(
            models.Q(expiration_date__gte=datetime.date.today()) |
            models.Q(
                expiration_date__gte=datetime.date.today() - datetime.timedelta(days=40),
                is_invoice_emailed=True
            )
        )

class Subscription(models.Model):
    ...
    objects = CustomQuerySet.as_manager()
```
The pro of the above is that you no longer have to provide a Manager class.
From now you can use it like: `Subscription.objects.get_result()`
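For completeness, a minimal sketch of the manager approach with the corrected method name (a sketch assuming Django 1.11, as in the question):
```
import datetime

from django.db import models

class CustomManager(models.Manager):
    def get_queryset(self):  # note: get_queryset, not get_query_set
        queryset = super(CustomManager, self).get_queryset()
        return queryset.filter(
            models.Q(expiration_date__gte=datetime.date.today()) |
            models.Q(
                expiration_date__gte=datetime.date.today() - datetime.timedelta(days=40),
                is_invoice_emailed=True,
            )
        )
```
With this spelling, `Subscription.objects.all()` goes through the custom `get_queryset()` and returns the filtered rows.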
Upvotes: 2 [selected_answer] |
2018/03/20 | 423 | 1,299 | <issue_start>username_0: I'm trying to put an array into a textarea, one element per line. But every element is an array too, and I need to show it without commas. So far I can only put every row on its own line; the commas inside each row remain.
```js
var arr = [
  [1, 2],
  [3, 4],
  [5, 6]
];
outputText = document.getElementById('outputField');
outputText.value = arr.join('\n');
```<issue_comment>username_1: You could join the inner arrays with a space.
```js
var arr = [[1, 2], [3, 4], [5, 6]];
outputText = document.getElementById('outputField');
outputText.value = arr.map(a => a.join(' ')).join('\n');
```
Upvotes: 2 <issue_comment>username_2: Join inner arrays with space using `Array.prototype.map()` on the outer array:
```js
var arr = [
  [1, 2],
  [3, 4],
  [5, 6]
];
outputText = document.getElementById('outputField');
outputText.value = arr.map(inner => inner.join(' ')).join('\n');
```
Upvotes: 1 <issue_comment>username_3: Another solution would be to use [`Array#forEach`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach) and to `join` your sub-arrays with a single space:
```js
var arr = [
  [1, 2],
  [3, 4],
  [5, 6]
];
outputText = document.getElementById('outputField');
arr.forEach(e => outputText.value += e.join(' ') + '\n');
```
Upvotes: 0 |
2018/03/20 | 688 | 2,277 | <issue_start>username_0: I am writing more than 6 functions and saving them in my R project. Every time I start working on my project, I need to run each function manually, one by one. Is there a way to load all these functions automatically?<issue_comment>username_1: You have two options:
1. Create your own package and load it on the start-up, you will have all the function available. [A tutorial](https://cran.r-project.org/doc/contrib/Leisch-CreatingPackages.pdf)
2. Customize the R start-up to automatically load your R files containing your functions. [A tutorial](https://www.r-bloggers.com/fun-with-rprofile-and-customizing-r-startup/) and [an example](https://stackoverflow.com/questions/36914701/run-r-script-with-the-start-up-of-r)
Upvotes: 4 [selected_answer]<issue_comment>username_2: We can create a package in `R`
1. Bundle the functions and create a package - `yourpackage` - and then load the package:
```
library(yourpackage)
```
2. One example is [here](https://support.rstudio.com/hc/en-us/articles/200486488-Developing-Packages-with-RStudio)
3. Another resource is [here](https://hilaryparker.com/2014/04/29/writing-an-r-package-from-scratch/)
4. Yet another is [here](http://kbroman.org/pkg_primer/pages/build.html)
Upvotes: 1 <issue_comment>username_3: If you don't wish to take the package approach (which I agree is the best approach), you could stack all your functions on top of one another in an R script and source it on startup. One step instead of 6. You end up with all the functions in your .GlobalEnv.
Put this in an R script:
```
###Put in a script
eeee <- function(){
  cat("yay I'm a function")
}
ffff <- function(){
  cat("Aaaaaah a talking function")
}
```
If you use RStudio, the code would be as below. Otherwise change the source location. Do this in console (or in a script):
```
###Do this
source('~/.active-rstudio-document')
```
Then you can do:
```
eeee()
yay I'm a function
ffff()
Aaaaaah a talking function
```
Upvotes: 0 <issue_comment>username_4: You can run the below script before starting your work:
```
source_code_dir <- "./R/" #The directory where all source code files are saved.
file_path_vec <- list.files(source_code_dir, full.names = T)
for(f_path in file_path_vec){source(f_path)}
```
Upvotes: 0 |
2018/03/20 | 777 | 3,628 | <issue_start>username_0: *Effective Java 3rd Edition, Item 18: Favor composition over inheritance* describes an issue with using inheritance to add behavior to a class:
>
> A related cause of fragility in subclasses is that their superclass can acquire new methods in subsequent releases. Suppose a program depends for its security on the fact that all elements inserted into some collection satisfy some predicate. This can be guaranteed by subclassing the collection and overriding each method capable of adding an element to ensure that the predicate is satisfied before adding the element. This works fine until a new method capable of inserting an element is added to the superclass in a subsequent release. Once this happens, it becomes possible to add an "illegal" element merely by invoking the new method, which is not overridden in the subclass.
>
>
>
The recommended solution:
>
> Instead of extending an existing class, give your new class a private field that references an instance of the existing class... Each instance method in the new class invokes the corresponding method on the contained instance of the existing class and returns the results. This is known as *forwarding*, and the methods in the new class are known as *forwarding methods*... adding new methods to the existing class will have no impact on the new class... It's tedious to write forwarding methods, but you have to write the reusable forwarding class for each interface only once, and forwarding classes may be provided for you. For example, Guava provides forwarding classes for all of the collection interfaces.
>
>
>
My question is, doesn't the risk remain that methods could also be added **to the forwarding class**, thereby breaking the invariants of the subclass? How could an external library like Guava ever incorporate newer methods in forwarding classes without risking the integrity of its clients?<issue_comment>username_1: >
> doesn't the risk remain that methods could also be added to the forwarding class, thereby breaking the invariants of the subclass?
>
>
>
Composition is an alternative to inheritance, so when you use composition, there is no sub-class. If you add new public methods to the forwarding class (which may access methods of the contained instance), that means you want these methods to be used.
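The forwarding idea itself is language-agnostic; as a rough illustration (a sketch in Python rather than Java, with hypothetical names), a forwarding wrapper that guards an invariant looks like this:
```
class CheckedCollection:
    # Composition: wrap a collection instead of subclassing it.
    def __init__(self, inner, predicate):
        self._inner = inner          # the contained instance
        self._predicate = predicate  # invariant every element must satisfy

    def add(self, element):
        # Forwarding method: enforce the invariant, then delegate.
        if not self._predicate(element):
            raise ValueError("element violates the invariant")
        self._inner.append(element)

    def __iter__(self):
        # Read access is forwarded too.
        return iter(self._inner)
```
New methods later added to the wrapped collection's class can never bypass `add`, because clients only ever hold the wrapper.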
Upvotes: 2 <issue_comment>username_2: Because you are the owner of the forwarding class, only you can add new methods to it, thus maintaining the invariant.
Upvotes: 0 <issue_comment>username_3: The tacit assumption seems to be that *you* are the one writing the forwarding class, therefore you are in control of whether anything gets added to it. That's the common way of using composition over inheritance, anyway.
The Guava example seems to refer to [the Forwarding Decorators](https://github.com/google/guava/wiki/CollectionHelpersExplained#forwarding-decorators), which are explicitly designed to be inherited from. But they are just helpers to make it simpler to create these forwarding classes without having to define every method in the interface; they explicitly *don't* shield you from any methods being added in the future that you might need to override as well:
>
> Remember, by default, all methods forward directly to the delegate, so overriding `ForwardingMap.put` will not change the behavior of `ForwardingMap.putAll`. Be careful to override every method whose behavior must be changed, and make sure that your decorated collection satisfies its contract.
>
>
>
So, if I understood all this correctly, Guava is not such a great example.
Upvotes: 3 [selected_answer] |
2018/03/20 | 3,610 | 9,077 | <issue_start>username_0: I have this code:
```
let result = Object.values(response.data.reduce((r, { PO_NO, PO_LINE_NO, MATERIAL_NO, MATERIAL_NAME, PO_QTY, GRPO_QTY, GRPO_SHIPDATE }) => {
    r[PO_NO] = r[PO_NO] || { PO_NO, LINES: [] }
    r[PO_NO].LINES.push({
        LINE_NO: PO_LINE_NO,
        PO_QTY: PO_QTY,
        MATERIAL_NO: MATERIAL_NO,
        MATERIAL_NAME: MATERIAL_NAME,
        GRPO_QTY: GRPO_QTY,
        GRPO_SHIPDATE: GRPO_SHIPDATE
    })
    return r
}, {}))
```
This results in an array of nested objects. However, in the **LINES.push()** part, there are items that have the same line\_no, material\_no, material\_name, and po\_qty; the differences are grpo\_qty and grpo\_shipdate.
Is it possible to remove the shipdate and get the sum of grpo\_qty for items with the same line\_no, so that I only have a single row per line\_no of every po\_no?
Example of response.data content:
```
{
"PO_NO": 35159,
"LINES": [
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 160000,
"GRPO_SHIPDATE": "September, 21 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 320800,
"GRPO_SHIPDATE": "October, 07 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 19200,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 60000,
"GRPO_SHIPDATE": "September, 13 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 440000,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
}
]
},
```<issue_comment>username_1: Well I think you can just iterate over each entry and call a reduce function on the lines array. In this function you can create a unique key with properties that stay the same and then sum up the grpo\_qty values.
It could look something like this
```
result.forEach(entry => {
    entry.LINES = Object.values(entry.LINES.reduce((result, current) => {
        const uniqueKey = `${current.LINE_NO}-${current.MATERIAL_NO}-${current.PO_QTY}-${current.MATERIAL_NAME}`;
        if (!result[uniqueKey]) {
            result[uniqueKey] = {
                LINE_NO: current.LINE_NO,
                PO_QTY: current.PO_QTY,
                MATERIAL_NO: current.MATERIAL_NO,
                MATERIAL_NAME: current.MATERIAL_NAME,
                GRPO_QTY: 0,
            };
        }
        result[uniqueKey].GRPO_QTY += current.GRPO_QTY;
        return result;
    }, {}));
});
```
Maybe you can provide a jsfiddle so it is easier to test.
You could also imagine doing this in your original reduce: create the unique key there and sum up the values as well; then you just need to convert the object back to an array later.
Theoretically it would also be possible to write everything directly into an array and find the existing entry with array.find(), but personally I'd suggest using one of the reduce approaches. It is a lot cleaner, more performant and easier to read.
Upvotes: 0 <issue_comment>username_2: Writing the functions `matchLine` and `combineLine` helps us break `groupPoLines` down into an easier task - note, each function here will NOT mutate its inputs
```
const matchLine = (a, b) =>
  a.LINE_NO === b.LINE_NO
    && a.PO_QTY === b.PO_QTY
    && a.MATERIAL_NO === b.MATERIAL_NO

const combineLine = ({ GRPO_SHIPDATE:_, ...a }, b) =>
  ({ ...a, GRPO_QTY: a.GRPO_QTY + b.GRPO_QTY })

const groupPoLines = ({ LINES, ...po }) => ({
  ...po,
  LINES: LINES.reduce ((r, x) => {
    const i = r.findIndex (y => matchLine (x, y))
    if (i < 0)
      return [ ...r, x ]
    else
      return Object.assign (r, { [i]: combineLine (r[i], x) })
  }, [])
})
console.log (groupPoLines (data))
// { PO_NO: 35159,
// LINES:
// [ { LINE_NO: 15,
// PO_QTY: 500000,
// MATERIAL_NO: '130227',
// MATERIAL_NAME: 'T3-0381 Base Mold φ10 M2',
// GRPO_QTY: 500000 },
// { LINE_NO: 16,
// PO_QTY: 500000,
// MATERIAL_NO: '130227',
// MATERIAL_NAME: 'T3-0381 Base Mold φ10 M2',
// GRPO_QTY: 500000 } ] }
```
If you have an array of POs, you can simply `map` our new function over it
```
console.log (poList.map (po => groupPoLines (po)))
// [ { PO_NO: 1, LINES: [ ... ] }, { PO_NO: 2, LINES: [ ... ] } ]
```
Expand the snippet to verify it works
```js
const data = {
"PO_NO": 35159,
"LINES": [
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 160000,
"GRPO_SHIPDATE": "September, 21 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 320800,
"GRPO_SHIPDATE": "October, 07 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 19200,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 60000,
"GRPO_SHIPDATE": "September, 13 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 440000,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
}
]
}
const matchLine = (a, b) =>
a.LINE_NO === b.LINE_NO
&& a.PO_QTY === b.PO_QTY
&& a.MATERIAL_NO === b.MATERIAL_NO
const combineLine = ({ GRPO_SHIPDATE:_, ...a }, b) =>
({ ...a, GRPO_QTY: a.GRPO_QTY + b.GRPO_QTY })
const groupPoLines = ({ LINES, ...po }) => ({
...po,
LINES: LINES.reduce ((r, x) => {
const i = r.findIndex (y => matchLine (x, y))
if (i < 0)
return [ ...r, x ]
else
return Object.assign (r, { [i]: combineLine (r[i], x)})
}, [])
})
console.log (groupPoLines (data))
console.log ('---')
console.log ([data, data, data].map(d => groupPoLines (d)))
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: I would iterate over the array and sum the quantities:
```
const newLines = [],
      lineNumbers = [];
lines.forEach(line => {
    delete line.GRPO_SHIPDATE; /* delete shipdate */
    if (!lineNumbers.includes(line.LINE_NO)) {
        lineNumbers.push(line.LINE_NO); /* store current LINE_NO */
        newLines.push(line);
    } else {
        let toChange = newLines.filter(ln => { /* get current LINE_NO */
            return ln.LINE_NO === line.LINE_NO
        });
        toChange[0].GRPO_QTY = toChange[0].GRPO_QTY + line.GRPO_QTY;
    }
});
```
```js
const lines = [{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 160000,
"GRPO_SHIPDATE": "September, 21 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 320800,
"GRPO_SHIPDATE": "October, 07 2017 00:00:00"
},
{
"LINE_NO": 15,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 19200,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 60000,
"GRPO_SHIPDATE": "September, 13 2017 00:00:00"
},
{
"LINE_NO": 16,
"PO_QTY": 500000,
"MATERIAL_NO": "130227",
"MATERIAL_NAME": "T3-0381 Base Mold φ10 M2",
"GRPO_QTY": 440000,
"GRPO_SHIPDATE": "October, 20 2017 00:00:00"
}
]
const newLines = [],
lineNumbers = [];
lines.forEach(line => {
delete line.GRPO_SHIPDATE; /* delete shipdate */
if (!lineNumbers.includes(line.LINE_NO)) {
lineNumbers.push(line.LINE_NO); /* store current LINE_NO */
newLines.push(line);
} else {
let toChange = newLines.filter(ln => { /* get current LINE_NO */
return ln.LINE_NO === line.LINE_NO
});
toChange[0].GRPO_QTY = toChange[0].GRPO_QTY + line.GRPO_QTY;
}
});
console.log(newLines)
```
Upvotes: 0 |
2018/03/20 | 548 | 2,191 | <issue_start>username_0: I'm building protractor tests for my Angular 5 app (built with Angular CLI). My problem is that to build a test takes a lot of time - each time I run `ng e2e` I need to wait until the app is compiled. That happens lot of times, since there are lot of mistakes with incorrect selectors in my code.
I have a feeling that I'm doing something wrong. There must be a way to run Protractor tests faster... am I right?<issue_comment>username_1: It's completely normal. E2E tests do not need to be run at every commit.
The best is to run them before a release or tagging.
Once in place, you'll only change them when your code evolves.
On my project, they take about 3 min to pass with 80 tests.
Upvotes: 0 <issue_comment>username_2: To skip the Angular application compilation process, install Protractor globally:
```
npm install -g protractor
webdriver-manager update
```
Serve the application as normal with **ng serve** and run Protractor inside the project folder from the command line:
```
protractor
```
Also, you can modify the **package.json** file, adding a line to the 'scripts' section:
```
"scripts": {
...
"protractor": "protractor"
},
```
Then you can run your Protractor tests in another command line prompt as:
```
npm run protractor
```
In addition: to run tests that match a specific name you can invoke:
```
protractor --grep "test name"
```
Also, if you want to be more strict with test names and suite names, you can use `^` and `$` with the `--grep` option, but you should know that the suite name and test name are concatenated with a space. So to run specific tests from different suites, run a command such as:
```
protractor --grep "^Suite name1 test name1$|^Suite name2 other test name2$"
```
Upvotes: 2 <issue_comment>username_3: In a situation where you have a live environment, as in a running webapp, you can set up Protractor to run against the live app. I'm in a situation where I never need to `ng serve` and run against the compiled app, because we have multiple environments and I am not running tests against the production environment. In this situation it usually takes no time at all to develop and run tests.
Upvotes: 0 |
2018/03/20 | 741 | 2,504 | <issue_start>username_0: So this is my text file and I want to calculate the average for every year:
```
1969 324.000
1970 330.190
1970 326.720
1970 327.130
1971 326.970
1971 331.200
1971 329.430
1971 335.770
1971 337.600
```
And this is my code, which I got from another question, but it keeps erroring:
```
In[2]: result = {}

with open('filename.txt', 'r') as f:
    for line in f:
        year, num = line.split()
        year = int(year)
        num = float(num)
        try:
            result[year].append(num)
        except KeyError:
            result[year] = [num]

In[3]: for k, v in sorted(result.items()):
           print('Year: {}\tAverage: {:.2f}'.format(k, sum(v) / len(v)))
```
My error: Expected a indent block
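For reference, a minimal sketch of the same per-year averaging using `collections.defaultdict` (assuming the file layout shown above):
```
from collections import defaultdict

result = defaultdict(list)
with open('filename.txt') as f:
    for line in f:
        year, num = line.split()
        result[int(year)].append(float(num))

for year in sorted(result):
    values = result[year]
    print('Year: {}\tAverage: {:.2f}'.format(year, sum(values) / len(values)))
```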
|
2018/03/20 | 552 | 2,108 | <issue_start>username_0: When trying to run `grunt`, I get an error message:
```
Warning: Task "default" not found. Use --force to continue.
Aborted due to warnings.
```
I have already found several posts on this topic, and in each of them the problem was a missing comma. But in my case I have no idea what's wrong, I think I didn't miss any comma (btw, this content was copy/pasted from the internet).
```
module.exports = (grunt) => {
    grunt.initConfig({
        execute: {
            target: {
                src: ['server.js']
            }
        },
        watch: {
            scripts: {
                files: ['server.js'],
                tasks: ['execute'],
            },
        }
    });
    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-execute');
};
```
What could be the problem?<issue_comment>username_1: You didn't register the default task. Add this after the last loadNpmTasks call:
`grunt.registerTask('default', ['execute']);`
The second parameter is what you want to be executed from the config; you can put more tasks there.
Or you can run an existing task by providing its name as a parameter in the CLI:
`grunt execute`
With your config you can use `execute` and `watch`. See <https://gruntjs.com/api/grunt.task> for more information.
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you run *grunt* in your terminal, it is going to search for a "default" task, so you have to register the task to be executed. You define it with the *grunt.registerTask* method, whose first parameter is the name of your task and whose second parameter is an array of subtasks that it will run.
In your case, the code could be something like that:
```
...
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.loadNpmTasks('grunt-execute');
grunt.registerTask("default", ["execute", "watch"]);
...
```
In this way the "default" task will run the "execute" and "watch" tasks, respectively.
However [here](https://gruntjs.com/creating-tasks) you can find the documentation to create tasks with Grunt.
Hope it was helpful.
Upvotes: 0 |
2018/03/20 | 489 | 1,852 | <issue_start>username_0: I am trying to analyze social network data which contains `follower` and `followee` pairs. I want to find the **top 10 users** who have the most followees using MapReduce.
I made pairs of `userID` and `number_of_followee` with one MapReduce step.
With this data, however, I am not sure how to sort them in a distributed system.
I am not sure how a `priority queue` can be used in either the Mappers or the Reducers, since the data is distributed.
Can someone explain how I can use data structures to sort this massive data? Thank you very much.
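(For illustration, one common approach is to keep only a bounded top-10 per mapper and merge in a single reducer; a sketch in Python using `heapq`, with hypothetical names:)
```
import heapq

# Mapper side: each mapper keeps only its local top 10
# (followee_count, user_id) pairs - a bounded "priority queue".
def top10_local(pairs):
    return heapq.nlargest(10, pairs)

# Reducer side: a single reducer merges all local top-10 lists
# into the global top 10.
def top10_global(local_tops):
    merged = [pair for top in local_tops for pair in top]
    return heapq.nlargest(10, merged)
```
Because each mapper forwards at most 10 candidates, the lone reducer never has to sort the full data set.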
|
2018/03/20 | 548 | 1,821 | <issue_start>username_0: I installed torch using
```
git clone https://github.com/torch/distro.git ~/torch --recursive
```
In China, git does not seem to work, so I have to download the source as a zip file. What's the difference between the two?<issue_comment>username_1: "clone" uses the git software on your computer to download the source code and its entire version history.
"download zip" creates a zip file of just the current version of the source code for you to download - the project history is not included.
Upvotes: 1 <issue_comment>username_2: In China, `git` always works.
`github` was banned several years ago but now it works too.
There are two possible reasons why `git` doesn't seem to work:
1) You are working behind some proxy.
In this case you need to configure your git or your system to enable your proxy.
Also, on some systems, such as Ubuntu 16.04, `git` behind a proxy may have a bug: <https://askubuntu.com/questions/186847/error-gnutls-handshake-failed-when-connecting-to-https-servers/187199#187199>
To solve this issue, you just need to remove git and reinstall it from source.
2) `git clone` from `github` is too slow.
This is because there is an unstable network connection between China and `github`.
You can download a project using your browser. @username_1 has told you about the difference.
Upvotes: 0 <issue_comment>username_3: Configure Git to use a proxy. [Shadowsocks](https://github.com/shadowsocks/shadowsocks) is a fast tunnel proxy that helps you bypass firewalls in China.
On Linux & Windows
* `git config --global http.proxy 'socks5://127.0.0.1:1080'`
* `git config --global https.proxy 'socks5://127.0.0.1:1080'`
On Mac
* `git config --global http.proxy 'socks5://127.0.0.1:1086'`
* `git config --global https.proxy 'socks5://127.0.0.1:1086'`
Upvotes: 0 |
2018/03/20 | 2,584 | 8,499 | <issue_start>username_0: I am trying to generate a job using the Jenkins Job DSL and I am unable to find a way to trigger the Allure report as a publisher in a freestyle job:
```
job('ci') {
    publishers {
        allure([includeProperties: false, jdk: '', results: [[path: 'Result']]])
    }
}
```
I even tried searching on <https://jenkinsci.github.io/job-dsl-plugin> and on <https://job-dsl.herokuapp.com/> to get this working, but no luck.
I am getting the following error:
```
javaposse.jobdsl.dsl.DslScriptException: (script, line 5) No signature of
method: javaposse.jobdsl.dsl.helpers.publisher.PublisherContext.allure()
is applicable for argument types: (java.util.LinkedHashMap) values:
[[includeProperties:false, jdk:, results:[[path:Result]]]]
Possible solutions: mailer(java.lang.String), use([Ljava.lang.Object;),
asType(java.lang.Class)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader.runScriptEngine(AbstractDslScriptLoader.groovy:112)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader$_runScripts_closure1.doCall(AbstractDslScriptLoader.groovy:59)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader.runScripts(AbstractDslScriptLoader.groovy:46)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader$runScripts$0.callCurrent(Unknown Source)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader.runScript(AbstractDslScriptLoader.groovy:85)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader$runScript.call(Unknown Source)
at com.sheehan.jobdsl.DslScriptExecutor.execute(DslScriptExecutor.groovy:27)
at com.sheehan.jobdsl.ScriptExecutor$execute.call(Unknown Source)
at Ratpack$_run_closure1$_closure3$_closure7$_closure8.doCall(Ratpack.groovy:32)
at com.sun.proxy.$Proxy10.execute(Unknown Source)
at ratpack.exec.internal.DefaultPromise$1.success(DefaultPromise.java:42)
at ratpack.exec.Promise.lambda$null$9(Promise.java:304)
at ratpack.exec.Downstream$1.success(Downstream.java:73)
at ratpack.exec.Promise.lambda$null$9(Promise.java:304)
at ratpack.exec.Downstream$1.success(Downstream.java:73)
at ratpack.exec.internal.DefaultExecution$2.lambda$success$1(DefaultExecution.java:161)
at ratpack.exec.internal.DefaultExecution$SingleEventExecStream.exec(DefaultExecution.java:419)
at ratpack.exec.internal.DefaultExecution.exec(DefaultExecution.java:246)
at ratpack.exec.internal.DefaultExecution.intercept(DefaultExecution.java:240)
at ratpack.exec.internal.DefaultExecution.drain(DefaultExecution.java:220)
at ratpack.exec.internal.DefaultExecution.access$100(DefaultExecution.java:45)
at ratpack.exec.internal.DefaultExecution$SingleEventExecStream.resume(DefaultExecution.java:452)
at ratpack.exec.internal.DefaultExecution$2.success(DefaultExecution.java:161)
at ratpack.server.internal.RequestBody.complete(RequestBody.java:125)
at ratpack.server.internal.RequestBody.add(RequestBody.java:76)
at ratpack.server.internal.NettyHandlerAdapter.channelRead(NettyHandlerAdapter.java:84)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:83)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:163)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:155)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:950)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:818)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:338)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:254)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:742)
at ratpack.exec.internal.DefaultExecController$ExecControllerBindingThreadFactory.lambda$newThread$0(DefaultExecController.java:113)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
Caused by: groovy.lang.MissingMethodException: No signature of method: javaposse.jobdsl.dsl.helpers.publisher.PublisherContext.allure() is applicable for argument types: (java.util.LinkedHashMap) values: [[includeProperties:false, jdk:, results:[[path:Result]]]]
Possible solutions: mailer(java.lang.String), use([Ljava.lang.Object;), asType(java.lang.Class)
at javaposse.jobdsl.dsl.AbstractExtensibleContext.methodMissing(AbstractExtensibleContext.groovy:19)
at javaposse.jobdsl.dsl.AbstractContext.invokeMethod(AbstractContext.groovy)
at script$_run_closure1$_closure2.doCall(script:5)
at script$_run_closure1$_closure2.doCall(script)
at javaposse.jobdsl.dsl.ContextHelper.executeInContext(ContextHelper.groovy:16)
at javaposse.jobdsl.dsl.ContextHelper$executeInContext.call(Unknown Source)
at javaposse.jobdsl.dsl.ContextHelper$executeInContext.call(Unknown Source)
at javaposse.jobdsl.dsl.Job.publishers(Job.groovy:628)
at script$_run_closure1.doCall(script:3)
at javaposse.jobdsl.dsl.JobParent.processItem(JobParent.groovy:104)
at javaposse.jobdsl.dsl.JobParent.freeStyleJob(JobParent.groovy:46)
at javaposse.jobdsl.dsl.JobParent$freeStyleJob$0.callCurrent(Unknown Source)
at javaposse.jobdsl.dsl.JobParent$freeStyleJob$0.callCurrent(Unknown Source)
at javaposse.jobdsl.dsl.JobParent.job(JobParent.groovy:38)
at javaposse.jobdsl.dsl.DslFactory$job.callCurrent(Unknown Source)
at script.run(script:1)
at script$run.call(Unknown Source)
at script$run.call(Unknown Source)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader.runScript(AbstractDslScriptLoader.groovy:132)
at javaposse.jobdsl.dsl.AbstractDslScriptLoader.runScriptEngine(AbstractDslScriptLoader.groovy:106)
... 49 more
```
Can someone give me a snippet for Allure?<issue_comment>username_1: I found it. I need to use the Dynamic DSL for this.
Note: this cannot be tested on the command line or in the playground; it works on Jenkins only.
Here is the dynamic DSL for a freestyle job with the Allure report configuration:
```
publisher {
    allure {
        results {
            resultsConfig {
                path('Result')
            }
        }
    }
}
```
Upvotes: 0 <issue_comment>username_2: <https://docs.qameta.io/allure/#_job_dsl_plugin>
```
// default
publishers {
    allure(['allure-results'])
}

// advanced
publishers {
    allure(['first-results', 'second-results']) {
        jdk('java7')
        commandline('1.4.18')
        buildFor('UNSTABLE')
        includeProperties(true)
        property('allure.issues.tracker.pattern', 'http://tracker.company.com/%s')
        property('allure.tests.management.pattern', 'http://tms.company.com/%s')
    }
}
```
Upvotes: 1 <issue_comment>username_3: To be able to use Allure with the Job DSL I had to add the `allure-jenkins-plugin` to my `build.gradle` as follows:
```
dependencies {
    testPlugins 'ru.yandex.qatools.allure:allure-jenkins-plugin:2.30.2'
}
```
Note: what can help is making sure Allure is available on your Jenkins, which you can check by going to <https://url-of-your-jenkins.org/plugin/job-dsl/api-viewer/index.html> and looking for "allure".
Hope this helps anyone who struggled with the error!
Upvotes: 0 |
2018/03/20 | 1,722 | 5,427 | <issue_start>username_0: I just wanted to know if there is a way to improve this for loop by somehow skipping those `if`s.
The var `String` can have more parameters and their order can be arbitrary.
In order to replace each param with a real value, I need:
1. To split it
2. To check which parameter it is and in which position
Here is a synthetic example of how I thought it:
```
String = "{FullDate}_{Month}_{Day}_{Year}_{ElementID}_{ElementCD}"
String_split = String.split("_")
for params in range(len(String_split)):
if "FullDate" in String_split[params]:
# Do something
elif "Name" in String_split[params]:
# Do something
elif "ElementID" in String_split[params]:
# Do something
elif "ElementCD" in String_split[params]:
# Do something
elif "Year" in String_split[params]:
# Do something
elif "Day" in String_split[params]:
# Do something
elif "Month" in String_split[params]:
# Do something
```
UPDATE: This is what I would like to accomplish:
```
# Default values
FullDate = now().format("yyyy-MM-dd_HH:mm:ss")
Name = "John"
ElementID = "Apple"
ElementCD = "01Appxz"
Year = now().format("yyyy")
Day = now().format("dd")
Month = now().format("MM")
############################

String = "{FullDate}_{Month}_{Day}_{Year}_{ElementID}_{ElementCD}"
String_split = String.split("_")

for params in range(len(String_split)):
    if "FullDate" in String_split[params]:
        Report_Name = Report_Name + FullDate + "_"
    elif "Name" in String_split[params]:
        Report_Name = Report_Name + Name + "_"
    elif "ElementID" in String_split[params]:
        Report_Name = Report_Name + ElementID + "_"
    elif "ElementCD" in String_split[params]:
        Report_Name = Report_Name + ElementCD + "_"
    elif "Year" in String_split[params]:
        Report_Name = Report_Name + Year + "_"
    elif "Day" in String_split[params]:
        Report_Name = Report_Name + Day + "_"
    elif "Month" in String_split[params]:
        Report_Name = Report_Name + Month + "_"

# Report_Name must return default values, ordered by the String variable
# (eg: FullDate in 1st position, Month in 2nd position, etc.)
# >> "1999-01-01_10:10:29_01_01_1999_Apple_01Appxz"
# If the String variable changes the params order to
# String = "{Year}_{Month}_{ElementCD}_{FullDate}_{ElementID}_{Day}"
# Report_Name should return
# >> "1999_01_01Appxz_1999-01-01_10:10:29_Apple_01"
```<issue_comment>username_1: ---
**Before reading**:
- This is a solution that, as you mentioned, doesn't use a dictionary
---
**Solution**
With:
```
#Default values
FullDate = '2010-01-01_00:00:00'
Name = "John"
ElementID = "Apple"
ElementCD = "01Appxz"
Year = '2010'
Day = '01'
Month = '01'
#--
String = "{FullDate}_{Month}_{Day}_{Year}_{ElementID}_{ElementCD}"
```
You can do this without a `for` loop, just replacing as required:
```
Report_Name = '.RName_' + String
if "FullDate" in Report_Name:
    Report_Name = Report_Name.replace('{FullDate}', FullDate)
if "Name" in Report_Name:
    Report_Name = Report_Name.replace('{Name}', Name)
#...
if "ElementCD" in Report_Name:
    Report_Name = Report_Name.replace('{ElementCD}', ElementCD)

print(Report_Name)
.RName_2010-01-01_00:00:00_..._01Appxz
```
---
**[Be careful] Another solution**
Or maybe you can use `eval()` (see the [documentation](https://docs.python.org/3/library/functions.html#eval)) to evaluate a variable from its name. It requires that the parameter and variable names are the same.
Here is a way to do this:
```
import re
Parameters = [re.sub('[{-}]+', '', s) for s in String.split('_')]
Report_Name = '.RName_' + String
for p in Parameters:
    Report_Name = Report_Name.replace('{%s}' % p, eval(p))
print(Report_Name)
.RName_2010-01-01_00:00:00_01_01_2010_Apple_01Appxz
```
Be aware that you should use `eval()` **carefully** - [see Why is using eval a bad practice](https://stackoverflow.com/a/1832957/3941704)
Check alternatives to this solution - [using `globals/locals/vars`](https://stackoverflow.com/a/7969953/3941704) for instance - if:
* You want a similar behaviour
* You think it is not safe enough for your problem
Upvotes: -1 [selected_answer]<issue_comment>username_2: * Use `for name in names` to remove all your `String_split[params]` noise.
* Remove the `{}` from your variables so you can use `==` rather than `in`.
* Use `+=`.
This gets:
```
names = "FullDate Month Day Year ElementID ElementCD".split()
for name in names:
if "FullDate" == name:
Report_Name += FullDate + "_"
elif "Name" == name:
Report_Name += Name + "_"
elif "ElementID" == name:
Report_Name += ElementID + "_"
elif "ElementCD" == name:
Report_Name += ElementCD + "_"
elif "Year" == name:
Report_Name += Year + "_"
elif "Day" == name:
Report_Name += Day + "_"
elif "Month" == name:
Report_Name += Month + "_"
```
You should also learn how to use format strings and the `**` operator. If you change your `FullDate` stuff to a dictionary then you can use:
```
REPORT_FORMAT = '{FullDate}_{Month}_{Day}_{Year}_{ElementID}_{ElementCD}'

report = {
    'FullDate': now().format("yyyy-MM-dd_HH:mm:ss"),
    'Name': "John",
    'ElementID': "Apple",
    'ElementCD': "01Appxz",
    'Year': now().format("yyyy"),
    'Day': now().format("dd"),
    'Month': now().format("MM"),
}

report_name = REPORT_FORMAT.format(**report)
```
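Since the ordering comes entirely from the format string, reordering it reorders the output; a quick check using the `report` dict above:
```
# Changing only the format string changes the parameter order:
alt_format = '{Year}_{Month}_{ElementCD}_{FullDate}_{ElementID}_{Day}'
alt_name = alt_format.format(**report)
# e.g. "1999_01_01Appxz_1999-01-01_10:10:29_Apple_01"
```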
Upvotes: 0 |
2018/03/20 | 575 | 2,103 | <issue_start>username_0: I'm trying to add an IBOutlet in swift but I only have the option to add an action.
**Here is an image of what I'm talking about.**
Is there any way I can fix this?
[I can't change connection type either.](https://i.stack.imgur.com/Slmk4.png)<issue_comment>username_1: Simply change the connection type. Alternatively, open the Connections Inspector and drag the outlet into the view controller as shown in the image below.
[](https://i.stack.imgur.com/7qJGo.png)
To Change Connection Type
[](https://i.stack.imgur.com/CyLko.png)
Upvotes: 0 <issue_comment>username_2: Try Ctrl-dragging it from the text on the left ([](https://i.stack.imgur.com/kkn6z.png))
Upvotes: 0 <issue_comment>username_3: Restart Xcode and try to connect the @IBOutlet again.
Upvotes: 0 <issue_comment>username_4: This is happening because you are trying to connect an outlet from Interface Builder to a non-corresponding view controller file, which is why it only gives you the option to add an Exit action. Just make sure that you are in the same view controller both in IB and in the Assistant Editor.
P.S. I recommend renaming every view controller properly so it will be easier to avoid this (make sure to change the name not only in the class but also in IB). You can cmd + click the class name in the code and click on `Rename...` to change everything at once. If you have already changed it in your code, you then also have to do it manually in IB, selecting the corresponding class from the drop-down menu of the Identity Inspector:
[](https://i.stack.imgur.com/hopK3m.png)
Upvotes: 4 [selected_answer]<issue_comment>username_5: You should use the same class in your view controller (e.g. ViewController.swift).
Upvotes: 0 |
2018/03/20 | 1,098 | 2,657 | <issue_start>username_0: I have an hive table like below,
```
hive> describe eslg_transaction_01;
OK
a1 string
a2 date
a3 string
a4 string
a5 string
a6 bigint
a7 double
a8 double
a9 double
a10 bigint
a11 bigint
a12 bigint
a13 bigint
a14 bigint
a15 bigint
a16 bigint
a17 string
a18 string
Time taken: 0.723 seconds, Fetched: 18 row(s)
```
I am trying to load data into this table using:
```
hive> LOAD DATA INPATH '/user/hadoop/data/2502.txt' INTO TABLE eslg_transaction_01;
```
I am getting the following error:
>
> FAILED: SemanticException Line 1:17 Invalid path ''/user/hadoop/data/2502.txt'': No files matching path hdfs://sandbox-hdp.hortonworks.com:8020/user/data/2502.txt
>
>
>
My data is present in the location and I am able to see it:
```
[root@sandbox-hdp ~]# hadoop fs -cat /user/hadoop/data/2502.txt | head -5
-200879548|2018-02-18|1485|384672|1787329|1|8.69|0|50|0|0|0|1|0|0|0||NULL
-192188296|2018-02-07|508|321131|9713410|1|0.68|0|30|0|0|0|2|0|0|1|1|2018_303
-198424071|2018-02-15|93|404120|97223|1|2|0.89|0|0|0|1|0|0|0|1|1|2018_4
-185483553|2018-01-29|131|336347|1070990|1|1.3|0.88|0|0|0|0|0|1|0|1|1|2018_3
-205064252|2018-02-23|516|21118|2610945|1|0.89|0.6|0|0|0|0|0|1|0|1|1|2018_5
```
Can somebody help? I am stuck here. I am new to Hadoop/Hive.<issue_comment>username_1: Execute the steps below; I hope it will work.
(1) Put the file in HDFS
```
hadoop fs -put /home/Desktop/2502.txt /user
```
(2) Show the file in HDFS
```
hadoop fs -ls /user
```
(3) Load the data into the Hive table
```
LOAD DATA INPATH '/user/2502.txt' INTO TABLE eslg_transaction_01;
```
Upvotes: 1 <issue_comment>username_2: If you look at the error, it is using the path hdfs://sandbox-hdp.hortonworks.com:8020/user/data/2502.txt, which is not correct: the 'hadoop' folder is missing from the path. So I believe it could be some permission issue; otherwise, what you are doing looks correct. As a workaround, copy the data to the default 'warehouse' directory and load it into the Hive table from there. Note that once you load the file into the Hive table, it will no longer be available in the 'warehouse' directory, as it is moved into the Hive table's directory.
Upvotes: 1 <issue_comment>username_3: You don't really need to use `LOAD DATA` if you instead define an EXTERNAL TABLE with a LOCATION pointing at the original HDFS directory.
```
CREATE EXTERNAL TABLE IF NOT EXISTS
eslg_transaction_01
....
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
LOCATION '/user/hadoop/data/'
```
Then any file you place into that data directory will be immediately queryable by Hive
Upvotes: 1 [selected_answer] |
2018/03/20 | 601 | 2,166 | <issue_start>username_0: Trying to create folder in internal storage but code working only in oppo handset not in other brand handsets like samsung,mi etc
```
public void createPDF()
{
    TextView dttt = (TextView) findViewById(R.id.dttt);
    String da = dttt.getText().toString();
    final Cursor cursor = db.getDateWise(da);
    Document doc = new Document();
    try {
        String path = Environment.getExternalStorageDirectory().getAbsolutePath() + "/CollactionApp" + "/PDF";
        File dir = new File(path);
        if (!dir.exists())
            dir.mkdirs();
        Log.d("PDFCreator", "PDF Path: " + path);
        int i = 1;
        File file = new File(dir, "Datewise" + da + ".pdf");
        FileOutputStream fOut = new FileOutputStream(file);
        PdfWriter.getInstance(doc, fOut);
        // open the document
        doc.open();
```
} |
2018/03/20 | 1,085 | 3,614 | <issue_start>username_0: I am trying to kill a process by name which i will pass as a variable to a system command.
Below is what I have:
```
my $processName=$ARGV[0];
print "$processName\n";
system(q/kill -9 `ps -ef | grep '$processName' | grep -v grep | awk '{print $2}'`/);
```
The above script throws an error:
```
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
```
However, if I directly enter the process name inside the system command, it works.
Can someone help me on this?<issue_comment>username_1: You have the different variables `$processName` and `$jobName`, so the whole `` `` expression ends up empty. [`use strict`](http://p3rl.org/strict) would have pointed that out; it's useful even for three-line scripts. The `kill` error message is what you get when you run it without any arguments.
Pro tip: [`pkill`](https://gitlab.com/procps-ng/procps/#procps) exists, you can replace the whole problematic hack with one command.
Upvotes: 2 <issue_comment>username_2: One problem is that the command line that invokes the script has the name of the process which you then query, by design; so one thing found will be the script itself.
This can be guarded against, but why go out to the system with that big pipeline?
Query the process in Perl, for example with [Proc::ProcessTable](http://search.cpan.org/~durist/Proc-ProcessTable-0.39/ProcessTable.pm)
```
use warnings;
use strict;
use Proc::ProcessTable;
my $proc_name = shift // die "Usage: $0 process-name\n"; #/
my $pid;
my $pt = Proc::ProcessTable->new();
foreach my $proc (@{$pt->table}) {
    next if $proc->cmndline =~ /$0.*$proc_name/;    # not this script
    if ($proc->cmndline =~ /\Q$proc_name/) {
        $pid = $proc->pid;
        last;
    }
}
}
die "Not found process with '$proc_name' in its name\n" if not defined $pid;
kill 9, $pid; # must it be 9 (SIGKILL)?
my $gone_pid = waitpid $pid, 0; # then check that it's gone
```
Note that it is far nicer to "kill" a process by sending it `SIGTERM` (usually 15), not `SIGKILL`.
Looking for a process "name" can be perilous as there are many processes with long command lines running on a modern system that may contain that word. The code uses your requirement, to simply query by submitted name, but I recommend strengthening that with additional checks.
For many process details that the module can query see its [stub module](http://search.cpan.org/~durist/Proc-ProcessTable-0.39/Process/Process.pm).
Even if you'd rather parse the process table yourself I would get only `ps -ef` and then parse it in the script. That gives far more flexibility in ensuring that you really terminate what you want.
Upvotes: 3 <issue_comment>username_3: ```
my $pid = `ps -ef | grep '$processName' | grep -v grep | awk '{print \$2}'`;
print $pid;
system("kill -9 $pid");
```
This one works!!!
Upvotes: 1 <issue_comment>username_4: You can kill a process by name without invoking the shell by using [`IPC::System::Simple`](http://search.cpan.org/~pjf/IPC-System-Simple-1.25/lib/IPC/System/Simple.pm):
```
use IPC::System::Simple qw(systemx EXIT_ANY);
systemx(EXIT_ANY, 'pkill', '-9', '--', $processName);
```
Or you can collect a list of process IDs first:
```
use IPC::System::Simple qw(capturex EXIT_ANY);
kill 9, capturex(EXIT_ANY, 'pgrep', '--', $processName);
```
Then you don't have to worry about [`String::ShellQuote`](https://metacpan.org/pod/String::ShellQuote). See [this answer](https://stackoverflow.com/questions/3212128/perl-equivalent-of-phps-escapeshellarg) by username_1.
Upvotes: 0 |
2018/03/20 | 1,171 | 3,871 <issue_start>username_0: I'm trying out the STL in C++ now and ran into something I didn't understand.
I made up a problem: show each element of a vector and how many times it occurs. Since I already knew how to solve this with a vector alone, I tried to use a "set" or "map" instead.
```
vector<char> v{ 'a', 'b', 'f', 'b', 'd', 'c', 'b', 'f', 's', 'v', 'x'};
map<char, int> m; //
for (char n : v)
    m.insert(pair<char, int>(n, m[n]+1));
```
I thought this would increment the counter each time the key was found; however, the counter didn't work and every count stayed 0.
After some change:
```
vector<char> v{ 'a', 'b', 'f', 'b', 'd', 'c', 'b', 'f', 's', 'v', 'x'};
map<char, int> m; //
for (char n : v)
    m.insert(pair<char, int>(n, m[n]++));
```
Then it worked. I don't know why.<issue_comment>username_1: Have a close look at the [insert](http://en.cppreference.com/w/cpp/container/map/insert) function: it only inserts an element if the key is not yet present!
The [index operator](http://en.cppreference.com/w/cpp/container/map/operator_at), in contrast, updates an element, if found, and creates a new one with default value otherwise.
So:
```
m.insert(pair<char, int>(n, m[n]+1));
```
The first time you call this, `m[n]` already creates an element with default value 0. Then you call `insert` on the map, but as the key is already found, no pair is inserted. So you get 0 all the time...
```
m.insert(pair<char, int>(n, m[n]++));
```
Actually, from the viewpoint of `insert`, there is no difference: the entry in the map is created the first time you visit the map at position `n`, and you increment the entry directly. `insert` itself *never* actually inserts a pair...
So if you cut off the "do nothing" piece of code, all that remains is:
```
for (char n : v)
m[n]++;
```
For better understanding, a variant using `insert` equivalent to my code sample above:
```
for (char n : v)
    (m.insert(std::pair<char, int>(n, 0)).first->second)++;
```
Interesting, isn't it? Breaking it into pieces:
```
for (char n : v)
{
    std::pair<std::map<char, int>::iterator, bool> result = m.insert(std::pair<char, int>(n, 0));
    std::pair<const char, int>& entry = *result.first;
entry.second++;
}
```
The core part is this line:
```
std::pair<std::map<char, int>::iterator, bool> result = m.insert(std::pair<char, int>(n, 0));
```
The bool of the pair is set to true if the element actually was inserted, and to false otherwise. As we are not interested in this piece of information in this specific case, it is not used any further.
The iterator points to the element in the map - either the newly created one (bool part is true), or the one already found (bool part is false).
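As an aside (not part of the original answer; it assumes a C++17 compiler), structured bindings make the same pattern read more naturally:
```
auto [it, inserted] = m.insert(std::pair<char, int>(n, 0)); // 'inserted' is unused here
++it->second; // works whether the entry was just created or already existed
```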
Upvotes: 2 [selected_answer]<issue_comment>username_2: First of all, your attempts to update map entries by calling `insert` are all ignored, because `insert` doesn't do anything if an entry already exists. To quote from the [C++ reference](http://en.cppreference.com/w/cpp/container/map/insert):
>
> Inserts element(s) into the container, if the container doesn't already contain an element with an equivalent key.
>
>
>
It gets a little clearer if you rewrite your code a little:
```
for (char n : v)
{
int& count = m[n];
int new_count = count + 1;
    pair<char, int> p(n, new_count);
m.insert(p); // ignored because the key already exists
}
```
The second version, though, directly modifies the count that's inside the map, because `operator[]` returns a *reference* to the stored value and `operator++` directly manipulates that `int`. Your second version rewritten would look something like this:
```
for (char n : v)
{
int& count = m[n];
count++; // operates on the value that is stored in the map
    pair<char, int> p(n, count);
m.insert(p); // ignored because the key already exists
}
```
Even though your attempt to insert a new pair into the map is ignored again, since you directly modify the value that's stored in the map, the second version does what you wanted your code to do in the first place.
Upvotes: 2 |
2018/03/20 | 626 | 2,288 | <issue_start>username_0: ```
#include <stdio.h>
#define SIZE 5
void verify(int a[],int,int);
int main()
{
int a[SIZE],target,k=0;
printf("enter the array elements:\n");
for(int i=0;i<SIZE;i++)
```
When I try to find a number that is not in the list/array, the else statement doesn't execute; instead the program reports segmentation fault 11. Please find the mistake in my code.<issue_comment>username_1: You must have a bounds check on the array index. (Array access out of bounds is undefined behavior.) Here you have accessed an array index out of bounds.
First check whether the access would be out of range; if that's the case, return from there. Otherwise access the element and check it, and only then make the next recursive call.
```
if(k >= SIZE){
printf("target not found !!!");
}
else{
if(a[k] == target){
...
}
//next call
}
```
The `count` variable does not persist when you return from the function. It is not needed, or at least its use is not clear in this case.
The complete code can be as simple as this:
```
void verify(int a[],int target,int k)
{
if(k < SIZE)
{
if(a[k] == target)
printf("target found:%d at index= %d\n",a[k],k);
else
verify(a,target,k+1);
}
else
{
printf("target not found !!!");
}
}
```
Upvotes: 0 <issue_comment>username_2: ```
if(a[k]==target && count<SIZE)
```
is always true, because `count` is always zero in each level of the recursion; you should use `k` instead of `count` in your conditions.
Upvotes: 0 <issue_comment>username_3: You are getting the segmentation fault because `count` is a local variable of the `verify()` function; in every recursive call to `verify()`, `count` is initialized to `0`, so the condition `count<SIZE` will always be `true`.
In every recursive call to `verify()`, you are passing `k+1` and comparing the element at the `k`th location of array `a` with the `target` (`if(a[k]==target && count<SIZE)`). At some stage `k` will have a value beyond the size of the array `a`. Your program is then accessing an element beyond the array bounds, which is undefined behavior; one possible outcome is a segmentation fault.
You don't need the `count` variable at all. Just compare the value of `k` with `SIZE` to ensure it does not go beyond the array size.
Upvotes: 3 [selected_answer] |
2018/03/20 | 842 | 2,781 <issue_start>username_0: I am trying to add the `plpython3` extension to my `timescaledb`/`postgres` (based on Linux Alpine) image:
```
FROM timescale/timescaledb:0.9.0-pg10
RUN set -ex \
&& apk add --no-cache --virtual .plpython3-deps --repository http://nl.alpinelinux.org/alpine/edge/testing \
postgresql-plpython3
```
When I try to create the extension I get the following error:
```
postgres=# CREATE EXTENSION plpython3u;
ERROR: could not open extension control file "/usr/local/share/postgresql/extension/plpython3u.control": No such file or directory
```
But when I search for the files inside my container I can find them within a different directory:
```
/ # find / -name '*plpy*'
/usr/lib/postgresql/plpython3.so
/usr/share/postgresql/extension/plpython3u.control
/usr/share/postgresql/extension/plpython3u--1.0.sql
/usr/share/postgresql/extension/plpython3u--unpackaged--1.0.sql
```
How can I install `postgresql-plpython3` to a different directory or configure `postgres` to recognize my added extension?
**Update**
When I just `mv` the files to `/usr/local/share/postgresql/extension` I get the error:
```
postgres=# CREATE EXTENSION plpython3u;
ERROR: could not access file "$libdir/plpython3": No such file or directory
```
**Update 2**
So the issue with `$libdir` was that `pg_config --pkglibdir` points to `/usr/local/lib/postgresql` but `plpython3.so` is inside `/usr/lib/postgresql`. When I move everything to the according `/usr/local` directories I can successfully create the extension.
This leads to the question where I hope to find an answer. How can I install `postgresql-plpython3` to `/usr/local/...` instead of `/usr/...`?<issue_comment>username_1: I am fairly certain that if you use prebuilt packages you are stuck with hardcoded installation paths.
The easiest way around your problem is to create symlinks after installation:
```
FROM timescale/timescaledb:0.9.0-pg10
RUN set -ex \
&& apk add --no-cache --virtual .plpython3-deps --repository http://nl.alpinelinux.org/alpine/edge/testing \
postgresql-plpython3 \
&& ln -s /usr/lib/postgresql/plpython3.so /usr/local/lib/postgresql/plpython3.so \
&& ln -s /usr/share/postgresql/extension/plpython3u.control /usr/local/share/postgresql/extension/plpython3u.control \
&& ln -s /usr/share/postgresql/extension/plpython3u--1.0.sql /usr/local/share/postgresql/extension/plpython3u--1.0.sql \
&& ln -s /usr/share/postgresql/extension/plpython3u--unpackaged--1.0.sql /usr/local/share/postgresql/extension/plpython3u--unpackaged--1.0.sql
```
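A quick smoke test after rebuilding might look like this (a sketch; the image and container names are placeholders, and the exact `psql` invocation depends on how your container is configured):
```
docker build -t my-timescaledb .
docker run -d --name tsdb my-timescaledb
docker exec -it tsdb psql -U postgres -c 'CREATE EXTENSION plpython3u;'
```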
Upvotes: 4 [selected_answer]<issue_comment>username_2: Just configure your `postgresql.conf` to load the extension shared libraries from the expected path.
```
plpython3u.library_path = '/usr/lib'
```
Upvotes: 1 |
2018/03/20 | 933 | 3,052 <issue_start>username_0: I am using REST Assured for automating my APIs; here is my JSON response:
```
{
"al": [{
"aid": 1464,
"_r": "Bus Stand,",
"_l": "spaze it park2,",
"_c": ",",
"_s": "Haryana,",
"co": "India,",
"pc": "122001",
"fa": "Sona Road, spaze it park, Gurgaon, Haryana,",
"fn": "225,",
"lm": "omax mall",
"pa": null,
"at": 1
},
{
"aid": 1462,
"_r": "Bus Stand,",
"_l": "spaze it park2,",
"_c": "Gurgaon,",
"_s": "Haryana,",
"co": "India,",
"pc": "122001",
"fa": "Sona Road, spaze it park, Gurgaon, Haryana,",
"fn": "225,",
"lm": "omax mall",
"pa": null,
"at": 1
},
{
"aid": 1461,
"_r": null,
"_l": null,
"_c": "Gurgaon1",
"_s": "",
"co": null,
"pc": "122003",
"fa": "Gurgaon, HRyana, 122003",
"fn": "",
"lm": "",
"pa": null,
"at": -1
},
{
"aid": 1460,
"_r": "Bus Stand,",
"_l": "spaze it park2,",
"_c": "Gurgaon,",
"_s": "Haryana,",
"co": "India,",
"pc": "122001",
"fa": "Sona Road, spaze it park, Gurgaon, Haryana,",
"fn": "225,",
"lm": "omax mall",
"pa": null,
"at": 2
}
]
}
```
Now I want to put assertions on the JSON array element having "aid": 1460, e.g. the values of the lm, at, etc. parameters. How can we do that in REST Assured? I also want to know the index position of the array element having "aid": 1460.
Please help me!!<issue_comment>username_1: It's fairly easy to test a JSON response with REST Assured's built-in assertions, but in your case you also require the index number within the array, which I'm not sure it can find out directly. I think something along these lines will help:
```
Response res = given().
get(url).
then().
statusCode(200).
extract().
response();
JsonArray arr=new JsonParser().parse(res.jsonPath().get("al").toString()).getAsJsonArray();
```
You can then use JSONArray to find out the index and also the corresponding elements.
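For example, to locate the index of the entry with `aid` 1460 (a sketch using the Gson types from the snippet above):
```
int index = -1;
for (int i = 0; i < arr.size(); i++) {
    if (arr.get(i).getAsJsonObject().get("aid").getAsInt() == 1460) {
        index = i; // position of the element with aid == 1460
        break;
    }
}
```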
Upvotes: 1 <issue_comment>username_2: Regarding the assertions you can use the RestAssured built-in [GPath](http://james-willett.com/2017/05/rest-assured-gpath-json/) query syntax:
```
RestAssured.given()
.baseUri("http://your.server.com")
.accept(ContentType.JSON)
.get("/jsonFile")
.then()
.statusCode(200)
.body("al.findIndexOf { it.aid == 1461 }", is(2)) // findIndexOf returns zero-based list index
.body("al.find { it.aid == 1461 }._c", is("Gurgaon1"))
.body("al.find { it.aid == 1461 }.pc", is("122003"));
```
Upvotes: 0 |
2018/03/20 | 592 | 2,806 | <issue_start>username_0: I'm looking for best practices for the following design: I have an abstract class and every concrete class extending that abstract class should be a singleton.
Background: The concrete classes are collectors that compile and log statistics about the operation of a complex legacy system. Collectors are accessible via a static registry, so there's no need to pass dependencies. The abstract class provides the interface to the registry.
I'm aware that there's no perfect solution that gives guarantees, properties have to be maintained by conventions. Nevertheless, there might be best practices for this case.<issue_comment>username_1: Technically you cannot prevent the concrete class to allow the creation of more than one instance of it.
But you have ways to try to conceptually enforce it :
* document clearly the interface
* set a constraint that require that these subclasses be beans managed by a dependency injection container
Upvotes: 1 <issue_comment>username_2: There is a solution to implement this, it is based on the following:
1. The constructors of the abstract class must have private visibility, so that it could be instantiated from inside only and to prevent it from being extended elsewhere.
2. The implementing singleton classes must be nested within the abstract class and must not be extendable.
3. The instances of the implementing nested classes are static members of the abstract class.
The drawback of this approach is that it can quickly become messy, due to nesting, especially if there are many implementations for singletons.
This might look as follows:
```
public abstract class AbstractEntity {
public static final AbstractEntity SINGLETON1 = new EntityImpl1(); // might be also instantiated lazily
/**
* Private accessor will not allow this class to be extended from outside.
*/
private AbstractEntity() {
}
static final class EntityImpl1 extends AbstractEntity {
private EntityImpl1() {
}
}
// other implementations...
}
```
Upvotes: 0 <issue_comment>username_3: Here is my current solution
* singleton property is managed by the registry. JavaDoc clearly states the correct and intended use
* a concrete instance is registered by a static `register(Instance.class, Instance::new)` method that has the class object and a supplier of instances of that class as arguments
* the registry maintains a map of String to instance objects, with the `toString()` result of the class object as key (see the sketch below)
Advantages
* instances can be private and can be placed close to the classes using them (i.e., in the same package)
* instances are created once, at first registration
Disadvantages
* instances are created by a constructor with empty parameter list
* the registry method needs two arguments
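A minimal sketch of such a registry (my reading of the bullets above; the names and generics are illustrative, not the author's actual code):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

final class Registry {
    // key: the class object's toString() result; value: the singleton instance
    private static final Map<String, Object> INSTANCES = new ConcurrentHashMap<>();

    static <T> void register(Class<T> type, Supplier<? extends T> factory) {
        // the instance is created once, at first registration
        INSTANCES.computeIfAbsent(type.toString(), key -> factory.get());
    }

    static <T> T lookup(Class<T> type) {
        return type.cast(INSTANCES.get(type.toString()));
    }
}
```
Usage would then be `Registry.register(Collector.class, Collector::new)` followed by `Registry.lookup(Collector.class)`, with `Collector` standing in for one of the concrete statistics classes.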
Upvotes: 0 |
2018/03/20 | 561 | 1,961 <issue_start>username_0: I am trying to implement a timer that calls a function to print the queue after a specific time. I can also cancel the timer and print the queue if the queue gets filled before that specified time. But after that, my timer object behaves erratically and the timers overlap: for example, if the queue gets filled in 2 seconds, it prints the queue at 2,8,2,8... intervals instead of 2,10.
```
import json
import queue
import threading
import time

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1'))
channel = connection.channel()
channel.queue_declare(queue='final', durable=True)
global msg_queue
global t
msg_queue=queue.Queue(maxsize=6)
def empty_queue():
print(time.time())
l=[]
i=int(msg_queue.qsize())
while i!=0:
l.append(msg_queue.get())
i-=1
t=threading.Timer(10,empty_queue)
print(l)
t.start()
t=threading.Timer(10,empty_queue)
print(time.time())
t.start()
while True:
if int(msg_queue.qsize())<6:
consume_generator = channel.consume(queue='final', no_ack=True)
result=next(consume_generator)
msg_queue.put(json.loads(result[2].decode("utf-8")))
else:
print("more",time.time())
t.cancel()
empty_queue()
```<issue_comment>username_1: You could nest an if statement inside the timer that says: if the queue is full, disable this if statement; then let the timer continue until it's done without affecting the queue. I don't think it will conflict with your program, because the timer is probably a closure.
Upvotes: -1 <issue_comment>username_2: I have solved the issue by cancelling the timer, to prevent it from duplicating itself:
```
def empty_queue():
global t
print(time.time())
l=[]
i=int(msg_queue.qsize())
while i!=0:
l.append(msg_queue.get())
i-=1
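    # cancel any still-pending timer before re-arming, so two timers never overlap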
if t.isAlive():
t.cancel()
t=threading.Timer(10,empty_queue)
print(l)
t.start()
```
Upvotes: 1 [selected_answer] |
2018/03/20 | 1,305 | 4,343 | <issue_start>username_0: I am learning Go and I am on channels now. I have written a simple program using channels. I have created two channels and the channels are passed to a function which is called concurrently.
My expectation is to print output from both the channels, but in reality only one channel's output is getting printed:
```
package main
import "fmt"
func square(dat int, ch chan<- int) {
ch <- dat * dat
}
func main() {
resp1 := make(chan int)
resp2 := make(chan int)
go square(20, resp1)
go square(10, resp2)
select {
case msg1 := <-resp1:
fmt.Println(msg1)
case msg2 := <-resp2:
fmt.Println(msg2)
}
}
```
Either the message from `resp1` is printed or that from `resp2` during each execution. Channels should block until something is pushed into it, right?<issue_comment>username_1: >
> [The Go Programming Language Specification](https://golang.org/ref/spec)
>
>
> [Select statements](https://golang.org/ref/spec#Select_statements)
>
>
> A "select" statement chooses which of a set of possible send or
> receive operations will proceed.
>
>
>
Select chooses one of a set. For example,
```
package main
import "fmt"
func square(dat int, ch chan<- int) {
ch <- dat * dat
}
func main() {
resp1 := make(chan int)
resp2 := make(chan int)
go square(20, resp1)
go square(10, resp2)
// Choose one
select {
case msg1 := <-resp1:
fmt.Println(msg1)
case msg2 := <-resp2:
fmt.Println(msg2)
}
// Choose the other
select {
case msg1 := <-resp1:
fmt.Println(msg1)
case msg2 := <-resp2:
fmt.Println(msg2)
}
}
```
Playground: <https://play.golang.org/p/TiThqcXDa6o>
Output:
```
100
400
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You want `select` to select both? That's kind of the opposite of what it's for. From the [Go Spec](https://golang.org/ref/spec#Select_statements):
>
> A "select" statement chooses which of a set of possible send or receive operations will proceed.
>
>
> If one or more of the communications can proceed, a single one that can proceed is chosen via a uniform pseudo-random selection.
>
>
>
If you want to read from both channels, rather than having `select` figure out which one is ready to be read from (or select one pseudo-randomly if both can be read from), don't use `select`. Just read from both:
```
msg1 := <-resp1
fmt.Println(msg1)
msg2 := <-resp2
fmt.Println(msg2)
```
**Update**
If, as @username_1 suggests, your goal is to read from both channels, starting with whichever is ready first, I'd think something like this would be a reasonable approach:
```
func main() {
resp1 := make(chan int)
resp2 := make(chan int)
readsWanted := 0
readsWanted += 1
go square(20, resp1)
readsWanted += 1
go square(10, resp2)
for i := 0; i < readsWanted; i++ {
select {
case msg1 := <-resp1:
fmt.Println(msg1)
case msg2 := <-resp2:
fmt.Println(msg2)
}
}
}
```
You could of course hard-code the loop to only run twice, but I have an aversion to such things, although in this simple example it doesn't much matter.
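As an aside (my sketch, not from the answer): if the two results don't need separate channels, a single shared channel avoids `select` entirely:
```
resp := make(chan int)
go square(20, resp)
go square(10, resp)
for i := 0; i < 2; i++ {
    fmt.Println(<-resp) // receives both results, in whichever order they arrive
}
```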
Upvotes: 2 <issue_comment>username_3: As per your code, whichever `select` case matches first will execute, and then the main function terminates. That is why only a single channel value is printed.
You can fix this by doing something like this :
```
package main
import (
"fmt"
"os"
"time"
)
func square(dat int, ch chan<- int) {
ch <- dat * dat
}
func main() {
resp1 := make(chan int)
resp2 := make(chan int)
go square(20, resp1)
go square(10, resp2)
time.Sleep(1 * time.Second)
for {
select {
case msg1 := <-resp1:
fmt.Println(msg1)
case msg2 := <-resp2:
fmt.Println(msg2)
default:
close(resp1)
close(resp2)
fmt.Println("no value recieved")
os.Exit(0)
}
}
}
```
Output
```
100
400
no value recieved
```
See in playground: <https://play.golang.org/p/T9mkfrO4wNF>
Upvotes: 0 <issue_comment>username_4: `select` works more like a switch in other languages (e.g. Java); you need to execute that part as many times as you need.
Upvotes: 0 |
2018/03/20 | 923 | 3,501 <issue_start>username_0: In my project I have a Polymorphic relation. This relation is as follows:
```
Plugins
id - integer
composer_package - string
Themes
id - integer
composer_package - string
Products
id - integer
name - text
...
commentable_id - integer
commentable_type - string
```
When I need to display all the products that are a Theme I do the following:
`$themes = DB::table('products')->orderBy('id', 'asc')->where('productable_type', Theme::class)->get();`
The code above provides me with a list of all the products where the `productable_type` field is filled with `App/Theme`, I display them with a foreach loop like this `@foreach($themes as $theme)`. Now my problem is.
I need to get the `composer_package` from the themes table that belongs to that product. So say for instance. I get a product from the products table where the `productable_id` is 3. I want the value of the `composer_package` field in the themes table where the ID is 3. When I do `{{ $theme->products->composer_package }}` or `{{ $theme->composer_package }}` It gives me the error `Undefined property` What is causing this?
This is my product model
```
public function product()
{
return $this->morphTo();
}
public function order_items()
{
return $this->hasMany(Orderitems::class);
}
```
And this is my theme model
```
public function webshops()
{
return $this->hasMany(Webshop::class);
}
public function products()
{
return $this->morphMany(Product::class, 'productable');
}
```
Thanks in advance!<issue_comment>username_1: Your table has "commentable", but your model is looking for "productable". You're also missing a method.
in your **product** model add:
```
public function commentable(){
return $this->morphTo();
}
```
in your **theme** model modify the products() method to:
```
public function products(){
return $this->morphMany(Product::class, 'commentable');
}
```
You could also update your `commentable_type` and `commentable_id` columns in your table to `productable_type` and `productable_id`, in which case you need to rename `commentable()` in the code for the product model above to `productable()`, and `'commentable'` to `'productable'` in the code for the theme model above.
(Source: <https://laravel.com/docs/5.6/eloquent-relationships#polymorphic-relations>)
Upvotes: 0 <issue_comment>username_2: Is there any reason you are using the `DB` facade to pull data instead of the actual models?
One issue you are having is that your polymorphic relationship is stored in the commentable_id / commentable_type fields, while your function name is product. Laravel does some of its magic just by naming alone, so let's make sure the column names and polymorphic function names match up in the product model:
```
public function commentable()
{
return $this->morphTo();
}
```
After that, try something like this in the controller:
```
$themeProducts = Product::where('commentable_type', Theme::class)->get();
```
And this in the view
```
@foreach ($themeProducts as $themeProduct)
{{ $themeProduct->commentable->composer_package }}
@endforeach
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Create the relations on the models as mentioned in the answers above; then you can eager-load them using the `with` helper.
E.g.: `Product::where('commentable_type', Theme::class)->with('commentable')->get();`
You can then use the relation directly, as mentioned above.
E.g.: `$themeProduct->commentable`
Upvotes: 0 |
2018/03/20 | 2,761 | 8,727 <issue_start>username_0: I am trying to create a neural network model to predict whether a signature is authentic or fake. I created a data set of 1044 signatures, both authentic and fake. This is the code for preprocessing the images:
```
DATA = '../DATASET/DATA/'
IMG_BREDTH = 150
IMG_HEIGHT = 70
# helper functions
def label_img(img):
word_label = img.split('.')[-2]
if (word_label == '1') or (word_label == '2'): return [1,0]
elif word_label == 'F': return [0,1]
def create_data_set():
data = []
for img in tqdm(os.listdir(DATA)):
if img == '.DS_Store': continue
label = label_img(img)
path = os.path.join(DATA, img)
img = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), (IMG_HEIGHT, IMG_BREDTH))
data.append([np.array(img), label])
shuffle(data)
np.save('data.npy', data)
return np.array(data)
```
I then split the data into training and test set using this code
```
data = create_data_set()
train_x = data[:835, 0]
train_y = data[:835, 1]
test_x = data[835:, 0]
test_y = data[835:, 1]
```
train_x now contains 835 images and train_y contains the respective labels ([1,0] for authentic and [0,1] for fake). The shape of each image inside train_x is (150, 70). The shape of train_y is (835,).
I then created the neural network with this code
```
n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500
n_classes = 2
batch_size = 100
x = tf.placeholder(tf.float32, [None, len(train_x[0])])
y = tf.placeholder(tf.float32)
# neural network model
def neural_network_model(data):
hidden_layer_1 = {'weights': tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])),
'biases': tf.Variable(tf.random_normal([n_nodes_hl1]))}
hidden_layer_2 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
'biases': tf.Variable(tf.random_normal([n_nodes_hl2]))}
hidden_layer_3 = {'weights': tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
'biases': tf.Variable(tf.random_normal([n_nodes_hl3]))}
output_layer = {'weights': tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
'biases': tf.Variable(tf.random_normal([n_classes]))}
l1 = tf.add(tf.matmul(data, hidden_layer_1['weights']), hidden_layer_1['biases'])
l1 = tf.nn.relu(l1)
l2 = tf.add(tf.matmul(l1, hidden_layer_2['weights']), hidden_layer_2['biases'])
l2 = tf.nn.relu(l2)
l3 = tf.add(tf.matmul(l2, hidden_layer_3['weights']), hidden_layer_3['biases'])
l3 = tf.nn.relu(l3)
output = tf.matmul(l3, output_layer['weights']) + output_layer['biases']
return output
def train_neural_network(x):
prediction = neural_network_model(x)
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y) )
optimizer = tf.train.AdamOptimizer().minimize(cost)
hm_epochs = 10
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch in range(hm_epochs):
epoch_loss = 0
i = 0
while i < len(train_x):
start = i
end = i + batch_size
batch_x = np.array(train_x[start:end], object)
batch_y = np.array(train_y[start:end], object)
assert batch_x.shape == (100, )
_, c = sess.run(fetches=[optimizer, cost], feed_dict={x: batch_x, y: batch_y})
epoch_loss += c
i += batch_size
print('Epoch', epoch, 'completed out of', hm_epochs, 'loss', epoch_loss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Accuracy: ', accuracy.eval({x: test_x, y: test_y}))
```
The shape of batch_x is (100,) and the shape of batch_y is (100,). When I run the program I get the following error:
```
train_neural_network(x)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
 in ()
----> 1 train_neural_network(x)

 in train_neural_network(x)
     20             print(batch_y.shape)
     21             assert batch_x.shape == (100, )
---> 22             _, c = sess.run(fetches=[optimizer, cost], feed_dict={x: batch_x, y: batch_y})
     23             epoch_loss += c
     24

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    776     try:
    777       result = self._run(None, fetches, feed_dict, options_ptr,
--> 778                          run_metadata_ptr)
    779       if run_metadata:
    780         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    952           np_val = subfeed_val.to_numpy_array()
    953         else:
--> 954           np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
    955
    956       if (not is_tensor_handle_feed and

~/anaconda3/lib/python3.6/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
    529
    530     """
--> 531     return array(a, dtype, copy=False, order=order)
    532
    533

ValueError: setting an array element with a sequence.
```
What am I doing wrong? Note that I am a novice developer and have just started learning about neural networks. I looked online for this particular error and found the following links:
["ValueError: setting an array element with a sequence." TensorFlow](https://stackoverflow.com/questions/44992860/valueerror-setting-an-array-element-with-a-sequence-tensorflow)
[Value Error while feeding in Neural Network](https://stackoverflow.com/questions/47062932/value-error-while-feeding-in-neural-network)
[ValueError: setting an array element with a sequence](https://stackoverflow.com/questions/4674473/valueerror-setting-an-array-element-with-a-sequence)
I tried doing what they specified in the answers, but it did not work for me.
Can someone please help me out?
Thank you in advance
**EDIT 1:
Just after posting this I looked at another link with a similar problem:
[Tensorflow "ValueError: setting an array element with a sequence." in sess.run()](https://stackoverflow.com/questions/43989934/tensorflow-valueerror-setting-an-array-element-with-a-sequence-in-sess-run)
I tried making the changes from that answer, but now I am getting a different error.**
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
----> 1 train\_neural\_network(x)
in train\_neural\_network(x)
20 print(batch\_y.shape)
21 assert batch\_x.shape == (100, )
---> 22 \_, c = sess.run(fetches=[optimizer, cost], feed\_dict={x: list(batch\_x), y: list(batch\_y)})
23 epoch\_loss += c
24
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed\_dict, options, run\_metadata)
776 try:
777 result = self.\_run(None, fetches, feed\_dict, options\_ptr,
--> 778 run\_metadata\_ptr)
779 if run\_metadata:
780 proto\_data = tf\_session.TF\_GetBuffer(run\_metadata\_ptr)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in \_run(self, handle, fetches, feed\_dict, options, run\_metadata)
959 'Cannot feed value of shape %r for Tensor %r, '
960 'which has shape %r'
--> 961 % (np\_val.shape, subfeed\_t.name, str(subfeed\_t.get\_shape())))
962 if not self.graph.is\_feedable(subfeed\_t):
963 raise ValueError('Tensor %s may not be fed.' % subfeed\_t)
ValueError: Cannot feed value of shape (100, 150, 70) for Tensor 'Placeholder\_2:0', which has shape '(?, 150)'
```
What am I doing wrong?<issue_comment>username_1: The error message simply states that you are feeding the wrong dimensions to the placeholder y while running the optimization algorithm and cost function (through feed_dict). Check whether the dimensions are correct.
Upvotes: 2 <issue_comment>username_2: Without the data to run the code myself I have to guess. But the `ValueError` indicates that the dimension of your input from `x_batch` (100, 150, 70) does not match the shape of the placeholder (None, 150).
If I understand your code correctly, each image you are trying to classify has 150x70 pixels. If that's true then I would flatten each of those into a vector and use that vector's length as the placeholder `x`'s dimension (None, 150x70).
Also, it seems, `y` doesn't have a specified shape. In this case it should be (None, 2). If there is no particular reason why you encode your two labels "fake" and "authentic" as two-dimensional vectors, you could also use just a one-dimensional vector.
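A minimal sketch of both changes (my illustration, reusing the constants from the question, not the answerer's code):
```
IMG_BREDTH, IMG_HEIGHT = 150, 70
x = tf.placeholder(tf.float32, [None, IMG_BREDTH * IMG_HEIGHT])
y = tf.placeholder(tf.float32, [None, 2])

# inside the training loop: turn the object arrays into real float matrices
batch_x = np.stack(train_x[start:end]).reshape(-1, IMG_BREDTH * IMG_HEIGHT).astype(np.float32)
batch_y = np.stack(train_y[start:end]).astype(np.float32)
```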
Upvotes: 2 [selected_answer] |
2018/03/20 | 272 | 951 | <issue_start>username_0: [](https://i.stack.imgur.com/w0rGt.png)
I'm trying to filter out rows on a particular column being blank. It doesn't seem to be delegable. I'm using sharepoint list as a datasource.<issue_comment>username_1: According to the docs, sharepoint list does not support IsBlank() as a delegable function. Bummer, might need to switch to SQL server as a data source.
[](https://i.stack.imgur.com/CyWWq.png)
[Delegable functions and data sources breakdown](https://learn.microsoft.com/en-us/powerapps/delegation-list)
Upvotes: 0 <issue_comment>username_2: It may be worth trying this:
`Time_Out=Blank()`
I have found in some cases that this may be delegated whereas IsBlank() is not.
This is not documented anywhere, PowerApps documentation has a long way to go.
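For example, with a hypothetical data source named `MyList`: `Filter(MyList, Time_Out = Blank())`.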
Upvotes: 2 [selected_answer] |
2018/03/20 | 402 | 1,454 | <issue_start>username_0: I'm trying to 'translate' a SQL query to Mongoose, but it fails to return any results.
Here's the SQL one:
```
WHERE (( a = 1 AND b = 1) OR ( c = 1 AND d = 1)) AND (date = '2018-03-20')
```
**Here is the mongo query version:**
```
$and: [
{ $or: [ {$and: [{searchCode: 123},
{searchNumber: 987}
]
},
{$and: [{Code: 123},
{Number: 987}
]
}
]
},
{date: '2018-03-20'}
]
```
There are values in the database which should be returned, but the query fails to return results.
Please advise.
2018/03/20 | 905 | 3,406 <issue_start>username_0: I am a bit confused about the new copy elision rules, and actually I am not even sure whether they apply in this case. I have this:
```
template<typename T> struct foo {
    T t;
    foo(const T& t) : t(t) {}
    ~foo() { std::cout << "destructor \n"; }
};
template<typename T> foo<T> make_foo(const T& t) { return {t}; }
```
where `make_foo` is only to allow deducing the parameter (in the real code `t` is a lambda, but I left it out here for the sake of simplicity, or rather for the sake of confusion, sorry for that) as in
```
auto x = make_foo(123);
```
Now I need to be absolutely sure that `foo`'s destructor is called exactly once: when `x` goes out of scope. I am afraid this is an unclear-what-you-are-asking question, but if it is that obvious that there won't be any temporary `foo`, that would be answer enough ;).
**In C++11, can I be certain that there won't be a temporary `foo` in `make_foo` that will be destroyed? The destructor should be called only when `x` goes out of scope.**
As correctly pointed out in a comment, this question is the Y part of an XY question, and the X part is that I want to implement some end-of-scope functionality. The destructor of `foo` has some side effects (in the example, the `cout`) that should run at the end of the scope of `x`, but not in `make_foo` in case there were some temporary `foo`.<issue_comment>username_1: Since C++17 there is guaranteed to be no temporary.
In C++14 and earlier, there must be an accessible copy/move constructor, and it is optional to the compiler whether or not there is actually a temporary.
As far as I'm aware, the only compiler that would actually manifest a temporary is older versions of MSVC in Debug mode.
Upvotes: 1 <issue_comment>username_2: In C++11, copy elision, even for a nameless temporary, is only allowed, not mandated. It's described here: [copy elision](http://en.cppreference.com/w/cpp/language/copy_elision). It is mandated since C++17.
Also, in C++17 you will have automatic deduction guides, so you will not need such a construction.
And you can test your compiler, because most modern compilers will elide the copy here.
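For instance (illustrative, requires C++17):
```
foo x{123}; // class template argument deduction gives foo<int>; no make_foo needed,
            // and no temporary is involved at all
```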
Upvotes: 3 [selected_answer]<issue_comment>username_3: In your case, to be sure that the destructor will not be called for an unnamed object, you can bind the return value to a **const reference**.
To clarify what happens if we do not rely on copy elision:
```
template<typename T> foo<T> make_foo(const T& t) { return {t}; }
```
In this function, the return object will not be constructed in the scope of that function. A **temporary unnamed object** is created outside that scope.
If you bind the return value to a **new named object**, the move constructor will be called (or the copy constructor, if no move is defined) to create your **new object** from the **returned temporary**. However, if you bind the returned temporary to a **const reference**, it will be bound strictly to that reference: no new objects will be constructed, and the temporary will not be destructed till that reference goes out of scope.
EDIT:
To not mislead you: the constructor for the temporary is called in the function's scope, but the lifetime of that temporary will indeed be prolonged to the lifetime of the const reference.
If you need more info you can check this answer. It refers to the C++ standard.
[Does a const reference prolong the life of a temporary?](https://stackoverflow.com/questions/2784262/does-a-const-reference-prolong-the-life-of-a-temporary/2784304#2784304)
Upvotes: 1 |
2018/03/20 | 251 | 1,058 | <issue_start>username_0: I want to save different revisions of a file from MKS Integrity using the Command Line Interface. The aim is to save these two files locally and use a script I wrote to compare them.
When I check the member history I can double click on the Revision Number and the file opens up in the chosen Editor. This file is saved in the TEMP Folder locally.
How can I do it using command line arguments? So far I have found the ***si edit*** command. This opens the file locally and creates a copy in the TEMP folder, but with a weird name.
If someone could point me to a better command that I have overlooked in the documentation, or tell me how the name is created, that would help.<issue_comment>username_1: Try using the `projectco` command.
```
si projectco --nolock --project=<project> --targetFile=<output file> <project member>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: To clarify that answer, you should add quotes for project, outputfile and projectmember.
si projectco --nolock --project="" --targetFile="" ""
Upvotes: -1 |
2018/03/20 | 177 | 619 | <issue_start>username_0: I'm trying to get the total price after the value of the quantity is changed, but it reads NaN at the console
JS
```
getTotal(){
this.totalPrice= (this.price) * (this.quantity) ;
console.log(this.totalPrice)
}
```
HTML
```
Price
Quantity
Total
```
2018/03/20 | 967 | 3,361 | <issue_start>username_0: I have an array of people with certain information about them.
I want to filter this array according to religion, gender, and record number at the same time.
Is there a way to do this?
Here is **PEOPLE** Array:
```
{
"Id": 1270,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "religion",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null,
},
{
"Id": 1383,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "religion",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null
},
{
"Id": 1394,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "religion",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null,
}
```
Thanks for any help.<issue_comment>username_1: Use `filter`:
```
const people: Person[] = [...];
people.filter(p => {
return p.Religion === (...) // condition for religion
&& p.Gender === (...) // condition for gender
&& p.RecordNumber === (...); // condition for record number
});
```
Of course the conditions needn't be equality comparisons, and neither you've got to use AND operators. The idea is that the `filter` method will return a new array containing only those items for which the function passed as an argument returns `true`. Inside you can make any checks you need.
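For example, plugging in concrete values from the question's data (my illustration):
```
const filtered = people.filter(p =>
    p.Religion === 'religion' && p.Gender === 'male' && p.RecordNumber === 1);
```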
Upvotes: 3 [selected_answer]<issue_comment>username_2: Why can't you just do `filter`?
```
var filteredResult = x.filter((x)=> (x.Id == 1394) && (x.Religion =="religion") && (x.Gender=="male") )
```
Of course, you can use any operator which can give you a boolean true for the item you are iterating to get that item in the result array
Upvotes: 1 <issue_comment>username_3: ```js
let people = [{
"Id": 1270,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "religion",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null,
},
{
"Id": 1383,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "religion",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null
},
{
"Id": 1394,
"FirstName": "name",
"LastName": "<NAME>",
"Religion": "christian",
"RecordNumber": 1,
"Gender": "male",
"Contacted": false,
"NeedsTransportation": false,
"WhenContacted": null,
"ContactedByWho": null,
}]
const FILTER = (RELIGION, GENDER, RECORD_NUMBER) => {
return people.filter(({Religion, Gender, RecordNumber}) => Religion === RELIGION && Gender === GENDER && +RecordNumber === +RECORD_NUMBER)
}
let filteredPeople = FILTER('christian', 'male', 1)
console.log(filteredPeople)
```
I prepend the `+` operator to the `RecordNumber` and `RECORD_NUMBER` values in case either is passed as a string, e.g. `+'1' === 1`.
Upvotes: 1 |
2018/03/20 | 6,529 | 17,245 | <issue_start>username_0: I have a problem with integrating Google Play Services into my Unity game. I am sure the setup is correctly done on the google play developer console/google play API, and even Unity.
Using ADB Logcat, here is my error log text: <https://hastebin.com/varilupaxi.scala> (in short, the errors are "Could not register one or more required Java classes." and "InvalidOperationException: There was an error creating a GameServices object.").
Any idea how to fix this? I also have the latest version of the Google Play Services plugin (0.9.50).
Here is my Unity C# code: <https://hastebin.com/iqajeguvop.cs>
Thanks a lot !
Here are the errors if link doesn't work :
`03-20 10:23:26.173: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/ConnectionResult: an exception occurred.
03-20 10:23:26.174: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/GooglePlayServicesUtil: an exception occurred.
03-20 10:23:26.174: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/Api: an exception occurred.
03-20 10:23:26.174: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/Api$ApiOptions: an exception occurred.
03-20 10:23:26.175: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/Api$ApiOptions$HasOptions: an exception occurred.
03-20 10:23:26.175: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/GoogleApiClient: an exception occurred.
03-20 10:23:26.175: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/GoogleApiClient$Builder: an exception occurred.
03-20 10:23:26.175: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/PendingResult: an exception occurred.
03-20 10:23:26.175: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/Result: an exception occurred.
03-20 10:23:26.176: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/api/ResultCallback: an exception occurred.
03-20 10:23:26.176: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/common/data/DataBufferUtils: an exception occurred.
03-20 10:23:26.176: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Games: an exception occurred.
03-20 10:23:26.177: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Games$GamesOptions: an exception occurred.
03-20 10:23:26.177: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Games$GamesOptions$Builder: an exception occurred.
03-20 10:23:26.177: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Player: an exception occurred.
03-20 10:23:26.177: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/PlayerBuffer: an exception occurred.
03-20 10:23:26.177: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/PlayerLevel: an exception occurred.
03-20 10:23:26.178: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/PlayerLevelInfo: an exception occurred.
03-20 10:23:26.178: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Players: an exception occurred.
03-20 10:23:26.178: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/Players$LoadPlayersResult: an exception occurred.
03-20 10:23:26.178: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/achievement/Achievement: an exception occurred.
03-20 10:23:26.178: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/achievement/Achievements$LoadAchievementsResult: an exception occurred.
03-20 10:23:26.179: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/event/Event: an exception occurred.
03-20 10:23:26.179: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/event/EventBuffer: an exception occurred.
03-20 10:23:26.179: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/event/Events: an exception occurred.
03-20 10:23:26.179: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/event/Events$LoadEventsResult: an exception occurred.
03-20 10:23:26.179: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/achievement/AchievementBuffer: an exception occurred.
03-20 10:23:26.180: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/achievement/Achievements: an exception occurred.
03-20 10:23:26.180: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/Leaderboard: an exception occurred.
03-20 10:23:26.180: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/LeaderboardBuffer: an exception occurred.
03-20 10:23:26.180: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/Leaderboards: an exception occurred.
03-20 10:23:26.180: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/LeaderboardScore: an exception occurred.
03-20 10:23:26.181: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/LeaderboardScoreBuffer: an exception occurred.
03-20 10:23:26.181: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/LeaderboardVariant: an exception occurred.
03-20 10:23:26.181: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/Leaderboards$LeaderboardMetadataResult: an exception occurred.
03-20 10:23:26.181: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/leaderboard/Leaderboards$LoadScoresResult: an exception occurred.
03-20 10:23:26.182: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/Invitation: an exception occurred.
03-20 10:23:26.182: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/InvitationBuffer: an exception occurred.
03-20 10:23:26.182: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/Invitations: an exception occurred.
03-20 10:23:26.182: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/Invitations$LoadInvitationsResult: an exception occurred.
03-20 10:23:26.182: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/Multiplayer: an exception occurred.
03-20 10:23:26.183: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/Participant: an exception occurred.
03-20 10:23:26.183: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/ParticipantResult: an exception occurred.
03-20 10:23:26.183: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/LoadMatchesResponse: an exception occurred.
03-20 10:23:26.183: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMatch: an exception occurred.
03-20 10:23:26.183: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMatchBuffer: an exception occurred.
03-20 10:23:26.184: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMatchConfig: an exception occurred.
03-20 10:23:26.184: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMatchConfig$Builder: an exception occurred.
03-20 10:23:26.184: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer: an exception occurred.
03-20 10:23:26.184: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$CancelMatchResult: an exception occurred.
03-20 10:23:26.184: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$InitiateMatchResult: an exception occurred.
03-20 10:23:26.185: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$LeaveMatchResult: an exception occurred.
03-20 10:23:26.185: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$LoadMatchesResult: an exception occurred.
03-20 10:23:26.185: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$LoadMatchResult: an exception occurred.
03-20 10:23:26.185: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/turnbased/TurnBasedMultiplayer$UpdateMatchResult: an exception occurred.
03-20 10:23:26.185: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Quest: an exception occurred.
03-20 10:23:26.186: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/QuestBuffer: an exception occurred.
03-20 10:23:26.186: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Quests: an exception occurred.
03-20 10:23:26.186: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Milestone: an exception occurred.
03-20 10:23:26.186: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Quests$LoadQuestsResult: an exception occurred.
03-20 10:23:26.186: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Quests$AcceptQuestResult: an exception occurred.
03-20 10:23:26.187: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/quest/Quests$ClaimMilestoneResult: an exception occurred.
03-20 10:23:26.187: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/Snapshot: an exception occurred.
03-20 10:23:26.187: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/SnapshotContents: an exception occurred.
03-20 10:23:26.187: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/SnapshotMetadata: an exception occurred.
03-20 10:23:26.187: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/SnapshotMetadataBuffer: an exception occurred.
03-20 10:23:26.188: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/Snapshots: an exception occurred.
03-20 10:23:26.188: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/Snapshots$CommitSnapshotResult: an exception occurred.
03-20 10:23:26.188: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/Snapshots$LoadSnapshotsResult: an exception occurred.
03-20 10:23:26.188: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/Snapshots$OpenSnapshotResult: an exception occurred.
03-20 10:23:26.188: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/SnapshotMetadataChange: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/snapshot/SnapshotMetadataChange$Builder: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/stats/PlayerStats: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/stats/Stats: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/stats/Stats$LoadPlayerStatsResult: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/RealTimeMessageReceivedListener: an exception occurred.
03-20 10:23:26.189: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/RealTimeMultiplayer: an exception occurred.
03-20 10:23:26.190: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/RealTimeMessage: an exception occurred.
03-20 10:23:26.190: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/Room: an exception occurred.
03-20 10:23:26.190: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/RoomConfig: an exception occurred.
03-20 10:23:26.190: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/multiplayer/realtime/RoomConfig$Builder: an exception occurred.
03-20 10:23:26.190: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/nearby/Nearby: an exception occurred.
03-20 10:23:26.191: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/nearby/connection/AppIdentifier: an exception occurred.
03-20 10:23:26.191: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/nearby/connection/AppMetadata: an exception occurred.
03-20 10:23:26.191: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/nearby/connection/Connections: an exception occurred.
03-20 10:23:26.191: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/nearby/connection/Connections$StartAdvertisingResult: an exception occurred.
03-20 10:23:26.191: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/CaptureState: an exception occurred.
03-20 10:23:26.192: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/VideoCapabilities: an exception occurred.
03-20 10:23:26.192: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/Videos: an exception occurred.
03-20 10:23:26.192: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/Videos$CaptureAvailableResult: an exception occurred.
03-20 10:23:26.192: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/Videos$CaptureCapabilitiesResult: an exception occurred.
03-20 10:23:26.192: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/Videos$CaptureStateResult: an exception occurred.
03-20 10:23:26.193: E/GamesNativeSDK(25961): Can't register class com/google/android/gms/games/video/Videos$CaptureOverlayStateListener: an exception occurred.
03-20 10:23:26.250: I/Unity(25961): Building GPG services, implicitly attempts silent auth
03-20 10:23:26.250: I/Unity(25961):
03-20 10:23:26.250: I/Unity(25961): (Filename: ./artifacts/generated/common/runtime/DebugBindings.gen.cpp Line: 51)
03-20 10:23:26.253: E/GamesNativeSDK(25961): Could not register one or more required Java classes.
03-20 10:23:26.266: E/Unity(25961): InvalidOperationException: There was an error creating a GameServices object. Check for log errors from GamesNativeSDK
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.Native.PInvoke.GameServicesBuilder.Build (GooglePlayGames.Native.PInvoke.PlatformConfiguration configRef) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.Native.NativeClient.InitializeGameServices () [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.Native.NativeClient.Authenticate (System.Action`2 callback, Boolean silent) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.PlayGamesPlatform.Authenticate (System.Action`2 callback, Boolean silent) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.PlayGamesPlatform.Authenticate (System.Action`1 callback, Boolean silent) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.PlayGamesPlatform.Authenticate (System.Action`1 callback) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGames.PlayGamesLocalUser.Authenticate (System.Action`1 callback) [0x00000] in :0
03-20 10:23:26.266: E/Unity(25961): at GooglePlayGame`<issue_comment>username_1: For anyone coming here for a solution, I fixed it by adding this rule to my ProGuard file: `-keep class com.google.android.gms.** { *; }`
Upvotes: 1 <issue_comment>username_2: I've been hitting this issue recently (August 2021) after trying to integrate the most recent version of Google Play Games Services into an Android app. I was getting:
```
E/GamesNativeSDK: Can't register class com/google/android/gms/games/multiplayer/*: an exception occurred.
E/GamesNativeSDK: Can't register class com/google/android/gms/nearby/connection/*: an exception occurred.
E/GamesNativeSDK: Could not register one or more required Java classes.
```
There were 25 different classes for the first `*` (all of which were deprecated earlier by Google and removed in the latest release) and 5 for the second. The latter 5 were fixed by adding:
`implementation 'com.google.android.gms:play-services-nearby:18.0.0'`
to my module's build.gradle dependencies, but for the former 25 I had to downgrade the version of Play Games Services to the last version in which those 25 classes were still present:
`implementation 'com.google.android.gms:play-services-games:20.0.0'`
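For reference, a minimal sketch of the resulting module-level build.gradle dependencies block (just the two artifacts discussed here; everything else omitted):
```
dependencies {
    // 20.0.0 is the last release that still ships the removed multiplayer classes
    implementation 'com.google.android.gms:play-services-games:20.0.0'
    // provides the missing nearby-connection classes
    implementation 'com.google.android.gms:play-services-nearby:18.0.0'
}
```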
It looks like a bug in GPGS 21.0.0.
Upvotes: 0 |
2018/03/20 | 555 | 2,162 | <issue_start>username_0: I am a newbie. I want to remove the selected item from the `Spinner` and also add a new item to it. How can I do that? What I am trying is:
```
adapter = ArrayAdapter.createFromResource(this,R.array.slot , android.R.layout.simple_spinner_item);
adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
slotTime.setAdapter(adapter);
slotTime.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
    @Override
    public void onItemSelected(AdapterView<?> adapterView, View view, int i, long l) {
        selectedTime = adapterView.getSelectedItem().toString();
        adapter.remove((String) slotTime.getSelectedItem());
        adapter.notifyDataSetChanged();
    }

    @Override
    public void onNothingSelected(AdapterView<?> adapterView) {
    }
});
```
I get the following error:
```
java.lang.UnsupportedOperationException
```
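For context, `ArrayAdapter.createFromResource()` returns an adapter backed by a fixed-size array, which is why `remove()` throws `UnsupportedOperationException`. A minimal sketch of a mutable alternative, assuming the same `R.array.slot` resource and `slotTime` spinner:
```
// Copy the string-array resource into a mutable list so add()/remove() work.
List<String> slots = new ArrayList<>(
        Arrays.asList(getResources().getStringArray(R.array.slot)));
ArrayAdapter<String> adapter = new ArrayAdapter<>(
        this, android.R.layout.simple_spinner_item, slots);
adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
slotTime.setAdapter(adapter);
// adapter.remove(...) and adapter.add(...) no longer throw
```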
Can anyone kindly help me overcome this problem? |
2018/03/20 | 622 | 2,520 | <issue_start>username_0: I am using Google Maps and drawing polylines on it by passing an array of coordinates on each location update. After plotting the polyline I first remove the previous one and then plot a new polyline based on the updated coordinates list. The problem is that Google Maps plots a separate polyline from the starting point to the end point on each location update.
I know the mistake is somewhere in the order in which I call things. Can anybody help me figure this out?
This is the method in which I am updating the camera focus and plotting the polylines.
```
private void moveCamera(Location location) {
if (location != null) {
if (polyline != null) {
polyline.remove();
}
mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(
new LatLng(location.getLatitude(), location.getLongitude()), 16));
coordList.add(new LatLng(location.getLatitude(), location.getLongitude()));
// Create polyline options with existing LatLng ArrayList
polylineOptions.addAll(coordList);
polylineOptions
.width(5)
.color(Color.RED);
// Adding multiple points in map using polyline and arraylist
polyline = mMap.addPolyline(polylineOptions);
}
}
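
    // ------------------------------------------------------------------
    // Illustrative sketch (editor's note, not part of the original code):
    // polylineOptions is a reused field, so addAll(coordList) appends the
    // whole list again on every update, which can produce the extra
    // start-to-end line described above. Rebuilding the options on each
    // call avoids that:
    //
    //     PolylineOptions options = new PolylineOptions()
    //             .addAll(coordList)
    //             .width(5)
    //             .color(Color.RED);
    //     polyline = mMap.addPolyline(options);
    // ------------------------------------------------------------------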
```
 |
2018/03/20 | 981 | 2,935 | <issue_start>username_0: I am facing this error while making a request to fetch JSON from an API.
I can get JSON data using the '/v1/articles' path.
```
conn = http.client.HTTPSConnection("api.xxxx.com.tr")
headers = {
'accept': "application/json",
'apikey': "<KEY>"
}
filter = "daily"
conn.request("GET", "/v1/articles", headers=headers)
reader = codecs.getreader("utf-8")
res = conn.getresponse()
data = json.load(reader(res))
json.dumps(data)
return data
```
But I am getting a JSONDecodeError if I set a filter. Code:
```
conn = http.client.HTTPSConnection("api.xxxx.com.tr")
headers = {
'accept': "application/json",
'apikey': "<KEY>"
}
conn.request("GET", "/v1/articles?$filter=Path eq '/daily/'", headers=headers)
reader = codecs.getreader("utf-8")
res = conn.getresponse()
data = json.load(reader(res))
json.dumps(data)
return data
```
I tried the same filter using Postman with no error, and I can get the JSON data.
JSON data returned from Postman:
```
[
{
"Id": "40778196",
"ContentType": "Article",
"CreatedDate": "2018-03-20T08:28:05.385Z",
"Description": "İspanya'da 2016 yılında çalınan lüks otomobil, şasi numarası değiştirilerek Bulgaristan üzerinden getirildiği Türkiye'de bulundu.",
"Files": [
{
"FileUrl": "http://i.xxxx.com/i/xxxx/98/620x0/5ab0c6a9c9de3d18a866eb54.jpg",
"Metadata": {
"Title": "",
"Description": ""
}
}
],
"ModifiedDate": "2018-03-20T08:32:12.001Z",
"Path": "/gundem/",
"StartDate": "2018-03-20T08:32:12.001Z",
"Tags": [
"ispanya",
"Araç",
"Hırsız",
"Dolandırıcı"
],
"Title": "İspanya'da çalınan lüks araç Türkiye'de bulundu!",
"Url": "http://www.xxxx.com.tr/gundem/ispanyada-calinan-luks-arac-turkiyede-bulundu-40778196"
}
]
```
I cannot figure out the problem. It would be great if anyone could help me with this issue. Thank you.<issue_comment>username_1: The problem is in the following line:
```
data = json.load(reader(res))
```
When your response is not a JSON string, a `JSONDecodeError` occurs, so add additional logic to check whether the response is empty or a JSON string. As a first step, print `reader(res)` and see what is returned.
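A minimal sketch of such a check, reusing `res` from the code above:
```
raw = res.read().decode("utf-8")
print(repr(raw))           # inspect what the API actually returned
if raw.strip():            # only parse a non-empty body
    data = json.loads(raw)
else:
    data = None
```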
Upvotes: 2 <issue_comment>username_2: I finally figured out the problem! Using the `requests` library solved my problem; now I can filter the API request.
```
data = requests.get('https://api.xxxxx.com.tr/v1/articles', headers =
headers, params={"$filter":"Path eq '/xxxxxx/'"}).json()
```
I am leaving this answer here for anyone else who may need this solution in the future.
Thanks for all your suggestions.
Upvotes: 3 [selected_answer] |
2018/03/20 | 854 | 2,942 | <issue_start>username_0: I am trying to take a MySQL dump with the command:
```
mysqldump -u xxxx -p dbxxx > xxxx270613.sql
```
What is the command to take a mysqldump with UTF-8?<issue_comment>username_1: Hi, please try the following.
```none
mysqldump -u [username] -p[password] --default-character-set=utf8 -N --routines --skip-triggers --databases [database_name] > [dump_file.sql]
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: `--default-character-set=utf8` is the option you are looking for; it can be used together with these others:
```
mysqldump --events \
--routines \
--triggers \
--add-drop-database \
--compress \
--hex-blob \
--opt \
--skip-comments \
--single-transaction \
--skip-set-charset \
--default-character-set=utf8 \
--databases dbname > my.dump
```
Also, check `--hex-blob`: it dumps binary strings in hexadecimal format, which makes the dump more portable and helps guarantee that the import works.
>
> The `--databases` option causes all names on the command line to be treated as database names. Without this option, `mysqldump` treats the first name as a database name and those following as table names.
>
>
> With `--all-databases` or `--databases`, `mysqldump` writes `CREATE DATABASE` and `USE` statements prior to the dump output for each database. This ensures that when the dump file is reloaded, it creates each database if it does not exist and makes it the default database so database contents are loaded into the same database from which they came. If you want to cause the dump file to force a drop of each database before recreating it, use the --add-drop-database option as well. In this case, `mysqldump` writes a `DROP DATABASE` statement preceding each `CREATE DATABASE` statement.
>
>
>
This helps to restore using:
```
# mysql < dump.sql
```
Instead of:
```
# mysql dbname < dump.sql
```
Upvotes: 4 <issue_comment>username_3: I had the problem that, even with the UTF-8 flags applied when creating the dump, I could not avoid broken characters when importing a dump created from a DB with many latin1 text columns.
Some googling and especially [this site](https://www.whitesmith.co/blog/latin1-to-utf8/) helped me to finally figure it out.
1. Run mysqldump with the **--skip-set-charset --default-character-set=latin1** flags,
to avoid MySQL attempting a reconversion and setting a charset.
2. Fix the dump by replacing the charset strings using sed in the terminal:
sed -i 's/latin1_swedish_ci/utf8mb4/g' mysqlfile.sql
sed -i 's/latin1/utf8mb4/g' mysqlfile.sql
To make sure you don't miss anything, you can run grep -i 'latin1' mysqlfile.sql before step 2 and then come up with more sed commands. Introduction to sed [here](https://www.cyberciti.biz/faq/how-to-use-sed-to-find-and-replace-text-in-files-in-linux-unix-shell/)
3. Create a clean DB:
CREATE DATABASE mydatabase CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;
4. Apply the fixed dump.
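Put together, the whole latin1-to-utf8mb4 round trip looks roughly like this (user, mydatabase and mysqlfile.sql are placeholders):
```
mysqldump -u user -p --skip-set-charset --default-character-set=latin1 mydatabase > mysqlfile.sql
sed -i 's/latin1_swedish_ci/utf8mb4/g' mysqlfile.sql
sed -i 's/latin1/utf8mb4/g' mysqlfile.sql
grep -i 'latin1' mysqlfile.sql    # check for leftovers before importing
mysql -u user -p -e "CREATE DATABASE mydatabase CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;"
mysql -u user -p mydatabase < mysqlfile.sql
```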
Upvotes: 4 |
2018/03/20 | 345 | 1,315 | <issue_start>username_0: Is there an update all command for xcodebuild command line for a svn project?
I am able to perform clean build and install and launch the app.
Is there an Xcode command-line build action to perform an update-all, so as to get the latest changes made in the source code updated in my local workspace from SVN?<issue_comment>username_1: Go to your project directory from the command line and type the command below:
```
svn update
```
This command will fetch all the changes made in the SVN server repository.
Upvotes: 1 [selected_answer]<issue_comment>username_2: You should check the command list using `svn help`.
Run `svn help` in your terminal (macOS app) and see the available list of commands. (The command list depends on the type of **SVN** you've installed on your system. In general all types of SVN have the same commands, but it's better to check the commands available for your SVN.)
If you've installed [`Apache - Subversion`](http://subversion.apache.org/), then here are all available commands for it.
[](https://i.stack.imgur.com/Sz86F.png)
To update your svn files: Run command: `svn update` or `svn up`
[](https://i.stack.imgur.com/zPx0h.png)
Upvotes: 1 |
2018/03/20 | 1,506 | 5,678 | <issue_start>username_0: I'm wondering how to do the following:
So, I have here an example table:
```
ID: Name: Occupation: Startdate: Enddate:
1 John Journalist 01/01/2000 01/01/2000
2 John Baker 01/01/2002 01/01/2004
3 John Butcher 01/01/2004 (null)
4 Mark Baker 05/03/2000 (null)
5 Petrus Lawyer 01/01/2001 01/01/2002
6 Petrus Baker 01/01/2002 (null)
7 Andre Journalist 01/01/1999 01/01/2000
8 Andre Baker 01/01/2000 (null)
```
So, here's what I want: I want to find the names of all the persons who have first been a journalist and then switched to being a baker. So, I don't want to find people who were just bakers, nor do I wanna find people who have first been a journalist, then a baker, and then went on to being a butcher. So, basically, I want the query to return the two records of Andre.
Is there a way to do this?
EDIT: I should mention that this DB is not to be edited, so no inserts or anything that would alter the DB in any way; what I want is as simple a SELECT query as possible, if possible.<issue_comment>username_1: This uses a trick of your data, which is that the `enddate` of the previous occupation matches the `startdate` of the next one:
```
with candidates as (
select name, enddate from your_table
where occupation = 'Journalist'
intersect
select name, startdate from your_table
where occupation = 'Baker'
and enddate is null -- this is current record
)
select name
from candidates
/
```
This won't work for people who started as journalists, became butchers, then became bakers. But according to your comment that's not something you'd want to do.
---
In real life bounded date ranges commonly don't overlap. That is, the `startdate` should be `enddate+1`. After all, people generally don't start a new job at lunchtime!
Upvotes: 1 <issue_comment>username_2: ```
select *
from
(
select tab.*,
count(*) over (partition by Name) as cnt,
first_value(Occupation) over (partition by Name order by StartDate) as occ#1,
max(Occupation) over (partition by Name) as occ#2 -- simpler than LAST_VALUE
from tab
) st
where cnt = 2 -- exactly 2 rows
and occ#1 ='Journalist' -- 1st occupation
and occ#2 = 'Baker' -- 2nd occupation
```
Upvotes: 1 <issue_comment>username_3: Check this:
```
WITH tbl
AS (SELECT '1' id,
'John' name,
'Journalist' occupation,
'01/01/2000' startdate,
'01/01/2000' enddate
FROM DUAL
UNION ALL
SELECT '2', 'John', 'Baker', '01/01/2002', '01/01/2004' FROM DUAL
UNION ALL
SELECT '3',
'John',
'Butcher',
'01/01/2004',
NULL
FROM DUAL
UNION ALL
SELECT '4',
'Mark',
'Baker',
'05/03/2000',
NULL
FROM DUAL
UNION ALL
SELECT '5', 'Petrus', 'Lawyer', '01/01/2001', '01/01/2002' FROM DUAL
UNION ALL
SELECT '6',
'Petrus',
'Baker',
'01/01/2002',
NULL
FROM DUAL
UNION ALL
SELECT '7',
'Andre',
'Journalist',
'01/01/1999',
'01/01/2000'
FROM DUAL
UNION ALL
SELECT '8',
'Andre',
'Baker',
'01/01/2000',
NULL
FROM DUAL)
SELECT id,
name,
occupation,
startdate,
enddate
FROM (SELECT t.*,
ROW_NUMBER () OVER (PARTITION BY name ORDER BY startdate) rn
FROM tbl t) mn_tbl
WHERE name IN
(SELECT name
FROM (SELECT t.*,
ROW_NUMBER ()
OVER (PARTITION BY name ORDER BY startdate)
rn
FROM tbl t)
WHERE rn = 1 AND occupation = 'Journalist') -- First occupation was Journalist
AND EXISTS
(SELECT name
FROM (SELECT t.*,
ROW_NUMBER ()
OVER (PARTITION BY name ORDER BY startdate)
rn
FROM tbl t)
WHERE rn = 2 AND name = mn_tbl.name AND occupation = 'Baker') -- Second occupation was Baker
AND NOT EXISTS
(SELECT name
FROM (SELECT t.*,
ROW_NUMBER ()
OVER (PARTITION BY name ORDER BY startdate)
rn
FROM tbl t)
WHERE rn > 2 AND name = mn_tbl.name) -- No more occupations
```
Upvotes: 0 <issue_comment>username_4: This is not pretty and it cannot be extended beyond two occupations... but it works :)
Counting distinct will get rid of all people who had more than 2 careers. Then the ugly trick is comparing the startdate of the journalist career to that of the baker.
```
select name, count(*)
from candidates
group
by name
having count(distinct occupation) = 2 -- Only had two occupations
and min(case when occupation = 'Journalist' then startdate end) /* First Journalist */
< min(case when occupation = 'Baker' then startdate end) /* Then baker */
```
Upvotes: 0 |
2018/03/20 | 1,202 | 4,883 | <issue_start>username_0: Till now, what I know from my previous usage is that
```
git checkout -b branchName
```
creates a new branch and switches to branchName.
The new component, origin/master, is the part I have no clue about.
Note: while solving a merge conflict, GitHub suggested the following:
```
git checkout -b master origin/master
```
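For orientation (this is standard Git semantics): `origin/master` names a remote-tracking branch, written as remote/branch, that is, the local copy of the `master` branch on the remote named `origin`. A quick illustration:
```
# create a local branch "master" starting from, and tracking, origin/master
git checkout -b master origin/master

# list local branches together with the remote branches they track
git branch -vv
```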
Can anyone explain what is the role of this argument and what the '/' does there? |
2018/03/20 | 940 | 3,812 | <issue_start>username_0: I need to upgrade an application from spring-boot-1.2.5.RELEASE to spring-boot-2.0.0.RELEASE.
I have the following code:
```
@Configuration
@ComponentScan
@EnableAutoConfiguration(exclude = {HibernateJpaAutoConfiguration.class, RedisAutoConfiguration.class})
public class NiceBootApplicationWithoutDB extends AbstractBootApplication {
public static final String APPLICATION_CONTEXT_XML = "classpath:/META-INF/application-context-nodb.xml";
public static void main(String[] args) {
SpringApplication.run(APPLICATION_CONTEXT_XML, getFullArgList(args));
}
}
```
The overloads of `SpringApplication.run(APPLICATION_CONTEXT_XML, getFullArgList(args))` are:
```
/**
* Static helper that can be used to run a {@link SpringApplication} from the
* specified source using default settings.
* @param source the source to load
* @param args the application arguments (usually passed from a Java main method)
* @return the running {@link ApplicationContext}
*/
public static ConfigurableApplicationContext run(Object source, String... args) {
return run(new Object[] { source }, args);
}
/**
* Static helper that can be used to run a {@link SpringApplication} from the
* specified sources using default settings and user supplied arguments.
* @param sources the sources to load
* @param args the application arguments (usually passed from a Java main method)
* @return the running {@link ApplicationContext}
*/
public static ConfigurableApplicationContext run(Object[] sources, String[] args) {
return new SpringApplication(sources).run(args);
}
```
Neither of these overloads is present in spring-boot-2.0.0.RELEASE.
**My question is:** how do I upgrade the above code?<issue_comment>username_1: You can use the `@ImportResource` annotation to import an XML configuration:
```
@Configuration
@ComponentScan
@EnableAutoConfiguration(exclude = {HibernateJpaAutoConfiguration.class, RedisAutoConfiguration.class})
@ImportResource(APPLICATION_CONTEXT_XML)
public class NiceBootApplicationWithoutDB extends AbstractBootApplication {
public static final String APPLICATION_CONTEXT_XML = "classpath:/META-INF/application-context-nodb.xml";
public static void main(String[] args) {
SpringApplication.run(NiceBootApplicationWithoutDB.class, getFullArgList(args));
}
}
```
Upvotes: 1 <issue_comment>username_2: You are right: the API of the `SpringApplication` class in version 2 of Spring Boot doesn't provide an equivalent.
So there is no direct way to provide an XML Spring configuration file.
According to this [answer](https://stackoverflow.com/questions/31677553/where-do-i-put-my-xml-beans-in-a-spring-boot-application), you could annotate your Spring Boot class with [`@ImportResource`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/ImportResource.html).
```
@ImportResource("classpath:/META-INF/application-context-nodb.xml")
```
It works like `@Import`, except that it imports XML Spring configuration files instead of class files.
**Javadoc information** :
>
> Indicates one or more resources containing bean definitions to import.
>
>
> Like `@Import`, this annotation provides functionality similar to the
> element in Spring XML. It is typically used when designing
> `@Configuration` classes to be bootstrapped by an
> `AnnotationConfigApplicationContext`, but where some XML functionality
> such as namespaces is still necessary.
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can add a configuration class like this, which will take your Spring XML configuration into consideration :)
```
@Configuration
@ImportResource({"classpath*:applicationContext.xml"})
public class XmlConfiguration {
}
```
Upvotes: 0 |
2018/03/20 | 1,142 | 3,193 | <issue_start>username_0: I need a regular expression or scripting code to select all the text between two outer brackets.
Example of message body: some text(text here(possible text)text(possible text(more text)))end text
The result that I want: (text here(possible text)text(possible text(more text)))
The second example:
>
> (<NAME>)(Smith)(ti,ab(((Abbott near/10 (assay\* OR test\* OR analy\*
> OR *array*)) OR (Abbott p/1 Point P/1 Care) OR ARCHITECT OR (CELL p/0
> DYN)) OR ((Alere near/10 (assay\* OR test\* OR analy\* OR *array*)) OR
> (Alere NEAR/5 (Triage P/1 System)) OR INRatio OR Afinion) OR
> ((Beckman\* p/1 Coulter near/10 (assay\* OR test\* OR analy\* OR *array*))
> OR ((Beckman\* p/0 Coulter) near/2 AU????) OR (UniCel\* P/1 DxC) OR
> (UniCel\* p/1 DxI) OR ( Beckman\* near/5 Access) OR (Access\* p/1
> Systeme) OR (CytoFLEX OR (cyto p/0 flex)) OR (UniCel\* p/1 DxH) OR
> ((Coulter\* p/1 LH) OR Coulter*LH)) OR ((Ortho p/0 Clinical P/1
> Diagnostics) OR VITROS OR (vitros* p/1 System\*) OR (VITROS\* p/1 ECiQ)
> OR ORTHOTM OR (orthotm p/1 VISION) OR (ORTHO p/1 AutoVue\*)) OR
> ((Instrumentation p/0 Laboratories) OR HemosIL OR ACLTOP OR (ACL p/0
> ELITE) OR (GEM\* P/1 Premier) OR GEM*OPL) OR ((Radiometer near/10
> (assay* OR test\* OR analy\* OR *array*)) OR (AQT?? p/0 FLEX) OR (ABL??
> p/0 FLEX) OR HemoCue\*) OR ((Nova p/0 Biomedical) OR StatStrip OR (STAT
> p/0 PROFILE\*) OR ((Nova p/0 Biomedical) near/1 Prime) OR STATPROFILE\*)
> OR (((Siemens p/0 Healthcare) near/10 (assay\* OR test\* OR analy\* OR
> *array*)) OR (ADVIA p/0 Centaur) OR (Dimension p/0 Vista) OR RAPIDPOINT))) and (ud(>20170101)) (see attachment)
>
>
>
the same thing I want to extract the text from "()" (<NAME>)(Smith)(ti,ab(((... my output jack <NAME> ti,ab(........)
.)
```
var messageBody = message.getPlainBody();
var ssFile = DriveApp.getFileById(id);
DriveApp.getFolderById(folder.getId()).addFile(ssFile);
var ss = SpreadsheetApp.open(ssFile);
var sheet = ss.getSheets()[0];
sheet.insertColumnAfter(sheet.getLastColumn());
SpreadsheetApp.flush();
var sheet = ss.getSheets()[0];
var range = sheet.getRange(1, 1, sheet.getLastRow(), sheet.getLastColumn() + 1)
var values = range.getValues();
values[0][sheet.getLastColumn()] = "Search Strategy";
for (var i = 1; i < values.length; i++) {
var y = messageBody.match(/\(.*\)/ig); // my regexp to extract the text between ()
if (y)
values[i][values[i].length - 1] = y.toString();
}
```
I've been trying for hours; I tried a lot of examples but failed.
Any help will be gratefully received.
Thanks a lot.<issue_comment>username_1: Simply [match](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/String/match) everything between the two brackets:
```js
const str = 'some text(text here(possible text)text(possible text(more text)))end text';
const match = str.match(/\((.*)\)/);
console.log(match[0])
```
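A brief note on why this works: `.*` is greedy, so `\(` anchors at the first `(` and `\)` at the last `)`, which is exactly the outermost pair. For instance:
```js
const s = 'a(b(c)d(e(f)))g';
console.log(s.match(/\((.*)\)/)[0]); // -> (b(c)d(e(f)))
```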
Upvotes: 1 <issue_comment>username_2: ```js
var string = "some text(text here(possible text)text(possible text(more text)))end text";
console.log(string.match(/.*?(\(.*\))[^\)]*/)[1]);
```
Upvotes: 0 |
2018/03/20 | 779 | 2,356 | <issue_start>username_0: I am trying to use TensorFlow image retraining.
<https://www.tensorflow.org/tutorials/image_retraining>
I train like this and it is OK:
```
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/image_retraining/retrain.py --image_dir D:/dev/detect_objects/flower_photos --bottleneck_dir D:/dev/detect_objects/tensorflow-master/retrain/bottleneck --architecture mobilenet_0.25_128 --output_graph D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --output_labels D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --saved_model_dir D:/dev/detect_objects/tensorflow-master/retrain/saved_model_dir --how_many_training_steps 100
```
When I predict a new image like:
```
D:\dev\Anaconda\python D:/dev/detect_objects/tensorflow-master/tensorflow/examples/label_image/label_image.py --graph=D:/dev/detect_objects/tensorflow-master/retrain/output_graph/output.pb --labels=D:/dev/detect_objects/tensorflow-master/retrain/output_labels/labels.txt --image=D:/dev/detect_objects/flower_photos/daisy/21652746_cc379e0eea_m.jpg
```
It gives the error:
```
KeyError: "The name 'import/Mul' refers to an Operation not in the graph."
```
label_image.py content:
```
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
```
What is the problem here?<issue_comment>username_1: Change this:
```
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
#input_layer = "input"
#output_layer = "InceptionV3/Predictions/Reshape_1"
input_layer = "Mul"
output_layer = "final_result"
```
to this:
```
input_height = 128
input_width = 128
input_mean = 0
input_std = 128
input_layer = "input"
output_layer = "final_result"
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: >
> If there is no node in the graph called "import/Mul" and we don't know
> what the graph is or how it was produced, there's little chance that
> anyone will be able to guess the right answer.
>
>
>
You might try printing the list of operations of your graph using `graph.get_operations()` and attempting to locate an appropriate-sounding node (try the first one that is printed)
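As an illustration, a minimal sketch of that, assuming TensorFlow 1.x and the retrained graph saved as output.pb:
```
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("output.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="import")

for op in graph.get_operations():
    print(op.name)   # look for the real input/output node names here
```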
Upvotes: 0 |
2018/03/20 | 369 | 1,243 | <issue_start>username_0: In the (Ruby) documentation of Pact, there is the possibility to add a Provider base-state in the provider states. I'm using Pact.Net and use ProviderStateMiddleware, but I can't figure out how to set up the base-state with this implementation. Is it possible to do this and/or does anyone have any experience setting this up?
Thanks in advance! |
2018/03/20 | 926 | 3,803 | <issue_start>username_0: Hello I'm trying to define relationships in my migrations
I'm using ON DELETE RESTRICT to prevent deletion of a parent record when a child is present, but it's not working. For example, I have this events table (parent) that has editions (child). I'm using `event_id` in the editions table
with `onDelete('restrict')`, and I have `event_id` in my editions table.
It should restrict me from deleting from the events table as long as the record has a child record in the editions table, right? But it's not.
Here are the migrations of both tables
Events (parent)
```
<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreateEventsTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('events', function (Blueprint $table) {
//master table
$table->increments('event_id');
$table->string('name');
$table->text('full_name');
$table->text('description');
$table->tinyInteger('status');
$table->integer('created_by');
$table->integer('updated_by');
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('events');
}
}
```
Editions (Child)
```
<?php
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;
class CreateEditionsTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('editions', function (Blueprint $table) {
$table->increments('edition_id');
$table->integer('event_id')->unsigned();
$table->string('name');
$table->dateTime('start')->nullable();
$table->dateTime('end')->nullable();
$table->enum('stage', ['Archived', 'Cancelled', 'Closed', 'Live', 'On-site', 'Pre-event', 'Sold out'])->nullable()->default('Pre-event');
$table->tinyInteger('status');
$table->integer('created_by');
$table->integer('updated_by');
$table->timestamps();
});
Schema::table('editions', function($table) {
$table->foreign('event_id')
->references('event_id')->on('events')
->onDelete('restrict')->onUpdate('restrict');
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::dropIfExists('editions');
}
}
```<issue_comment>username_1: One thing I see missing from your foreign key definition is adding an index on it; that is a requirement for foreign keys, and it might be what's causing issues for you.
Try changing
```
$table->integer('event_id')->unsigned();
```
to
```
$table->integer('event_id')->unsigned()->index();
```
Also, you can just add the foreign key definition immediately after the column definition, no need to put it in a different `Schema::table()` block.
Upvotes: 3 [selected_answer]<issue_comment>username_2: According to this [thread](https://github.com/michaeldyrynda/laravel-cascade-soft-deletes/issues/3) :
>
> If you are using the SoftDeletes trait, then calling the delete()
> method on your model will only update the deleted\_at field in your
> database, and the onDelete constraint will not be triggered, given
> that it is triggered at the database level i.e. when a DELETE query is
> executed.
>
>
>
So make sure that you use a real `DELETE` and not `SoftDeletes`; otherwise you can enforce the check manually.
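To illustrate the difference, a sketch assuming an `Event` Eloquent model that uses the `SoftDeletes` trait:
```
$event = Event::find($id);

// With SoftDeletes, delete() only sets deleted_at, so the
// database-level RESTRICT constraint never fires.
$event->delete();

// forceDelete() issues a real DELETE; if child editions still
// reference the event, the database rejects it with a
// foreign key constraint violation.
$event->forceDelete();
```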
Upvotes: 2 |
2018/03/20 | 591 | 1,643 | <issue_start>username_0: When I run the following code I get a KeyError: ('a', 'occurred at index a'). How can I apply this function, or something similar, over the DataFrame without encountering this issue?
Running python3.6, pandas v0.22.0
```
import numpy as np
import pandas as pd
def add(a, b):
return a + b
df = pd.DataFrame(np.random.randn(3, 3),
columns = ['a', 'b', 'c'])
df.apply(lambda x: add(x['a'], x['c']))
```<issue_comment>username_1: I think you need the parameter `axis=1` to process by rows in [`apply`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html):
>
> **axis**: {0 or 'index', 1 or 'columns'}, default 0
>
>
> **0** or **index**: apply function to each column
>
> **1** or **columns**: apply function to each row
>
>
>
```
df = df.apply(lambda x: add(x['a'], x['c']), axis=1)
print (df)
0 -0.802652
1 0.145142
2 -1.160743
dtype: float64
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You don't even need apply, you can directly add the columns. The output will be a series either way:
```
df = df['a'] + df['c']
```
for example:
```
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]})
df = df['a'] + df['c']
print(df)
# 0 6
# 1 8
# dtype: int64
```
Upvotes: 0 <issue_comment>username_3: You can try this:
```
import numpy as np
import pandas as pd
def add(df):
return df.a + df.b
df = pd.DataFrame(np.random.randn(3, 3),
columns = ['a', 'b', 'c'])
df.apply(add, axis =1)
```
where, of course, you can substitute any function that takes the columns of df as inputs.
Upvotes: 0 |
2018/03/20 | 856 | 3,065 | <issue_start>username_0: I am new to Node.js, and I would like to recursively find the closest package.json; that is, keep searching until I hit a package.json.
My folder tree:
```
root/
-contarats/
-proto/
some.proto
-package.json
"script": {
"contracts": "generate-some-contracts contracts/proto contracts",
}
const input = process.argv[2]
const settings = require(path.resolve(input, 'package.json'))
```<issue_comment>username_1: Are you looking for a way to iterate through directories? If so, here's a synchronous function that would do that:
```
const fs = require('fs')
const path = require('path')

function search_sync(dir) {
    var results = []
    var list = fs.readdirSync(dir)
    list.forEach(function(file) {
        file = path.resolve(dir, file)
        // keep only the last path segment as the file name
        var filename = file.split(path.sep)
        filename = filename[filename.length - 1]
        var stat = fs.statSync(file)
        if (stat && stat.isDirectory()) results = results.concat(search_sync(file))
        else if (filename == 'package.json') results.push(file)
    })
    return results
}
```
That will return an array of any files that are named package.json with their full file path. EG:
```
search_sync('./')
[C:\Users\User\MyNodeProject\package.json,
C:\Users\User\MyNodeProject\npm\someDependency\package.json,
C:\Users\User\MyNodeProject\npm\someOtherDependency\package.json]
```
Personally, I'd then break each line by their '\' character and see which one is closer to my root folder
Upvotes: 2 <issue_comment>username_2: Looking at your tree of directories, the `package.json` file is not in `contracts/proto`, but in `contracts`. (I assume that `contaracts` is a typo.) Changing the first argument on the command line should help:
```
generate-some-contracts contracts contracts
```
Nevertheless, you ask about the recursive search for the nearest `package.json`. NPM does it, when looking for the package root. It starts in the current directory and then follows the ancestors, until it finds a `package.json`. A function reading and parsing that `package.json`, similarly to `require`, could look like this:
```js
const { readFile } = require('fs/promises')
const { join, resolve } = require('path')
async function loadPackageJson(cwd) {
const startDir = cwd || process.env.INIT_CWD || process.cwd()
let dir = startDir, prevDir
do {
try {
const path = join(dir, 'package.json')
const content = await readFile(path, 'utf8')
return JSON.parse(content)
} catch (err) {
if (err.code !== 'ENOENT') throw err
}
prevDir = dir
dir = resolve(dir, '..')
} while (prevDir !== dir)
throw new Error(`package.json not found in ${startDir} and its ancestors`)
}
```
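A possible usage sketch for the function above:
```js
loadPackageJson(process.cwd())
  .then(pkg => console.log(pkg.name, pkg.version))
  .catch(err => console.error(err.message))
```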
Upvotes: 0 <issue_comment>username_3: ```
const fs = require('fs');
const path = require('path');

function findParent(dir) {
if (fs.existsSync(path.join(dir, 'package.json'))) {
return dir;
}
const parentDir = path.resolve(dir, '..');
if (dir === parentDir) {
return null;
}
return findParent(parentDir);
}
const parentDir = findParent(__dirname);
const filePath = path.normalize(`${parentDir}/package.json`);
const settings = require(filePath);
```
Upvotes: 0 |
2018/03/20 | 1,143 | 3,743 | <issue_start>username_0: I have a filemaker database and I want to show records based on the following criteria:
Show all records in which either the owner or the user is "John" and the place is "London", "Paris" or "Amsterdam"
The first part is easy:
```
Enter Find Mode [Pause:Off]
Set Field [mydb::Owner; "John"]
New Record/Request
Set Field [mydb::User"; "John"]
Perform Find[]
```
But what about the second part? So far I have not been able to implement this. I could of course go ahead and loop through the records found and omit the ones I don't want, but there should be a better way to do this.<issue_comment>username_1: There is. You can use the Constrain Found Set after you perform your initial find.
```
Enter Find Mode [Pause:Off]
Set Field [mydb::Owner; "John"]
New Record/Request
Set Field [mydb::User"; "John"]
Perform Find[]
#Constrain the found set
Enter Find Mode [Pause:Off]
Set Field [mydb::Place; "London"]
New Record/Request
Set Field [mydb::Place; "Paris"]
New Record/Request
Set Field [mydb::Place; "Amsterdam"]
Constrain Found Set []
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: One straightforward way to create FileMaker searches is to rephrase your search-requirements in a list like this:
```
(O and L)
or
(O and P)
or
(O and A)
or
(U and L)
or
(U and P)
or
(U and A)
but not
(B and C)
```
In other words:
1. For *each case* that you want to find, you write down the criteria separated by 'and' in one line. You can add parentheses, which are optional, but they do make it easier to read & understand the logic correctly
2. Then you separate each line with an 'or'
3. And if you have cases that you want to *exclude* from the results you write 'but not' instead of 'or', and - very importantly - you add the exclusions at the END of the list!
In your example, you could write:
```
(Owner="John" and Place="London")
or
(Owner="John" and Place="Paris")
or
(Owner="John" and Place="Amsterdam")
or
(User="John" and Place="London")
or
(User="John" and Place="Paris")
or
(User="John" and Place="Amsterdam")
```
When you have this written down, all you then have to do is:
1. Replace each criteria with the equivalent Set Field step (and otherwise ignore the 'and's)
2. Replace 'or' with a New Record/Request step
3. Replace 'but not' with the two steps New Record/Request, Omit Record
4. Put all that in an Enter Find Mode [Pause:Off] / Perform Find[] sandwich, and you are ready to run!
The example code then looks like this:
```
Enter Find Mode [Pause:Off]
Set Field [mydb::Owner ; "John"]
Set Field [mydb::Place ; "London"]
New Record/Request
Set Field [mydb::Owner ; "John"]
Set Field [mydb::Place ; "Paris"]
New Record/Request
Set Field [mydb::Owner ; "John"]
Set Field [mydb::Place ; "Amsterdam"]
New Record/Request
Set Field [mydb::User ; "John"]
Set Field [mydb::Place ; "London"]
New Record/Request
Set Field [mydb::User ; "John"]
Set Field [mydb::Place ; "Paris"]
New Record/Request
Set Field [mydb::User ; "John"]
Set Field [mydb::Place ; "Amsterdam"]
Perform Find[]
```
Whether you use this multiple-request method or username_1's constrain method is a question of performance & the quantity of combinations you have got.
For the sake of completeness, here is an example of 'but not' - with a Brexit theme:
Say you want to find FileMaker programmers in continental Europe; you may have the following criteria:
```
(Job="FileMaker" and Place="Europe")
but not
(Land="UK")
```
Using the above method this would then be implemented as:
```
Enter Find Mode [Pause:Off]
Set Field [mydb::Job ; "FileMaker"]
Set Field [mydb::Place ; "Europe"]
New Record/Request
Omit Record
Set Field [mydb::Land ; "UK]
Perform Find[]
```
Happy FileMaking!
username_2
Upvotes: 1 |
2018/03/20 | 2,275 | 10,467 | <issue_start>username_0: In my Android project, I created a generic RecyclerView Adapter class & ViewHolder class as below.
Adapter class:
```
public class BaseRecyclerViewAdapter extends RecyclerView.Adapter<BaseViewHolder> {
private ItemClickListener itemClickListener;
private List<? extends Object> objectArrayList;
private int layout;
private BaseViewHolder baseViewHolder;
public BaseRecyclerViewAdapter(int layout, ItemClickListener itemClickListener, List<? extends Object> objectArrayList) {
this.layout = layout;
this.itemClickListener = itemClickListener;
this.objectArrayList = objectArrayList;
}
public BaseRecyclerViewAdapter(BaseViewHolder baseViewHolder, int layout,
ItemClickListener itemClickListener, List<? extends Object> objectArrayList) {
this.baseViewHolder = baseViewHolder;
this.layout = layout;
this.itemClickListener = itemClickListener;
this.objectArrayList = objectArrayList;
}
@NonNull
@Override
public BaseViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext()).inflate(layout, parent, false);
return new BaseViewHolder(itemView, itemClickListener);
}
@Override
public void onBindViewHolder(@NonNull BaseViewHolder holder, int position) {
}
@Override
public int getItemCount() {
return objectArrayList.size();
}
}
```
ViewHolder class
```
public class BaseViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener, View.OnLongClickListener {
private CardView cardView;
private AppCompatImageView imgEdit;
private AppCompatImageView imgDelete;
private ItemClickListener itemClickListener;
public BaseViewHolder(View itemView, ItemClickListener itemClickListener) {
super(itemView);
cardView = itemView.findViewById(R.id.cardView);
imgEdit = itemView.findViewById(R.id.imgEdit);
imgDelete = itemView.findViewById(R.id.imgDelete);
imgDelete.setOnClickListener(this);
cardView.setOnClickListener(this);
cardView.setOnLongClickListener(this);
this.itemClickListener = itemClickListener;
}
@Override
public void onClick(View view) {
if (view.getId() == cardView.getId()) {
itemClickListener.onClick(view, getLayoutPosition(), ConstantCodes.ACTION_CLICK);
} else if (view.getId() == imgDelete.getId()) {
itemClickListener.onClick(view, getLayoutPosition(), ConstantCodes.ACTION_DELETE);
} else if (view.getId() == imgEdit.getId()) {
itemClickListener.onClick(view, getLayoutPosition(), ConstantCodes.ACTION_EDIT);
}
}
@Override
public boolean onLongClick(View view) {
itemClickListener.onLongClick(view, getLayoutPosition());
return false;
}
}
```
Here is the way I implement the above adapter & viewholder in my activity class
```
private class DispatchViewHolder extends BaseViewHolder {
AppCompatTextView txtInvoiceNo;
AppCompatTextView txtVehicleNo;
AppCompatTextView txtPartyName;
AppCompatTextView txtNoOfBags;
AppCompatTextView txtMaterialType;
AppCompatTextView txtWeight;
DispatchViewHolder(View itemView, ItemClickListener itemClickListener) {
super(itemView, itemClickListener);
}
}
private class DispatchMaterialAdapter extends BaseRecyclerViewAdapter {
ItemClickListener itemClickListener;
List<? extends Object> objectArrayList;
DispatchMaterialAdapter(ItemClickListener itemClickListener, List<? extends Object> objectArrayList) {
super(R.layout.dispatch_material_row, itemClickListener, objectArrayList);
this.itemClickListener = itemClickListener;
this.objectArrayList = objectArrayList;
}
@NonNull
@Override
public DispatchViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.dispatch_material_row, parent, false);
return new DispatchViewHolder(itemView, itemClickListener);
}
@Override
public void onBindViewHolder(@NonNull BaseViewHolder holder, int position) {
try {
DispatchViewHolder dispatchViewHolder = (DispatchViewHolder) holder;
DispatchMaterialMd dispatchMaterialMd = (DispatchMaterialMd) objectArrayList.get(position);
dispatchViewHolder.txtInvoiceNo.setText(dispatchMaterialMd.getInvoiceNo());
dispatchViewHolder.txtVehicleNo.setText(dispatchMaterialMd.getVehicleNo());
dispatchViewHolder.txtPartyName.setText(dispatchMaterialMd.getPartyName());
dispatchViewHolder.txtNoOfBags.setText(dispatchMaterialMd.getNoOfBags());
dispatchViewHolder.txtMaterialType.setText(dispatchMaterialMd.getMaterialType());
dispatchViewHolder.txtWeight.setText(dispatchMaterialMd.getWeight());
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public int getItemCount() {
return dispatchMaterialMds.size();
}
}
```
I have data in my objectArrayList. I can print it to the console, but nothing shows up in the RecyclerView; it only displays empty TextViews.
This is how I set up the RecyclerView:
```
layoutManager = new LinearLayoutManager(this);
dispatchMaterialAdapter =
new DispatchMaterialAdapter(itemClickListener, dispatchMaterialMds);
recycleListDetail.setLayoutManager(layoutManager);
recycleListDetail.setAdapter(dispatchMaterialAdapter);
```
dispatchMaterialMds is an ArrayList of the model class, filled with data from the database.
Can you please help me understand why my data is not getting displayed?<issue_comment>username_1: Please try this:
```
@Override
public int getItemCount() {
return objectArrayList.size();
}
```
Upvotes: -1 <issue_comment>username_2: Make the change in the adapter setup code as below:
```
recycleListDetail.setLayoutManager(new LinearLayoutManager(this));
dispatchMaterialAdapter =
new DispatchMaterialAdapter(itemClickListener, dispatchMaterialMds);
recycleListDetail.setLayoutManager(layoutManager);
recycleListDetail.setAdapter(dispatchMaterialAdapter);
dispatchMaterialAdapter.notifyDataSetChanged();
```
Also check your adapter's onBindViewHolder: get the ArrayList value there, and check that the ArrayList type you pass to the adapter is the same as the one you read from.
Upvotes: 0 <issue_comment>username_2: Based on the discussion above, I made a working RecyclerView demo using a simple string array.
Adapter class:
```
public class RecyclerViewAdpater extends RecyclerView.Adapter<RecyclerViewAdpater.ItemViewHolder> {
List<String> mStringList = new ArrayList<>(); // here you can pass any POJO class object
Context mContext;
public RecyclerViewAdpater(List<String> mStringList, Context mContext) {
this.mStringList = mStringList;
this.mContext = mContext;
}
@Override
public ItemViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.activity_main, parent, false);
return new ItemViewHolder(view);
}
@Override
public void onBindViewHolder(ItemViewHolder holder, int position) {
String data=mStringList.get(position); // if you pass object of class then create that class object.
holder.textView.setText(data);
}
@Override
public int getItemCount() {
return mStringList.size();
}
public class ItemViewHolder extends RecyclerView.ViewHolder {
TextView textView;
public ItemViewHolder(View itemView) {
super(itemView);
textView=itemView.findViewById(R.id.timeData);
}
}
}
```
Then in the activity, put code like the below in the onCreate method:
```
recyclerViewAdpater = new RecyclerViewAdpater(mStringList, this);
mRvData.setLayoutManager(new LinearLayoutManager(this));
mRvData.setAdapter(recyclerViewAdpater);
recyclerViewAdpater.notifyDataSetChanged();
```
Upvotes: 0 <issue_comment>username_3: You made a BaseViewHolder that does the view setup, then redid it in DispatchViewHolder. I'm not sure if this will solve the problem, but there was repeated code.
```
private class DispatchViewHolder extends BaseViewHolder {
AppCompatTextView txtInvoiceNo;
AppCompatTextView txtVehicleNo;
AppCompatTextView txtPartyName;
AppCompatTextView txtNoOfBags;
AppCompatTextView txtMaterialType;
AppCompatTextView txtWeight;
DispatchViewHolder(View itemView, ItemClickListener itemClickListener) {
super(itemView, itemClickListener);
}
}
private class DispatchMaterialAdapter extends BaseRecyclerViewAdapter {
ItemClickListener itemClickListener;
List extends Object objectArrayList;
DispatchMaterialAdapter(ItemClickListener itemClickListener, List extends Object objectArrayList) {
super(R.layout.dispatch_material_row, itemClickListener, objectArrayList);
this.itemClickListener = itemClickListener;
this.objectArrayList = objectArrayList;
}
@Override
public DispatchViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.dispatch_material_row, parent, false);
return new DispatchViewHolder(itemView, itemClickListener);
}
@Override
public void onBindViewHolder(@NonNull BaseViewHolder holder, int position) {
try {
DispatchViewHolder dispatchViewHolder = (DispatchViewHolder) holder;
DispatchMaterialMd dispatchMaterialMd = (DispatchMaterialMd) objectArrayList.get(position);
dispatchViewHolder.txtInvoiceNo.setText(dispatchMaterialMd.getInvoiceNo());
dispatchViewHolder.txtVehicleNo.setText(dispatchMaterialMd.getVehicleNo());
dispatchViewHolder.txtPartyName.setText(dispatchMaterialMd.getPartyName());
dispatchViewHolder.txtNoOfBags.setText(dispatchMaterialMd.getNoOfBags());
dispatchViewHolder.txtMaterialType.setText(dispatchMaterialMd.getMaterialType());
dispatchViewHolder.txtWeight.setText(dispatchMaterialMd.getWeight());
} catch (Exception e) {
e.printStackTrace();
}
}
}
```
In the DispatchViewHolder:
```
AppCompatTextView txtInvoiceNo;
AppCompatTextView txtVehicleNo;
AppCompatTextView txtPartyName;
AppCompatTextView txtNoOfBags;
AppCompatTextView txtMaterialType;
AppCompatTextView txtWeight;
DispatchViewHolder(View itemView, ItemClickListener itemClickListener) {
super(itemView, itemClickListener);
// Initialization of the new views?
}
```
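For what it's worth, a minimal sketch of that initialization; the `R.id` names here are assumptions, so use whatever ids `dispatch_material_row` actually declares:
```
DispatchViewHolder(View itemView, ItemClickListener itemClickListener) {
    super(itemView, itemClickListener);
    // Without these lookups the fields stay null, setText() throws a
    // NullPointerException, and the catch block in onBindViewHolder
    // swallows it -- which matches the "empty rows" symptom.
    txtInvoiceNo = itemView.findViewById(R.id.txtInvoiceNo);
    txtVehicleNo = itemView.findViewById(R.id.txtVehicleNo);
    txtPartyName = itemView.findViewById(R.id.txtPartyName);
    txtNoOfBags = itemView.findViewById(R.id.txtNoOfBags);
    txtMaterialType = itemView.findViewById(R.id.txtMaterialType);
    txtWeight = itemView.findViewById(R.id.txtWeight);
}
```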
Upvotes: 2 [selected_answer] |
2018/03/20 | 1,077 | 3,521 | <issue_start>username_0: This is my code; we have a database called "our_new_database".
The connection is fine, as are the HTML form and credentials, and I still cannot insert information into the database.
The table is created; I can see the columns and rows in XAMPP / phpMyAdmin.
The only error I'm getting is the last echo of the if/else statement: "Could not register".
I've tried everything I can and still cannot make this insertion work normally.
Can someone advise me?
```
<?php
include "app".DIRECTORY_SEPARATOR."config.php";
include "app".DIRECTORY_SEPARATOR."db-connection.php";
include "app".DIRECTORY_SEPARATOR."form.php";
$foo_connection = db_connect($host, $user_name, $user_password, $dbname);
$sql = "CREATE TABLE user_info(
user_name_one VARCHAR(30) NOT NULL,
user_name_two VARCHAR(30) NOT NULL,
user_email VARCHAR(70) NOT NULL UNIQUE
)";
if(mysqli_query($foo_connection, $sql)){
echo "Table created successfully";
}
else {
echo "Error creating table - table already exist.".mysqli_connect_error($foo_connection);
}
if($_SERVER['REQUEST_METHOD'] == 'POST'){
$user_name_one = $_POST["userOne"];
$user_name_two = $_POST["userTwo"];
$user_email = $_POST["userEmail"];
$sql = "INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('".$_POST['userOne']."',('".$_POST['userTwo']."',('".$_POST['userEmail']."')";
if(mysqli_query($foo_connection,$sql))
{
echo "Successfully Registered";
}
else
{
echo "Could not register";
}
}
$foo_connection->close();
```<issue_comment>username_1: ```
$sql = "INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('".$_POST['userOne']."','".$_POST['userTwo']."','".$_POST['userEmail']."')";
```
Upvotes: 0 <issue_comment>username_2: I reckon your parentheses on this line:
```
$sql = "INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('".$_POST['userOne']."',('".$_POST['userTwo']."',('".$_POST['userEmail']."')";
```
Do not match, it should look like something like this:
```
$sql = "INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('".$_POST['userOne']."','".$_POST['userTwo']."','".$_POST['userEmail']."')";
```
Cause for know your query is:
```
"INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('value',('value1',('value2')"
```
As said above you might use:
echo $foo\_connection->error
To see some errors displayed
Upvotes: 0 <issue_comment>username_3: You need to change
```
$sql = "INSERT INTO user_info (userOne,userTwo,userEmail) VALUES('".$_POST['userOne']."',('".$_POST['userTwo']."',('".$_POST['userEmail']."')";
```
To
```
$sql = "INSERT INTO `user_info`(`user_name_one`,`user_name_two`,`user_emai`l) VALUES ('$user_name_one','$user_name_two','$user_email')";
```
remember you should use prepared query
```
$sql= $foo_connection->prepare("INSERT INTO user_info
(user_name_one,user_name_two,user_email))
VALUES(?,?,?)");
$sql->bind_param('sss', $user_name_one, $user_name_two, $user_email );
$sql->execute();
```
Upvotes: 1 [selected_answer]<issue_comment>username_4: You should avoid the direct use of variables in SQL statements, instead, you should use parameterized queries.
This also should avoid the need to string concatenation and manipulation problems.
```
$stmt = $foo_connection->prepare("INSERT INTO user_info
(user_name_one,user_name_two,user_email))
VALUES(?,?,?)");
$stmt->bind_param('sss', $user_name_one, $user_name_two, $user_email );
$stmt->execute();
```
Upvotes: 3 |
2018/03/20 | 1,246 | 4,480 | <issue_start>username_0: I am trying to trigger a Dataflow pipeline from a Cloud Function which itself is triggered upon upload of a new file in a GCS bucket.
When I upload a file, the Cloud Function gets triggered properly but times out after a few seconds without any Dataflow job being triggered.
Below is my function code:
```js
const google = require('googleapis');
const projectId = "iot-fitness-198120";
exports.moveDataFromGCStoPubSub = function(event, callback) {
const file = event.data;
if (file.resourceState === 'exists' && file.name) {
google.auth.getApplicationDefault(function (err, authClient, projectId) {
if (err) {
throw err;
}
if (authClient.createScopedRequired && authClient.createScopedRequired()) {
authClient = authClient.createScoped([
'https://www.googleapis.com/auth/cloud-platform',
'https://www.googleapis.com/auth/userinfo.email'
]);
}
console.log("File exists and client function is authenticated");
console.log(file);
const dataflow = google.dataflow({ version: 'v1b3', auth: authClient });
console.log(`Incoming data: ${file.name}`);
dataflow.projects.templates.create({
projectId: projectId,
resource: {
parameters: {
inputFile: `gs://${file.bucket}/${file.name}`,
outputTopic: `projects/iot-fitness-198120/topics/MemberFitnessData`
},
jobName: 'CStoPubSub',
gcsPath: 'gs://dataflow-templates/latest/GCS_Text_to_Cloud_PubSub',
staginglocation: 'gs://fitnessanalytics-tmp/tmp'
}
}, function(err, response) {
if (err) {
console.error("problem running dataflow template, error was: ", err);
}
console.log("Dataflow template response: ", response);
callback();
});
});
}
};
```
The execution doesn't even log the following line, `console.log("File exists and client function is authenticated");`, which tells me it is not even getting that far.
Here's the log output during execution:
```
2018-03-20 04:56:43.283 GST  DataflowTriggeringFunction  52957909906492
Function execution took 60097 ms, finished with status: 'timeout'

2018-03-20 04:55:43.188 GST  DataflowTriggeringFunction  52957909906492
Function execution started
```
Any idea why it's not triggering the Dataflow job and yet not throwing an error message?<issue_comment>username_1: I guess your Cloud Function execution fails because the incoming event doesn't satisfy your if statement:
```
if (file.resourceState === 'exists' && file.name)
```
I had a similar issue when I started working on Cloud Functions. Modify your index.js file to use `var {google} = require('googleapis');` as provided in the solution [here](https://stackoverflow.com/questions/49348220/google-cloud-functions-cannot-read-property-getapplicationdefault)
Upvotes: 0 <issue_comment>username_2: I have finally modified the code. Got some help from GCP support. Below is the right syntax that works:
```
var {google} = require('googleapis');
exports.moveDataFromGCStoPubSub = (event, callback) => {
const file = event.data;
const context = event.context;
console.log(`Event ${context.eventId}`);
console.log(` Event Type: ${context.eventType}`);
console.log(` Bucket: ${file.bucket}`);
console.log(` File: ${file.name}`);
console.log(` Metageneration: ${file.metageneration}`);
console.log(` Created: ${file.timeCreated}`);
console.log(` Updated: ${file.updated}`);
google.auth.getApplicationDefault(function (err, authClient, projectId) {
if (err) {
throw err;
}
console.log(projectId);
const dataflow = google.dataflow({ version: 'v1b3', auth: authClient });
console.log(`gs://${file.bucket}/${file.name}`);
dataflow.projects.templates.create({
gcsPath: 'gs://dataflow-templates/latest/GCS_Text_to_Cloud_PubSub',
projectId: projectId,
resource: {
parameters: {
inputFilePattern: `gs://${file.bucket}/${file.name}`,
outputTopic: 'projects/iot-fitness-198120/topics/MemberFitnessData2'
},
environment: {
tempLocation: 'gs://fitnessanalytics-tmp/tmp'
},
jobName: 'CStoPubSub',
//gcsPath: 'gs://dataflow-templates/latest/GCS_Text_to_Cloud_PubSub',
}
}, function(err, response) {
if (err) {
console.error("problem running dataflow template, error was: ", err);
}
console.log("Dataflow template response: ", response);
callback();
});
});
callback();
};
```
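For completeness, a hypothetical deploy command for wiring this function to the bucket trigger (bucket name and runtime are placeholders, not taken from the answer):

```
gcloud functions deploy moveDataFromGCStoPubSub \
  --runtime nodejs8 \
  --trigger-resource YOUR_BUCKET_NAME \
  --trigger-event google.storage.object.finalize
```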
Upvotes: 2 |
2018/03/20 | 1,720 | 4,954 | <issue_start>username_0: I installed SSL on my hosting. After installing it, I tried to change the redirect in .htaccess to get https:// with the green padlock symbol, but it is not working. What I have tried is just forcing https in the .htaccess file, and it was not working. Below is the .htaccess code already present with my script. In it I have added `RewriteRule ^(.*)$ https://`, but it was not working.
```
RewriteEngine On
Options -Indexes
#RewriteBase /vrs7
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)/l([0-9]+)$ http://%{HTTP_HOST}/$1/l.$2 [R=301,L]
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)/n([0-9]+)$ http://%{HTTP_HOST}/$1/n.$2 [R=301,L]
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)/p([0-9]+)$ http://%{HTTP_HOST}/$1/p.$2 [R=301,L]
#Adds trailing slash
#RewriteCond %{REQUEST_FILENAME} !-f
#RewriteCond %{REQUEST_URI} !(\.[a-zA-Z0-9]{1,5}|/|#(.*))$
#RewriteRule ^(.*)$ $1/ [R=301,L]
#Remove trailing slash
#RewriteRule ^(.*)/$ $1 [R=301,L]
RewriteCond %{QUERY_STRING} ^(.*)?gclid=(.*) [OR]
RewriteCond %{QUERY_STRING} ^(.*)?utm_source=(.*) [OR]
RewriteCond %{QUERY_STRING} ^(.*)?fb_action_ids=(.*)
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)$ index.php?/$1 [L]
# Enforce www
RewriteCond %{HTTP_HOST} !^www\.
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !^/[0-9]+\..+\.cpaneldcv$
RewriteCond %{REQUEST_URI} !^/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteCond %{REQUEST_URI} !^/\.well-known/pki-validation/[A-F0-9]{32}\.txt(?:\ Comodo\ DCV)?$
RewriteRule ^(.*)$ index.php?/$1 [PT,QSA]
ErrorDocument 404 /404.shtml
```
I was trying this for a custom PHP website. It is a vacation rental script and has no option to add https:// in the admin panel, so I am trying to implement it with the help of the .htaccess file.
2018/03/20 | 798 | 2,796 | <issue_start>username_0: I want my total to update automatically when I change the quantity or price.
I tried to use this code, but nothing happens.
My products are: `this.products = this.ps.getProduct();`
```
this.form= this.fb.group({
'total': new FormControl('', [Validators.required, Validators.nullValidator]),
'products': this.fb.array([
]),
});
```
This is my function that calculates `Total = p.Unit_price * p.Quantity;`:
```
totalFunc() {
let Total = 0;
for (let p of this.products) {
Total = p.Unit_price * p.Quantity;
}
return Total;
}
```
Now I want to display this function's result in the HTML, and for this I use ng-change:
```
| | | |
```
2018/03/20 | 2,301 | 8,677 | <issue_start>username_0: I've followed the tutorial on the angular site (<https://angular.io/guide/http>) but I can't achieve what I want since I have an error that I don't understand.
I've put my text file in the assets directory and created a config.json file where I entered the code from the tutorial.
I get errors in my service.ts file and in my component.ts file as well.
Please help me understand my errors and the tutorial.
**Component.ts**
```
import { Component, OnInit } from '@angular/core';
import {Observable} from 'rxjs/Observable';
import {HttpClient, HttpErrorResponse, HttpResponse} from '@angular/common/http';
import {GeneralService} from '../general.service';
@Component({
selector: 'app-general',
templateUrl: './general.component.html',
styleUrls: ['./general.component.css']
})
export interface General {
generalUrl: string;
textfile: string;
}
export class GeneralComponent implements OnInit {
title = 'Introduction';
generalUrl = 'assets/project_description.txt';
general: General;
generalService: GeneralService;
constructor(private http: HttpClient) { }
ngOnInit() {
}
getGeneral() {
return this.http.get(this.generalUrl);
}
showGeneral() {
this.generalService.getGeneral()
.subscribe(data => this.general = {
generalUrl: data['generalUrl'],
textfile: data['textfile']
});
}
showGeneralResponse() {
this.generalService.getGeneralResponse()
.subscribe(resp => {
const keys = resp.headers.keys();
this.headers = keys.map(key =>
`${key}: ${resp.headers.get(key)}`);
this.config = {
});
}
}
```
In my showGeneralResponse() function, `headers` and `config` are unresolved variables.
**Service.ts**
```
import { Injectable } from '@angular/core';
import {General} from './general/general.component';
import {HttpResponse} from '@angular/common/http';
import {Observable} from 'rxjs/Observable';
@Injectable()
export class GeneralService {
constructor() { }
getGeneral() {
return this.http.get(this.generalUrl);
}
getGeneralResponse(): Observable<HttpResponse<General>> {
return this.http.get<General>(
this.generalUrl, { observe: 'response'}
);
}
}
```
In the getGeneral() and getGeneralResponse() functions, `http` and `generalUrl` are unresolved variables.
But in my config.json file I have no error
**config.json**
```
"generalUrl": "api/general",
"textfile": "assets/project_description.txt"
```
I would also like to know which function to put in my HTML file so my application displays my text file.<issue_comment>username_1: @<NAME>
1.- The JSON must be a valid JSON object; note the "{" and the "}":
```
{
generalUrl: "api/general",
textfile: "assets/project_description.txt"
}
```
2.- You have defined the variable "generalUrl" in your component, but not in the service.
3.- If you want to use a Service in a component, you must inject it in the constructor:
```
//In your component
//remove the line below
//generalService: GeneralService;
//And change the constructor
constructor(private http: HttpClient,private generalService:GeneralService) { }
```
4.- You must call the function this.showGeneral() in your ngOnInit(), or it never happens.
5.- Declare the exported interface before the component:
```
import { Component, OnInit } from '@angular/core';
...
//Move the export interface here
export interface General {
generalUrl: string;
textfile: string;
}
@Component({
selector: 'app-general',
templateUrl: './general.component.html',
styleUrls: ['./general.component.css']
})
export class GeneralComponent implements OnInit {
...
}
```
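For point 1, a corrected config.json using the keys from the question would look like:

```
{
  "generalUrl": "api/general",
  "textfile": "assets/project_description.txt"
}
```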
Upvotes: 1 <issue_comment>username_1: Lucie, I'm in a hurry (I suppose the problem is that Angular can't find the file data.json). Here is the easiest example I know:
```
//app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import {HttpClientModule} from '@angular/common/http';
import { AppComponent } from './app.component';
import { DataService } from './data.service';
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
HttpClientModule
],
providers: [DataService,],
bootstrap: [AppComponent]
})
export class AppModule { }
//DataService.ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
@Injectable()
export class DataService {
constructor(private http: HttpClient) { }
getData() {
return this.http.get('../assets/data.json')
}
}
//app.compontent.ts
import { Component, OnInit } from '@angular/core';
import {DataService} from './data.service'
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
data:any;
constructor(private dataService:DataService){}
ngOnInit(){
this.dataService.getData().subscribe(res => {
this.data=res;
})
}
}
//app.component.html
{{data |json}}
//data.json (in folder assets/)
[
{"key":"uno"},
{"key":"dos"},
{"key":"tres"},
{"key":"cuatro"},
{"key":"cinco"}
]
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a library which allows you to **use HttpClient with strongly-typed callbacks**.
The data and the error are available directly via these callbacks.
[A reason for existing](https://i.stack.imgur.com/XgO0e.png)
------------------------------------------------------------
When you use HttpClient with Observable, you have to use **.subscribe(x=>...)** in the rest of your code.
This is because **Observable<`HttpResponse`<`T`>>** is tied to **HttpResponse**.
This **tightly couples** the **http layer** with the **rest of your code**.
This library encapsulates the **.subscribe(x => ...)** part and exposes only the data and error through your Models.
**With strongly-typed callbacks, you only have to deal with your Models in the rest of your code.**
The library is called **angular-extended-http-client**.
[**angular-extended-http-client library on GitHub**](https://github.com/VeritasSoftware/angular-http-client-ext)
[**angular-extended-http-client library on NPM**](https://www.npmjs.com/package/angular-extended-http-client)
Very easy to use.
Sample usage
------------
The strongly-typed callbacks are
Success:
* **IObservable<`T`>**
* **IObservableHttpResponse**
* **IObservableHttpCustomResponse<`T`>**
Failure:
* **IObservableError<`TError`>**
* **IObservableHttpError**
* **IObservableHttpCustomError<`TError`>**
### Add package to your project and in your app module
```js
import { HttpClientExtModule } from 'angular-extended-http-client';
```
and in the @NgModule imports
```js
imports: [
.
.
.
HttpClientExtModule
],
```
### Your Models
```js
//Normal response returned by the API.
export class RacingResponse {
result: RacingItem[];
}
//Custom exception thrown by the API.
export class APIException {
className: string;
}
```
### Your Service
In your Service, you just create params with these callback types.
Then, pass them on to the **HttpClientExt**'s get method.
```js
import { Injectable, Inject } from '@angular/core'
import { RacingResponse, APIException } from '../models/models'
import { HttpClientExt, IObservable, IObservableError, ResponseType, ErrorType } from 'angular-extended-http-client';
.
.
@Injectable()
export class RacingService {
//Inject HttpClientExt component.
constructor(private client: HttpClientExt, @Inject(APP_CONFIG) private config: AppConfig) {
}
//Declare params of type IObservable and IObservableError.
//These are the success and failure callbacks.
//The success callback will return the response objects returned by the underlying HttpClient call.
//The failure callback will return the error objects returned by the underlying HttpClient call.
getRaceInfo(success: IObservable<RacingResponse>, failure?: IObservableError<APIException>) {
let url = this.config.apiEndpoint;
this.client.get(url, ResponseType.IObservable, success, ErrorType.IObservableError, failure);
}
}
```
### Your Component
In your Component, your Service is injected and the **getRaceInfo** API called as shown below.
```js
ngOnInit() {
this.service.getRaceInfo(response => this.result = response.result,
error => this.errorMsg = error.className);
}
```
Both, **response** and **error** returned in the callbacks are strongly typed. Eg. **response** is type **RacingResponse** and **error** is **APIException**.
You only deal with your Models in these strongly-typed callbacks.
Hence, The rest of your code only knows about your Models.
Also, you can still use the traditional route and return Observable<`HttpResponse<`T`>`> from Service API.
Upvotes: 1 |
2018/03/20 | 984 | 3,672 | <issue_start>username_0: This is my **SearchForm.js** class; the `experience` prop must be an array of values with `id` and `name`:
```
import React from 'react';
import ReactDOM from 'react-dom';
import axios from 'axios';
class SearchForm extends React.Component {
constructor(props) {
super(props)
this.state = {
position: '',
area: '',
date: '',
experience: {
type: Array,
default: () => []
}
}
}
componentDidMount() {
axios({
method: 'GET',
url: 'https://example.com/dictionaries/',
headers: {
'User-Agent': 'React App/1.0 (<EMAIL>)',
'HH-User-Agent': 'React App/1.0 (<EMAIL>)',
'Content-Type':'application/x-www-form-urlencoded',
}
})
.then(function (response) {
console.log(response.data.experience);
})
.catch(function (error) {
console.log(error);
});
}
render() {
return (
Experience
{/*
{this.props.experience.name}
*/}
)
}
}
export { SearchForm }
```
As a result I get:
[](https://i.stack.imgur.com/PGDsd.png)
How do I put these values from `response.data.experience` into the `experience` prop?
```
axios({
method: 'GET',
url: 'https://example.com/dictionaries/',
headers: {
'User-Agent': 'React App/1.0 (<EMAIL>)',
'HH-User-Agent': 'React App/1.0 (<EMAIL>)',
'Content-Type':'application/x-www-form-urlencoded',
}
}).then((response) => {
console.log(response.data.experience);
this.setState({experience: response.data.experience})
}).catch(function (error) {
console.log(error);
});
```
And you can loop through the `experience` array by using this:

```
{
  this.state.experience.map((value, index) =>
    {value.name}
  )
}
```
Upvotes: 2 <issue_comment>username_2: experience is in your state, so you have to use the setState method like :
```
this.setState({experience: response.data.experience})
```
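One assumption worth making explicit: this only works if the surrounding axios callback is an arrow function; a plain `function` callback rebinds `this` and the component's state won't update. A minimal sketch:

```
.then((response) => {
  // the arrow function keeps `this` bound to the component
  this.setState({ experience: response.data.experience });
})
```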
Upvotes: 1 <issue_comment>username_3: Does this solution work for you ? Let me know!
Edit: Added conditional rendering for when your `experience` is not yet defined.
```
import React from "react";
import ReactDOM from "react-dom";
import axios from "axios";
class SearchForm extends React.Component {
constructor(props) {
super(props);
this.state = {
position: "",
area: "",
date: "",
experience: {
type: Array,
default: () => [],
},
};
}
componentWillReceiveProps = NextProps => {
if (NextProps.experience !== this.props.experience) {
this.setState({ experience: NextProps.experience });
}
};
componentDidMount() {
axios({
method: "GET",
url: "https://example.com/dictionaries/",
headers: {
"User-Agent": "React App/1.0 (<EMAIL>)",
"HH-User-Agent": "React App/1.0 (<EMAIL>)",
"Content-Type": "application/x-www-form-urlencoded",
},
})
.then(response => {
console.log(response.data.experience);
this.setState({ experience: response.data.experience });
})
.catch(function(error) {
console.log(error);
});
}
render() {
return (
Experience
{this.state.experience.length > 0 &&
{this.state.experience.name}
}
);
}
}
```
Upvotes: 1 |
2018/03/20 | 722 | 2,687 | <issue_start>username_0: I'm following the doc (<https://dev.office.com/reference/add-ins/excel/excel-add-ins-reference-overview>) to build an Excel add-in. The add-in needs to populate an Excel table column with either a dropdown or a checkbox for the user to select actions to be done on the table row. I can't seem to find any API to insert a dropdown/checkbox in an Excel spreadsheet. Could someone advise how I could do that? Thanks!
2018/03/20 | 831 | 2,926 | <issue_start>username_0: So the question is this :
we have n classes (n intervals) with start time and finish time [si, fi], and we want to find the minimum number of classrooms with which we can satisfy all the classes (intervals) without any collision.
The book that I'm reading says we can solve this in O(n log n), but I cannot find any algorithm better than O(n^2).
It says we should sort them by starting time but doesn't give the rest of the solution. That doesn't make sense to me, because before giving each class a room, shouldn't we check all the other intervals to see if we have a collision or not? That makes it O(n^2), because for each interval we need to check all the other intervals.
Am I missing something?<issue_comment>username_1: You can sort the events (an event is either the start of a class or the end of a class) by time. This will take O(n log n).
Now, keep a stack of empty rooms and go through the events in order:
* for each start event take a room from the empty stack and allocate the class to it.
* for each end event put the corresponding room back to the empty stack.
This second phase can be completed in O(n).
By keeping track of the allocations and deallocations done you can easily find the number of needed rooms and the schedule.
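A minimal Python sketch of that idea (an addition, assuming intervals come as (start, finish) pairs and that a class ending at time t frees its room for a class starting at t):

```
def assign_rooms(intervals):
    events = []
    for i, (s, f) in enumerate(intervals):
        events.append((s, 1, i))    # start event
        events.append((f, -1, i))   # end event
    # ends sort before starts at equal times, so a freed room is reusable
    events.sort(key=lambda e: (e[0], e[1]))

    free_rooms = []   # stack of reusable room ids
    next_room = 0     # id for a brand-new room
    room_of = {}      # class index -> room id
    for _, kind, i in events:
        if kind == 1:
            if free_rooms:
                room_of[i] = free_rooms.pop()
            else:
                room_of[i] = next_room
                next_room += 1
        else:
            free_rooms.append(room_of[i])
    return room_of, next_room   # schedule and number of rooms used
```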
If you just need the number of needed rooms this can be simplified to just use a counter instead of the list of rooms. Add one for each start event and subtract 1 for each end event; keep track of the maximum.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **First step**: Store each class's starting and finishing points individually in an `actions` array. If the point is a starting point then the `type` of the `action` is `+1`; if it is the ending of a class, its `type` is `-1`.
**Second step**: Sort the `actions` array in ascending order by their time. If the times are equal then sort them by `type` in ascending order.
**Third step**: Set counter to 0, iterate through `actions` array, if it is starting type add 1 to counter, if it is finishing type subtract 1 from counter. Again, if times are equal execute finishing types first. Because you can use the same classroom as soon as the class at that room ends.
The maximum value that counter reaches is your answer.
Here is an implementation of the algorithm in python:
```
classess = [ [13, 15], [11, 13], [4, 7], [2, 4], [3, 6] ]
# construct a action list:
# action[0] -> time of action
# action[1] -> type of action (-1 for finish type, 1 for start type)
actions = []
for cla55 in classess:
actions.append([cla55[0], 1])
actions.append([cla55[1], -1])
actions.sort()
# [[2, 1], [3, 1], [4, -1], [4, 1], [6, -1], [7, -1], [11, 1], [13, -1], [13, 1], [15, -1]]
min_classrooms = 0
curr_classrooms = 0
for action in actions:
curr_classrooms += action[1]
if curr_classrooms > min_classrooms:
min_classrooms = curr_classrooms
print(min_classrooms)
```
Upvotes: 1 |
2018/03/20 | 701 | 2,386 | <issue_start>username_0: I just want to know if it is possible in SQL Server 2008 to know the last SP/query that generated the "begin transaction"
Thank you so much for your response.
2018/03/20 | 1,206 | 3,521 | <issue_start>username_0: I am working on a web application using CodeIgniter.
I have a problem:
when I request the controller like this
```
http://localhost:8080/Saffron/Product/137/test-text
```
everything is OK, but when I request the controller like this
```
http://localhost:8080/Saffron/Product/137/مهارت-های-آموزشی-در مطالعه
```
I get this error:
```
The requested URL /Saffron/Product/137/کتاب-غلبه-بر-اضطراب-در-Ù…ØÛŒØ·-های-آموزشی was not found on this server.
```
I use these .htaccess rules:
```
Options -Indexes -MultiViews +FollowSymLinks
RewriteEngine On
RewriteBase /Saffron/
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php? [L]
```
It works on the server without error; I have this problem only on WAMP localhost. Where is the problem?
UPDATE:
-------
I finally found it:

```
RewriteEngine On
RewriteBase /Saffron/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /Saffron/index.php [L]
```

This works fine.<issue_comment>username_1: Go to your config
and replace `$config['base_url'] = '';` with the code below; I hope it will help you:
```
if( (php_sapi_name() == 'cli') or defined('STDIN') )
{
$config['base_url'] = ((isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == "on") ? "https" : "http");
$config['base_url'] .= "://". (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '127.0.0.1');
}
else
{
$config['base_url'] = ((isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] == "on") ? "https" : "http");
$config['base_url'] .= "://". (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '127.0.0.1');
$config['base_url'] .= str_replace(basename($_SERVER['SCRIPT_NAME']),"",$_SERVER['SCRIPT_NAME']);
}
$config['secure_base_url'] = ((isset($_SERVER['SERVER_NAME'])) ? "https://".$_SERVER['SERVER_NAME'] : '');
$config['secure_base_url'] .= str_replace(basename($_SERVER['SCRIPT_NAME']),"",$_SERVER['SCRIPT_NAME']);
```
For htaccess
```
RewriteEngine On
RewriteBase /folder name of your webapp/
#Removes access to the system folder by users.
#Additionally this will allow you to create a System.php controller,
#previously this would not have been possible.
#'system' can be replaced if you have renamed your system folder.
RewriteCond %{REQUEST_URI} ^system.*
RewriteRule ^(.*)$ /index.php?/$1 [L]
#When your application folder isn't in the system folder
#This snippet prevents user access to the application folder
#Submitted by: Fabdrol
#Rename 'application' to your applications folder name.
RewriteCond %{REQUEST_URI} ^application.*
RewriteRule ^(.*)$ /index.php?/$1 [L]
#Checks to see if the user is attempting to access a valid file,
#such as an image or css document, if this isn't true it sends the
#request to index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?/$1 [L]
# If we don't have mod\_rewrite installed, all 404's
# can be sent to index.php, and everything works as normal.
# Submitted by: ElliotHaughin
ErrorDocument 404 /index.php
```
Upvotes: 0 <issue_comment>username_2: You can add something like this to your router file `application/config/routes.php`.
`$route['Product/(:num)/(:any)'] = 'Product/$1/$2';`
I am assuming Saffron is your `base_url` directory.
Check below for more information:
[CI URI Route documentation](https://www.codeigniter.com/user_guide/general/routing.html)
Upvotes: 2 [selected_answer] |
2018/03/20 | 862 | 3,233 | <issue_start>username_0: I am building a weather application with Ionic 3. My problem is that when the response comes back (and I know the names of its keys) and I call one of them, WebStorm (the IDE I am using) says:
>
> property 'current\_observation' does not exist on type object"
>
>
>
but the application is working when I try it on 'localhost:8100/ionic-lab'.
Until now, no problem; I assumed the problem was from the editor (because the app works and gives correct results), but when I tried to build the app to generate the APK, the Windows command line and Git Bash both complained about this error. The exact command causing me the trouble is: "ionic cordova build --release android".
Here is the call method:
```
getWeather(city, state) {
return this.http.get(this.url + '/' + state + '/' + city + '.json')
.map(res => res);
}
```
And here is the response (this is where the error about current_observation comes up):
```
this.weatherProvider.getWeather(
this.location.city,
this.location.state
).subscribe(weather => {
this.weather = weather.current_observation;
console.log(this.weather);
})
}).catch(()=> {
});
```
Any idea?
Thank you in advance.<issue_comment>username_1: You should also declare the type of the `weather` variable so that you don't get an error.
```
this.weatherProvider.getWeather(
this.location.city,
this.location.state
).subscribe((weather : Weather) => {
this.weather = weather.current_observation;
console.log(this.weather);
})
}).catch(()=> {
});
```
Upvotes: 1 <issue_comment>username_2: You can specify the type of data in the `Observable` returned. The following function states that it will return an `Observable` having data of type `Weather`.
So at the place where you are subscribing it will expect a value of type `Weather` and will check type accordingly.
```
getWeather(city, state): Observable<Weather> {
return this.http.get(this.url + '/' + state + '/' + city + '.json')
.map(res => res);
}
// place Weather in its own file and import in places where you use it.
class Weather {
current_observation: any;
...otherKeys
}
```
If you don't care about any of these, you can try marking the return type as `any`
```
getWeather(city, state): Observable<any> {
return this.http.get(this.url + '/' + state + '/' + city + '.json')
.map(res => res);
}
```
Also, make sure your project and your IDE are using the same version of Typescript.
Upvotes: 0 <issue_comment>username_3: **Try this way, it will do the trick**
```
this.weatherProvider.getWeather(
this.location.city,
this.location.state
).subscribe(weather => {
this.weather = weather["current_observation"];
console.log(this.weather);
}).catch(()=> {
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: After solving this bug, sometimes it still doesn't work in the IDE/editor.
So restart your computer, run `ionic serve` for your project again, and try this code.
```
this.weatherProvider.getWeather(this.location.city, this.location.state)
.subscribe((weather) =>{
this.weather = weather["current_observation"];
});
```
Upvotes: 1 |
2018/03/20 | 353 | 1,189 | <issue_start>username_0: I recently downloaded and set up SonarQube using these instructions:
<https://docs.sonarqube.org/display/SONAR/Get+Started+in+Two+Minutes>
I now want to open the Update Centre to install the C++ community plugin. However, I cannot find the Update Centre in the web interface. Can anyone please guide me on how to proceed with this?
Here is the screenshot from my administration (Administration > System tab):
[](https://i.stack.imgur.com/hCvBE.png)
Thank you in advance.<issue_comment>username_1: >
> `'Administration' (from top menu bar), ribbon 'system', 'update center'`
>
>
>
[](https://i.stack.imgur.com/T97DV.png)
I guess you will need "Administer System" permission in global security.
Note: the screenshot is from version 5.6.6, but it should be similar.
Upvotes: 0 <issue_comment>username_2: In 6.7 the 'Update Center' was renamed to 'Marketplace' and moved into the top level of the admin menu:
[](https://i.stack.imgur.com/wmzSh.png)
Upvotes: 2 |
2018/03/20 | 1,824 | 6,290 | <issue_start>username_0: I'm facing some problem with spritekit in swift.
I was following online tutorials closely (combining different tutorials into one project) and trying out the code when I realised that my SKSpriteNodes (my "player" and "enemy") ***sometimes*** go missing when I try it out on the simulator or my iPhone.
My situation is kinda similar to this user's problem [here](https://stackoverflow.com/questions/31442212/spritenode-shapes-change-size-and-disappear-after-returning-to-gamescene-swift), but I don't think my problem lies with the size.
Can anyone enlighten me? Thank you!
Here's my code.
```
var player : SKSpriteNode!
var backdrop : SKSpriteNode!
var gameTimer : Timer!
var possibleEnemies = ["enemy01", "enemy02", "enemy03"]
let bulletsCategory : UInt32 = 0x1 << 0
let enemyCategory : UInt32 = 0x1 << 1
override func didMove(to view: SKView) {
player = SKSpriteNode(imageNamed: "bird.png")
player.position = CGPoint(x: 0, y: (player.size.height / 2) )
self.addChild(player)
self.physicsWorld.gravity = CGVector(dx: 0, dy: 0)
self.physicsWorld.contactDelegate = self
self.anchorPoint = CGPoint (x: 0.5 , y: 0)
createBackdrop()
scoreLabel = SKLabelNode(text: "Score: 0")
scoreLabel.position = CGPoint(x: 260, y: self.frame.size.height - 90)
scoreLabel.fontName = "<NAME>"
scoreLabel.fontSize = 35
scoreLabel.fontColor = UIColor.gray
score = 0
self.addChild(scoreLabel)
gameTimer = Timer.scheduledTimer(timeInterval: 0.75, target: self, selector: #selector(addEnemies), userInfo: nil, repeats: true)
}
@objc func addEnemies() {
possibleEnemies = GKRandomSource.sharedRandom().arrayByShufflingObjects(in: possibleEnemies) as! [String]
let enemy = SKSpriteNode(imageNamed: possibleEnemies[0])
let randomEnemyPosition = GKRandomDistribution(lowestValue: -360, highestValue: 360)
let position = CGFloat(randomEnemyPosition.nextInt())
enemy.position = CGPoint(x: position, y: self.frame.size.height + enemy.size.height)
enemy.physicsBody = SKPhysicsBody(rectangleOf: enemy.size)
enemy.physicsBody?.isDynamic = true
enemy.physicsBody?.categoryBitMask = enemyCategory
enemy.physicsBody?.contactTestBitMask = bulletsCategory
enemy.physicsBody?.collisionBitMask = 0
self.addChild(enemy)
let animationDuration : TimeInterval = 6
var actionArray = [SKAction]()
actionArray.append(SKAction.move(to: CGPoint(x: position, y: -enemy.size.height), duration: animationDuration))
actionArray.append(SKAction.removeFromParent())
enemy.run(SKAction.sequence(actionArray))
}
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
fireBullets()
}
func fireBullets() {
self.run(SKAction.playSoundFileNamed("shoot.wav", waitForCompletion: false))
let bullets = SKSpriteNode(imageNamed: "bullet.png")
bullets.position = player.position
bullets.position.y += 5
bullets.physicsBody = SKPhysicsBody(rectangleOf: bullets.size)
bullets.physicsBody?.isDynamic = true
bullets.physicsBody?.categoryBitMask = bulletsCategory
bullets.physicsBody?.contactTestBitMask = enemyCategory
bullets.physicsBody?.collisionBitMask = 0
bullets.physicsBody?.usesPreciseCollisionDetection = true
self.addChild(bullets)
let animationDuration : TimeInterval = 0.3
var actionArray = [SKAction]()
actionArray.append(SKAction.move(to: CGPoint(x: player.position.x, y: self.frame.size.height + 10), duration: animationDuration))
actionArray.append(SKAction.removeFromParent())
bullets.run(SKAction.sequence(actionArray))
}
func didBegin(_ contact: SKPhysicsContact) {
var firstBody : SKPhysicsBody
var secondBody: SKPhysicsBody
if contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask {
firstBody = contact.bodyA
secondBody = contact.bodyB
} else {
firstBody = contact.bodyB
secondBody = contact.bodyA
}
if (firstBody.categoryBitMask & bulletsCategory) != 0 && (secondBody.categoryBitMask & enemyCategory) != 0 {
hitByBullets(bulletNode: firstBody.node as! SKSpriteNode, enemyNode: secondBody.node as! SKSpriteNode)
}
}
func hitByBullets (bulletNode: SKSpriteNode, enemyNode: SKSpriteNode) {
let shot = SKEmitterNode(fileNamed: "Magic01")!
shot.position = enemyNode.position
self.addChild(shot)
self.run(SKAction.playSoundFileNamed("shot.mp3", waitForCompletion: false))
bulletNode.removeFromParent()
enemyNode.removeFromParent()
self.run(SKAction.wait(forDuration: 2)) {
shot.removeFromParent()
}
score += 1
}
func touchDown(atPoint pos : CGPoint) {
player.position = pos
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
for t in touches { self.touchDown(atPoint: t.location(in: self)) }
}
override func update(_ currentTime: TimeInterval) {
// Called before each frame is rendered
moveBackdrop()
}
```<issue_comment>username_1: the issue with your sprites not showing up is that none of your objects have a zPosition set on them. You need to layer the objects as you expect them to show in the scene.
for example...
```
background.zPosition = 1
player.zPosition = 1
enemy.zPosition = 1
bullet.zPosition = 2
scoreLabel.zPosition = 100
```
In my opinion you shouldn't be using timers to generate your enemies. SpriteKit has its own timing functionality built into the update function, which you are already using to control the timing of the backgrounds.
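A minimal sketch of that idea (an addition, not from the tutorials; `lastSpawnTime` is a property you would add to the scene):

```
private var lastSpawnTime: TimeInterval = 0

override func update(_ currentTime: TimeInterval) {
    // spawn a new enemy roughly every 0.75 seconds, no Timer needed
    if currentTime - lastSpawnTime > 0.75 {
        lastSpawnTime = currentTime
        addEnemies()
    }
    moveBackdrop()
}
```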
You had waaaaay too much code in your question; look at how I've trimmed it down to only the code relevant to your question. Including all of your code actually makes it less likely that you will get the help or answers you need, because it is harder to go through all the code to figure out what is happening. Also, don't include so many blank lines in your code; scrolling through hundreds of lines, even if a lot of them are blank, is very tedious.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I didn't realise the importance of zPosition, since my items showed up perfectly on screen some of the time. I added the following in their respective places and they stopped disappearing intermittently.
```
player.zPosition = 3
scoreLabel.zPosition = 100
enemy.zPosition = 3
bullets.zPosition = 2
backdrop.zPosition = 1
shot.zPosition = 3
```
Upvotes: -1 |
2018/03/20 | 293 | 1,115 | <issue_start>username_0: How can I upload multiple images using a single file upload control in ASP.NET Web Forms?
I am able to upload a single image file using the file upload control, but I want to make it more dynamic and upload multiple images using one control.
Can anyone help me out with this?
Thank you.<issue_comment>username_1: You can't. It is strictly one file per control.
To upload multiple files using one submit button you would need to use Javascript to add more FileControls dynamically, like this (using jQuery):
```
$(document).ready(function () {
    $("#addAnotherFile").click(function () {
        // insert another file input after the existing one
        $("input[type='file']").after('<input type="file" />');
    });
});
```
In the submit button handler, you can then enumerate through the Request.Files collection to access your uploads:
```
for (int i = 0; i < Request.Files.Count; i++)
{
HttpPostedFile file = Request.Files[i];
if (file.ContentLength > 0)
{
// SaveAs needs an absolute path, so map the relative folder first
file.SaveAs(Path.Combine(Server.MapPath("Uploaded/Files/Path"), Path.GetFileName(file.FileName)));
}
}
```
Upvotes: 0 <issue_comment>username_2: You need to use the AllowMultiple attribute, like this:
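A minimal markup sketch (assuming ASP.NET 4.5 or later, where the FileUpload control supports multiple selection; the control ID is illustrative):

```
<asp:FileUpload ID="FileUpload1" runat="server" AllowMultiple="true" />
```

On the server side you can then read the selected files from `FileUpload1.PostedFiles`.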
Upvotes: 1 |
2018/03/20 | 1,283 | 5,003 | <issue_start>username_0: I followed <https://cloud.google.com/translate/docs/reference/libraries#client-libraries-usage-java> to get started with the Java client demo.
I have already set the **authentication JSON** file path in the environment variable `GOOGLE_APPLICATION_CREDENTIALS`. However, I get a TranslateException when I run the Java sample code.
```
Exception in thread "main" com.google.cloud.translate.TranslateException: The request is missing a valid API key.
at com.google.cloud.translate.spi.v2.HttpTranslateRpc.translate(HttpTranslateRpc.java:61)
at com.google.cloud.translate.spi.v2.HttpTranslateRpc.translate(HttpTranslateRpc.java:144)
at com.google.cloud.translate.TranslateImpl$4.call(TranslateImpl.java:113)
at com.google.cloud.translate.TranslateImpl$4.call(TranslateImpl.java:110)
Caused by:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403
Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "The request is missing a valid API key.",
"reason" : "forbidden"
} ],
"message" : "The request is missing a valid API key.",
"status" : "PERMISSION_DENIED"
}
```
The doc says that this JSON file contains the key information.
The sample code is shown below:
```
// Instantiates a client
Translate translate = TranslateOptions.getDefaultInstance().getService();
// The text to translate
String text = "Hello, world!";
// Translates some text into Russian
Translation translation =
translate.translate(
text,
TranslateOption.sourceLanguage("en"),
TranslateOption.targetLanguage("ru"));
System.out.printf("Text: %s%n", text);
System.out.printf("Translation: %s%n", translation.getTranslatedText());
```
I have no idea how to set the API key.
It still doesn't work after I set environment variables for the `key` and `credentials`.
[](https://i.stack.imgur.com/5O6fH.png)<issue_comment>username_1: To make authenticated requests to Google Translation, you must create a service object with credentials or use an API key. The simplest way to authenticate is to use [Application Default Credentials](https://cloud.google.com/docs/authentication/production). These credentials are automatically inferred from your environment, so you only need the following code to create your service object:
```
Translate translate = TranslateOptions.getDefaultInstance().getService();
```
I have personally never gotten that to work.
This code can be also used with an API key. By default, an API key is looked for in the `GOOGLE_API_KEY` environment variable. Once the API key is set, you can make API calls by invoking methods on the Translation service created via `TranslateOptions.getDefaultInstance().getService()`.
Sample project [here](https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-translate)
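A minimal sketch of setting that variable before running the sample (the variable name comes from the paragraph above; the shell command is illustrative):

```
# Unix-like shell: make the key visible to the Java process
export GOOGLE_API_KEY="your-api-key"
```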
Upvotes: 3 [selected_answer]<issue_comment>username_2: I was able to get google translate to work by running
"gcloud auth application-default login"
on the command prompt. This regenerated the credentials to the default location after asking you to authorize with your google account. See <https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-translate> for more details.
Upvotes: 2 <issue_comment>username_3: You can try authenticating with your service account JSON file like below;
it's pretty simple in Node:
```
// Instantiates a client
const translate = new Translate(
{
projectId: 'your project id', //eg my-project-0o0o0o0o'
keyFilename: 'path of your service account json file' //eg my-project-0fwewexyz.json
}
);
```
You can use <https://cloud.google.com/bigquery/docs/authentication/service-account-file> as a reference for Java.
Upvotes: 3 <issue_comment>username_4: Add
```
System.setProperty("GOOGLE_API_KEY", "your key here");
```
before
```
Translate translate = TranslateOptions.getDefaultInstance().getService();
```
Cheers :)
Upvotes: 2 <issue_comment>username_5: ```
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Requests from this Android client application are blocked.",
"reason" : "forbidden"
} ],
"message" : "Requests from this Android client application are blocked.",
"status" : "PERMISSION\_DENIED"
}
```
The following is the approach I follow in Android to resolve it. I tried with the private key, but it did not work for me, so I use the public key for this:

```
System.setProperty("GOOGLE_API_KEY", "Public key")
val translate = TranslateOptions.getDefaultInstance().service

translate?.let {
val translation = it.translate("Hola Mundo!",
Translate.TranslateOption.sourceLanguage("es"),
Translate.TranslateOption.targetLanguage("en"),
Translate.TranslateOption.model("nmt"));
val check = translation.translatedText
Log.e("inf",""+check)
}
```
Upvotes: 0 |
2018/03/20 | 794 | 3,334 | <issue_start>username_0: How do multi-core processors handle interrupts?
I know how single-core processors handle interrupts.
I also know of the different types of interrupts.
I want to know how multi-core processors handle hardware, program, CPU time sequence and input/output interrupts.
2018/03/20 | 928 | 3,708 | <issue_start>username_0: I am trying to select values based on the following CASE statement:
```
CASE
when ty.type ='Catalog'
and zs.name='Aries'
or zs.name='Leo'
AND ( CASE
when actual_finish_date is not null
then actual_finish_date
when updated_finish_date is not null
then updated_finish_date
else baseline_finish_date
END ) is not null
and visibility.ty_visibility !='Private'
then 1
else 0
END as PARTICIPANT,
```
Now the problem is that it's not checking the entire condition; it's stopping at the first line itself (ty.type = 'Catalog').
Even when ty_visibility is equal to 'Private', it selects the value 1 instead of 0.
Can someone point out where I went wrong?
2018/03/20 | 1,767 | 6,986 | <issue_start>username_0: I am trying to read data from firebase using angular 2 and typescript
my code
```
export class DashboardComponent implements OnInit {
  itemsRef: AngularFireList<any>;
  items: Observable<any[]>;

  constructor(afDatabase: AngularFireDatabase) {
    this.itemsRef = afDatabase.list('/user_orders/louro');
    this.items = this.itemsRef.valueChanges();
    this.items.subscribe(val_2 => {
      alert(val_2.keys());
      val_2.forEach(function (value2) {
        Object.keys(value2).forEach(function (k2) {
          // k2 is the key
          let count = Object.keys(value2[k2]).length;
          console.log("New order key " + k2);
          for (let i = 0; i < count; i++) {
            console.log("item " + i + " --> " + JSON.stringify(value2[k2][i]));
          }
        });
      });
    });
  }

  ngOnInit() {
  }
}
```
and val_2 only contains:
```
[
{
"-L7rtl2NesdOYVD4-bMs": [
{
"ads_show": false,
"brand": "",
"buttonLabel": "Add to cart",
"child": "fruits",
"decrn": "testing for demonstrate",
"key": <KEY>",
"mid": "fresh fruits",
"note": "",
"orderInfo": {
"message": "nil",
"methode": "cash on delivery",
"status": "nil",
"time": 1521356314040,
"time2": 1521356254115
},
"p_id": 73,
"p_name": "testing",
"position": 0,
"primary_key": "testinglouro",
"qty": {
"m_qty": 1,
"qty": 23,
"unite": "1kg",
"user_intput_qty": 2
},
"quantity_new": 2,
"querykey_shop_name_top": "louro_fruits",
"sellerName": "louro",
"sellonline": true,
"seo": {
"meta_descrption": "",
"meta_title": ""
},
"service": false,
"serviceMessage": "Please enter your complaint details",
"serviceTitle": "Service Requesting",
"shopname": "louro",
"shopview": false,
"summery": "nil",
"tags": "",
"top": "fruits",
"uid": "IG2SxH6Gcabr3QVLz9jE9Wwweh62",
"variants": [
{
"img_position": 0,
"prize": {
"mrp": 58,
"selling_prize": 45,
"tax": 0
},
"qty": {
"m_qty": 1,
"qty": 23,
"unite": "1kg",
"user_intput_qty": 2
},
"shipping": {
"minmumbuy": 0,
"s_cost": 0,
"s_dlts": ""
}
}
],
"variants_position": 0
}
]
}
]
```
And my database is [](https://i.stack.imgur.com/rrLuc.png)
I need this key "<KEY>".
How can I get that key?
I tried `val_2.key`,
but it shows the error "[ts] Property 'key' does not exist on type 'any[]'. Did you mean 'keys'?"
In Java I am using the code below and it works fine with `dataSnapshot1.getKey()`:
```
FirebaseDatabase database = FirebaseDatabase.getInstance();
DatabaseReference myRef = database.getReference(getResources().getString(R.string.user_orders)+"/"+
shop_name,getContext()));
myRef.addListenerForSingleValueEvent(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
// This method is called once with the initial value and again
// whenever data at this location is updated.
// MainAction.setDefaults("OpenCategories",dataSnapshot.toString(),getActivity());
if (dataSnapshot.hasChildren()) {
for (DataSnapshot dataSnapshot1 : dataSnapshot.getChildren()) {
orderDetails1 = null;
orderDetails1 = new OrderDetails();
if (dataSnapshot1.hasChildren()) {
int id = -1;
for (DataSnapshot dataSnapshot2 : dataSnapshot1.getChildren()) {
id = id + 1;
if (dataSnapshot2.hasChildren()) {
int productid = -1;
for (DataSnapshot dataSnapshot3 : dataSnapshot2.getChildren()) {
productid = productid + 1;
if (dataSnapshot3.hasChildren()) {
orderDetails1.productmillaList.add(dataSnapshot3.getValue(Productmilla.class));
}
orderDetails1.productmillaList.get(productid).setPosition(Integer.parseInt(dataSnapshot3.getKey()));
orderDetails1.productmillaList.get(productid).setId(dataSnapshot2.getKey());
// Log.d("key2",dataSnapshot2.getKey()+" "+productid);
}
}
//orderDetails1.productmillaList.get(id).setId(dataSnapshot2.getKey());
}
}
orderDetails1.key = dataSnapshot1.getKey();
orderDetails.add(orderDetails1);
}
getaddress(orderDetails);
} else mProgressBar.setVisibility(View.GONE);
}
@Override
public void onCancelled(DatabaseError error) {
// Failed to read value
mProgressBar.setVisibility(View.GONE);
Log.w("dd", "Failed to read value."+ error.getMessage());
}
});
```<issue_comment>username_1: In your JSON, `val` is an array.
```
Object.keys(val[0])
```
will give you the keys of the first element of that array, which is what you're after.
Upvotes: 0 <issue_comment>username_2: Update :
According to the updated response data, you should access using `Object.keys(val)[0]` which gives the value for key `"<KEY>"`, since it is the first key.
According to older response data :
Your val is an array.
This should work.
```
Object.keys(val[0]).forEach(function(k1) {
    console.log("Key first : " + k1);
});
```
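If you need the push key itself rather than just the values, a sketch using AngularFire's `snapshotChanges()` (assuming AngularFire2 v5, as used in the question):
```
// snapshotChanges() emits actions that carry both the push key and the payload
this.itemsRef.snapshotChanges().subscribe(actions => {
  actions.forEach(action => {
    console.log(action.key);            // e.g. "-L7rtl2NesdOYVD4-bMs"
    console.log(action.payload.val());  // the object stored under that key
  });
});
```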
Upvotes: 1 <issue_comment>username_3: To get the first key in the object:
```
Object.keys(val)[0]; //returns 'first key'
```
<https://stackoverflow.com/a/11509718/7458082>
To iterate through the object
```
Object.keys(obj).forEach(function(key,index) {
// key: the name of the object key
// index: the ordinal position of the key within the object
});
```
<https://stackoverflow.com/a/11509718/7458082>
Upvotes: 0 |
2018/03/20 | 465 | 1,373 | <issue_start>username_0: Is there a way to get the vSphere build version using any API/SDK/REST?
I know it's possible using PowerShell on vCenter, but it'd be great if there was another option.
Like described here: <https://www.virtuallyghetto.com/2017/08/powercli-script-to-help-correlate-vcenter-esxi-vsan-buildversions-wo-manual-vmware-kb-lookup.html>
2018/03/20 | 1,050 | 3,816 | <issue_start>username_0: [](https://i.stack.imgur.com/MAH4S.png)
I am a beginner and I want to make a top bar button that has a badge like the picture above. After searching on the internet, I found I can put the badge on the button by implementing the `SSBadgeButton` class like the code below:
```
import UIKit
class SSBadgeButton: UIButton {
var badgeLabel = UILabel()
var badge: String? {
didSet {
addBadgeToButon(badge: badge)
}
}
public var badgeBackgroundColor = UIColor.red {
didSet {
badgeLabel.backgroundColor = badgeBackgroundColor
}
}
public var badgeTextColor = UIColor.white {
didSet {
badgeLabel.textColor = badgeTextColor
}
}
public var badgeFont = UIFont.systemFont(ofSize: 12.0) {
didSet {
badgeLabel.font = badgeFont
}
}
public var badgeEdgeInsets: UIEdgeInsets? {
didSet {
addBadgeToButon(badge: badge)
}
}
override init(frame: CGRect) {
super.init(frame: frame)
addBadgeToButon(badge: nil)
}
func addBadgeToButon(badge: String?) {
badgeLabel.text = badge
badgeLabel.textColor = badgeTextColor
badgeLabel.backgroundColor = badgeBackgroundColor
badgeLabel.font = badgeFont
badgeLabel.sizeToFit()
badgeLabel.textAlignment = .center
let badgeSize = badgeLabel.frame.size
let height = max(18, Double(badgeSize.height) + 5.0)
let width = max(height, Double(badgeSize.width) + 10.0)
var vertical: Double?, horizontal: Double?
if let badgeInset = self.badgeEdgeInsets {
vertical = Double(badgeInset.top) - Double(badgeInset.bottom)
horizontal = Double(badgeInset.left) - Double(badgeInset.right)
let x = (Double(bounds.size.width) - 10 + horizontal!)
let y = -(Double(badgeSize.height) / 2) - 10 + vertical!
badgeLabel.frame = CGRect(x: x, y: y, width: width, height: height)
} else {
let x = self.frame.width - CGFloat((width / 2.0))
let y = CGFloat(-(height / 2.0))
badgeLabel.frame = CGRect(x: x, y: y, width: CGFloat(width), height: CGFloat(height))
}
badgeLabel.layer.cornerRadius = badgeLabel.frame.height/2
badgeLabel.layer.masksToBounds = true
addSubview(badgeLabel)
badgeLabel.isHidden = badge != nil ? false : true
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
self.addBadgeToButon(badge: nil)
fatalError("init(coder:) has not been implemented")
}
}
```
As we can see, `SSBadgeButton` is a `UIButton`, and I need to convert that `SSBadgeButton` to a `UIBarButtonItem`. The purpose of this is to make the `UIBarButtonItem` class accessible in Interface Builder as the custom class, like the picture below:
[](https://i.stack.imgur.com/kBJi9.png)<issue_comment>username_1: you can create `UIBarButtonItem` with custom button
```
let button = SSBadgeButton(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
let barBtnItem = UIBarButtonItem(customView: button)
```
Upvotes: 1 <issue_comment>username_2: You don't need to convert the `UIButton` to `UIBarButtonItem`; you can always create a `UIBarButtonItem` using a `UIButton`, as shown below:
```
let button = UIButton()
button.setTitle("ABCD", for: .normal)
let uiBarButtonItem = UIBarButtonItem(customView: button)
self.navigationItem.leftBarButtonItems = [uiBarButtonItem]
```
Instead of `UIButton` you will use your `SSBadgeButton`, that's all; a quick usage sketch follows below.
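A sketch of the full wiring (using the `SSBadgeButton` class from the question; the image asset name and the `cartTapped` action are placeholders):
```
let badgeButton = SSBadgeButton(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
badgeButton.setImage(UIImage(named: "cart"), for: .normal) // "cart" is a placeholder asset
badgeButton.badge = "5"                                    // shows the red badge
// cartTapped is an assumed @objc action method on the view controller
badgeButton.addTarget(self, action: #selector(cartTapped), for: .touchUpInside)
navigationItem.rightBarButtonItem = UIBarButtonItem(customView: badgeButton)
```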
Hope it helps
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,404 | 6,049 | <issue_start>username_0: We are developing a .Net Core 2.1 Web API with JWT Bearer authentication. The application itself will generate and hand out tokens which are to be send to the backend.
While we have everything up and running, i.e. we can send the bearer token from Angular and test it with Postman, Swagger won't send the Bearer token.
We have added the Swagger configuration to use a SecurityDefinition as followed, I will post the complete ConfigureServices method:
```
public void ConfigureServices(IServiceCollection services)
{
// Add framework services.
services.AddMvc();
services.AddCors(options =>
{
options.AddPolicy("AllowAllOrigins",
policy => policy.WithOrigins("*").AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());
});
services.Configure<MvcOptions>(options =>
{
options.Filters.Add(new CorsAuthorizationFilterFactory("AllowAllOrigins"));
});
ServiceInstaller.Install(services, Configuration);
// api user claim policy
services.AddAuthorization(options =>
{
var authorizationPolicy = new AuthorizationPolicyBuilder()
.AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
.RequireAuthenticatedUser().Build();
options.AddPolicy("Bearer", authorizationPolicy);
});
// add identity
var builder = services.AddIdentityCore(o =>
{
// configure identity options
o.Password.RequireDigit = false;
o.Password.RequireLowercase = false;
o.Password.RequireUppercase = false;
o.Password.RequireNonAlphanumeric = false;
o.Password.RequiredLength = 6;
});
builder = new IdentityBuilder(builder.UserType, typeof(IdentityRole), builder.Services);
builder.AddEntityFrameworkStores().AddDefaultTokenProviders();
var keyByteArray = Encoding.ASCII.GetBytes("placekeyhere");
var signingKey = new Microsoft.IdentityModel.Tokens.SymmetricSecurityKey(keyByteArray);
services.AddAuthentication(options => { options.DefaultScheme = JwtBearerDefaults.AuthenticationScheme; }).AddJwtBearer(
options =>
{
options.TokenValidationParameters = new TokenValidationParameters()
{
IssuerSigningKey = signingKey,
ValidAudience = "Audience",
ValidIssuer = "Issuer",
ValidateIssuerSigningKey = true,
ValidateLifetime = true,
ClockSkew = TimeSpan.FromMinutes(0)
};
});
// Configure JwtIssuerOptions
services.Configure<JwtIssuerOptions>(options =>
{
options.Issuer = "Issuer";
options.Audience = "Audience";
options.SigningCredentials = new SigningCredentials(signingKey, SecurityAlgorithms.HmacSha256);
});
// Register the Swagger generator, defining one or more Swagger documents
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new Info { Title = "AppName", Version = "v1" });
c.OperationFilter();
c.AddSecurityDefinition("Authorization", new ApiKeyScheme
{
Description =
"JWT Authorization header using the Bearer scheme. Example: \"Authorization: Bearer {token}\"",
Name = "Authorization",
In = "header",
Type = "apiKey",
});
});
}
```
This does add the Authenticate option to the top of the screen. In the configure method we tell the application to actually use the authentication:
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseAuthentication();
if (env.IsDevelopment())
{
// Enable middleware to serve generated Swagger as a JSON endpoint.
app.UseCors();
app.UseSwagger();
// Enable middleware to serve swagger-ui (HTML, JS, CSS, etc.), specifying the Swagger JSON endpoint.
app.UseSwaggerUI(c => { c.SwaggerEndpoint("/swagger/v1/swagger.json", "AppName"); });
}
app.UseMvc();
}
```
However when we authenticate ourselves with a token, the curl for the function does not show the Bearer token. It looks like Swagger does not send the token to the backend.
We use .Net Core 2.1 and Swagger 2.3. Any help would be appreciated, thank you.<issue_comment>username_1: **Update - The Swagger spec has changed. Check the answer by @nilay below for the correct solution.**
I had the very same problem.
Two things are necessary:
1. You have to put `"bearer {token}"` like this: the word *bearer*, then a space, then the token.
Putting only the token will not work.
2.
to get this to work in swagger 2.x, you need to accompany your scheme definition with a corresponding requirement to indicate that the scheme is applicable to all operations in your API:
```
c.AddSecurityRequirement(new Dictionary<string, IEnumerable<string>>
{
{ "Bearer", new string[] { } }
});
```
Complete definition:
```
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new Info { Title = "Some API", Version = "v1" });
c.AddSecurityDefinition("Bearer", new ApiKeyScheme()
{
Description = "JWT Authorization header using the Bearer scheme. Example: \"Authorization: Bearer {token}\"",
Name = "Authorization",
In = "header",
Type = "apiKey"
});
c.AddSecurityRequirement(new Dictionary<string, IEnumerable<string>>
{
{ "Bearer", new string[] { } }
});
});
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: I also faced the same issue, but I am using a newer version of Swagger which is based on OpenAPI. So I had to use the snippet below instead.
```
var securityScheme = new OpenApiSecurityScheme()
{
Description = "JWT Authorization header using the Bearer scheme. Example: \"Authorization: Bearer {token}\"",
Name = "Authorization",
In = ParameterLocation.Header,
Type = SecuritySchemeType.Http,
Scheme = "bearer",
BearerFormat = "JWT"
};
var securityRequirement = new OpenApiSecurityRequirement
{
{
new OpenApiSecurityScheme
{
Reference = new OpenApiReference
{
Type = ReferenceType.SecurityScheme,
Id = "bearerAuth"
}
},
new string[] {}
}
};
options.AddSecurityDefinition("bearerAuth", securityScheme);
options.AddSecurityRequirement(securityRequirement);
```
Upvotes: 3 |
2018/03/20 | 1,540 | 5,217 | <issue_start>username_0: I get an error when retrieving JSON data with Retrofit2 in Android Studio. Previously I successfully retrieved JSON into a single response model, but now I need to retrieve JSON data from more than one table in the database.
Error
```
java.lang.IllegalStateException: Expected a string but was BEGIN_ARRAY at line 1 column 20 path $.pesan
No adapter attached; skipping layout
```
My ResponseModel.class
```
public class ResponseModel {
String kode, pesan;
List<QuestionModel> result_question; // but the problem comes when I try to retrieve another JSON from another table (2)
List result; // I was successful with this (1)
public String getKode() {
return kode;
}
public void setKode(String kode) {
this.kode = kode;
}
public String getPesan() {
return pesan;
}
public void setPesan(String pesan) {
this.pesan = pesan;
}
public List<QuestionModel> getResult_question() {
return result_question;
}
public void setResult_question(List<QuestionModel> result_question) {
this.result_question = result_question;
}
public List getResult() {
return result;
}
public void setResult(List result) {
this.result = result;
}
}
```
My QuestionModel.class
```
public class QuestionModel {
String id_question, id_user, judul, waktu, tanggal, jml_like, aktif;
public String getId_question() {
return id_question;
}
public void setId_question(String id_question) {
this.id_question = id_question;
}
public String getId_user() {
return id_user;
}
public void setId_user(String id_user) {
this.id_user = id_user;
}
public String getJudul() {
return judul;
}
public void setJudul(String judul) {
this.judul = judul;
}
public String getWaktu() {
return waktu;
}
public void setWaktu(String waktu) {
this.waktu = waktu;
}
public String getTanggal() {
return tanggal;
}
public void setTanggal(String tanggal) {
this.tanggal = tanggal;
}
public String getJml_like() {
return jml_like;
}
public void setJml_like(String jml_like) {
this.jml_like = jml_like;
}
public String getAktif() {
return aktif;
}
public void setAktif(String aktif) {
this.aktif = aktif;
}
}
```
My ApiRequest
```
@GET(url_question_list)
Call<ResponseModel> getQuestionData();
```
Enqueu
```
getData.enqueue(new Callback<ResponseModel>() {
@Override
public void onResponse(Call<ResponseModel> call, Response<ResponseModel> response) {
pd.dismiss();
Log.d(TAG,"onResponse: "+ response.body().getKode());
mList = response.body().getResult_question();
mAdapter = new AdapterQuestion(MainActivity.this, mList);
mRecyclerView.setAdapter(mAdapter);
mAdapter.notifyDataSetChanged();
}
@Override
public void onFailure(Call<ResponseModel> call, Throwable t) {
pd.dismiss();
Log.e(TAG, "onFailure: "+t.getMessage());
}
});
```
JSON Format
```
{
"kode": 1,
"pesan": [
{
"id_question": "2",
"id_user": "8",
"judul": "Title1",
"waktu": "11:22:10",
"tanggal": "20-06-2018",
"jml_like": "0",
"aktif": "Y"
},
{
"id_question": "1",
"id_user": "9",
"judul": "Title2",
"waktu": "11:22:20",
"tanggal": "19-02-2012",
"jml_like": "1",
"aktif": "Y"
}
]
}
```
Thanks for your help.
UPDATE: SOLVED
`pesan` should be declared as `List<QuestionModel>` (the JSON value under the key `pesan` is an array, not a string).
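A sketch of the corrected response model (the field name now matches the JSON key `pesan`; `QuestionModel` is the element type from the question):
```
public class ResponseModel {
    String kode;
    List<QuestionModel> pesan; // was String; the JSON value under "pesan" is an array

    public String getKode() {
        return kode;
    }

    public List<QuestionModel> getPesan() {
        return pesan;
    }

    public void setPesan(List<QuestionModel> pesan) {
        this.pesan = pesan;
    }
}
```
The callback then reads `response.body().getPesan()` instead of `getResult_question()`.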
2018/03/20 | 919 | 3,142 | <issue_start>username_0: ```
<tr id="paidrow" class="payment_paid">
  <td>
    <% if !f.paid %>
      <%= check_box_tag "pin_number[]", f.id, checked = !f.paid? %>
    <% end %>
  </td>
  <td><%= f.year %></td>
  <td><%= f.quarter %></td>
  <td class="price"><%= number_to_currency(f.amount, unit: "", precision: 2) %></td>
</tr>
mergecommonrows();
setTimeout(addpaidrows,1000);
function addpaidrows(){
$(document).ready(function(){
var table = $("#property_dtl_table_body");
var rows = table.find("tr#paidrow.payment_paid");
var first = parseFloat(rows.find('.price').text());
sum = first;
var startIndex = 0;
var lastIndex = 0;
var startText = rows.find('.price').text();
var colsLength = 4;
var removeLater = new Array();
        for(var i=3; i<colsLength; i++){
            for(var j=1; j<rows.length; j++){
                var currentText = $($(rows[j]).find('.price')[i]).text();
                var currentNumber = parseFloat(currentText);
                if(currentText == startText){
                    removeLater.push($(rows[j]).find('.price')[i]);
                    lastIndex = j;
                    var spanLength = lastIndex-startIndex;
                    if(spanLength>=1){
                        console.log(lastIndex+" - "+startIndex)
                        //console.log($($(rows[startIndex]).find('.price')))
                        $($(rows[startIndex]).find('.price')[i]).attr("rowspan",spanLength+1);
                    }
                } else {
                    lastIndex = j;
                    startIndex = j;
                    startText = currentText;
                    var spanLength = lastIndex-startIndex;
                    if(spanLength>=1){
                        console.log(lastIndex+" - "+startIndex)
                        //console.log($($(rows[startIndex]).find("td")[i]))
                        $($(rows[startIndex]).find('.price')[i]).attr("rowspan",spanLength+1);
                    }
                }
                console.log("---");
                sum = sum + currentNumber;
            }
        }
console.log(sum);
var frist = rows.find('.price').text();
for(var i in removeLater){
$(removeLater[i]).remove();
}
$('tr#paidrow.payment_paid td.price').text(sum.toFixed(2))
});
}
```
I'm making a program where after the Javascript file does its thing, the text in the upper right hand corner of the table will be replaced with the 'sum.toFixed(2)' variable (in this case, 67.97). It works, but the problem is that the .text() method repeats itself, causing the result to become NaN, since the other numbers required to make the sum.toFixed(2) are erased after it has been added to the sum. And based on the console below, it seems that that is the cause of the problem.
[](https://i.stack.imgur.com/xbk2C.png)
[](https://i.stack.imgur.com/MDTBg.png)<issue_comment>username_1: Unfortunately, `text` is slightly unusual as compared to other jQuery methods. From [the documentation](http://api.jquery.com/text/):
>
> Get the combined text contents of each element in the set of matched elements, including their descendants, or set the text contents of the matched elements.
>
>
>
Almost every other accessor function in jQuery accesses only the **first** of the matched elements, but `text` combines all of their text.
If you want the text of only the **first** matched item, you have to do that on purpose:
```
var first = parseFloat(rows.find('.price').first().text());
// ---------------------------------------^^^^^^^^
```
Upvotes: 1 <issue_comment>username_2: I solved the problem. I added an if-else statement that checks if the sum is NaN (not a number) or not. If it is NaN, no output will be shown. But if it isn't, the output will show.
```
if (sum != sum){
}
else {
rows.find('.price').first().text(sum.toFixed(2));
}
```
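For readability, the same guard can be written with `Number.isNaN` (`sum != sum` works only because NaN is the one value not equal to itself):
```
// Equivalent, more explicit NaN guard
if (!Number.isNaN(sum)) {
  rows.find('.price').first().text(sum.toFixed(2));
}
```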
Upvotes: 0 |
2018/03/20 | 2,520 | 8,510 | <issue_start>username_0: Here is output.json: <https://1drv.ms/u/s!AizscpxS0QM4hJo5SnYOHAcjng-jww>
I have issues in the `sts:AssumeRole` `.Principal.Service` part when there are multiple services:
```
Principal": {
"Service": [
"ssm.amazonaws.com",
"ec2.amazonaws.com"
]
}
```
In my code below, it's the `.Principal.Service` field.
If there is only one service, there are no issues:
```
"InstanceProfileList": [
{
"InstanceProfileId": "AIPAJMMLWIVZ2IXTOC3RO",
"Roles": [
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"AWS": "*"
}
}
]
},
"RoleId": "AROAJPHJ4EDQG3G5ZQZT2",
"CreateDate": "2017-04-04T23:46:47Z",
"RoleName": "dev-instance-role",
"Path": "/",
"Arn": "arn:aws:iam::279052847476:role/dev-instance-role"
}
],
"CreateDate": "2017-04-04T23:46:47Z",
"InstanceProfileName": "bastionServerInstanceProfile",
"Path": "/",
"Arn": "arn:aws:iam::279052847476:instance-profile/bastionServerInstanceProfile"
}
],
"RoleName": "dev-instance-role",
"Path": "/",
"AttachedManagedPolicies": [
{
"PolicyName": "dev-instance-role-policy",
"PolicyArn": "arn:aws:iam::279052847476:policy/dev-instance-role-policy"
}
],
"RolePolicyList": [],
"Arn": "arn:aws:iam::279052847476:role/dev-instance-role"
},
{
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": [
"ssm.amazonaws.com",
"ec2.amazonaws.com"
]
}
}
]
},
```
If only one service exists there are no issues, but if there is more than one I get the error `string ("") and array (["ssm.amazonaws.com) cannot be added`.
How can I get all values for Principal.Service in one row?
My code:
```
jq -rc '.RoleDetailList
| map(select((.AssumeRolePolicyDocument.Statement | length > 0) and
(.AssumeRolePolicyDocument.Statement[].Principal.Service) or
(.AssumeRolePolicyDocument.Statement[].Principal.AWS) or
(.AssumeRolePolicyDocument.Statement[].Principal.Federated) or
(.AttachedManagedPolicies | length >0) or
(.RolePolicyList | length > 0)) )[]
| [.RoleName,
([.RolePolicyList[].PolicyName,
([.AttachedManagedPolicies[].PolicyName] | join("--"))]
| join(" ")),
(.AssumeRolePolicyDocument.Statement[]
| .Principal.Federated + "" + .Principal.Service + ""+.Principal.AWS)]
| @csv' ./output.json
```
Desired output:
```
"dev-instance-role","dev-instance-role-policy","ssm.amazonaws.com--ec2.amazonaws.com--*"
```
Current output:
```
"dev-instance-role","dev-instance-role-policy","*"
```<issue_comment>username_1: It appears that .Principal.Service is either a string or an array of strings, so you need to handle both cases. Consider therefore:
```
def to_s: if type == "string" then . else join("--") end;
```
You might want to make this more generic to make it more robust or for other reasons.
You might also want to streamline your jq filter to make it more intelligible and maintainable, e.g. by using jq variables. Note also that
```
.x.a + .x.b + .x.c
```
can be written as:
```
.x | (.a + .b + .c)
```
Upvotes: 1 <issue_comment>username_2: Consider adding an additional condition that checks whether `.Principal.Service` is of type `array` or `string`:
```
jq -rc '.RoleDetailList
| map(select((.AssumeRolePolicyDocument.Statement | length > 0) and
(.AssumeRolePolicyDocument.Statement[].Principal.Service) or
(.AssumeRolePolicyDocument.Statement[].Principal.AWS) or
(.AssumeRolePolicyDocument.Statement[].Principal.Federated) or
(.AttachedManagedPolicies | length >0) or
(.RolePolicyList | length > 0)) )[]
| [.RoleName,
([.RolePolicyList[].PolicyName,
([.AttachedManagedPolicies[].PolicyName] | join("--"))]
| join(" ")),
(.AssumeRolePolicyDocument.Statement[]
| .Principal.Federated + ""
+ (.Principal.Service | if type == "array" then join("--") else . end)
+ "" + .Principal.AWS)]
| @csv' ./output.json
```
The output:
```
"ADFS-Administrators","Administrator-Access ","arn:aws:iam::279052847476:saml-provider/companyADFS"
"ADFS-amtest-ro","pol-amtest-ro","arn:aws:iam::279052847476:saml-provider/companyADFS"
"adfs-host-role","pol-amtest-ro","ec2.amazonaws.com"
"aws-elasticbeanstalk-ec2-role","AWSElasticBeanstalkWebTier--AWSElasticBeanstalkMulticontainerDocker--AWSElasticBeanstalkWorkerTier","ec2.amazonaws.com"
"aws-elasticbeanstalk-service-role","AWSElasticBeanstalkEnhancedHealth--AWSElasticBeanstalkService","elasticbeanstalk.amazonaws.com"
"AWSAccCorpAdmin","AdministratorAccess","arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpAdmin","AdministratorAccess","arn:aws:iam::279052847476:saml-provider/LastPass"
"AWScompanyCorpPowerUser","PowerUserAccess","arn:aws:iam::279052847476:saml-provider/LastPass"
"AWSServiceRoleForAutoScaling","AutoScalingServiceRolePolicy","autoscaling.amazonaws.com"
"AWSServiceRoleForElasticBeanstalk","AWSElasticBeanstalkServiceRolePolicy","elasticbeanstalk.amazonaws.com"
"AWSServiceRoleForElasticLoadBalancing","AWSElasticLoadBalancingServiceRolePolicy","elasticloadbalancing.amazonaws.com"
"AWSServiceRoleForOrganizations","AWSOrganizationsServiceTrustPolicy","organizations.amazonaws.com"
"AWSServiceRoleForRDS","AmazonRDSServiceRolePolicy","rds.amazonaws.com"
"Cloudyn","ReadOnlyAccess","arn:aws:iam::432263259397:root"
"DatadogAWSIntegrationRole","DatadogAWSIntegrationPolicy","arn:aws:iam::464622532012:root"
"datadog_alert_metrics_role","AWSLambdaBasicExecutionRole-66abe1f2-cee8-4a90-a026-061b24db1b02","lambda.amazonaws.com"
"dev-instance-role","dev-instance-role-policy","*"
"ec2ssmRole","AmazonEC2RoleforSSM","ssm.amazonaws.com--ec2.amazonaws.com"
"ecsInstanceRole","AmazonEC2ContainerServiceforEC2Role","ec2.amazonaws.com"
"ecsServiceRole","AmazonEC2ContainerServiceRole","ecs.amazonaws.com"
"flowlogsRole","oneClick_flowlogsRole_1495032428381 ","vpc-flow-logs.amazonaws.com"
"companyDevShutdownEC2Instaces","oneClick_lambda_basic_execution_1516271285849 ","lambda.amazonaws.com"
"companySAMLUser","AdministratorAccess","arn:aws:iam::279052847476:saml-provider/companyAzureAD"
"irole-matlabscheduler","pol-marketdata-rw","ec2.amazonaws.com"
"jira_role","","*"
"lambda-ec2-ami-role","lambda-ec2-ami-policy","lambda.amazonaws.com"
"lambda_api_gateway_twilio_processor","AWSLambdaBasicExecutionRole-f47a6b57-b716-4740-b2c6-a02fa6480153--AWSLambdaSNSPublishPolicyExecutionRole-d31a9f16-80e7-47c9-868a-f162396cccf6","lambda.amazonaws.com"
"lambda_stop_rundeck_instance","oneClick_lambda_basic_execution_1519651160794 ","lambda.amazonaws.com"
"OneLoginAdmin","AdministratorAccess","arn:aws:iam::279052847476:saml-provider/OneLoginAdmin"
"OneLoginDev","PowerUserAccess","arn:aws:iam::279052847476:saml-provider/OneLoginDev"
"rds-host-role","","ec2.amazonaws.com"
"rds-monitoring-role","AmazonRDSEnhancedMonitoringRole","monitoring.rds.amazonaws.com"
"role-amtest-ro","pol-amtest-ro","ec2.amazonaws.com"
"role-amtest-rw","pol-amtest-rw","ec2.amazonaws.com"
"Stackdriver","ReadOnlyAccess","arn:aws:iam::314658760392:root"
"vmimport","vmimport ","vmie.amazonaws.com"
"workspaces_DefaultRole","SkyLightServiceAccess ","workspaces.amazonaws.com"
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,055 | 3,896 | <issue_start>username_0: I'm trying to call a MySQL stored procedure using Hibernate (Java).
When I run the following Java method to execute the procedure, my table gets locked and no response is returned, although the procedure succeeds when I run it as a MySQL command.
Procedure :
```
CREATE DEFINER = `root`@`%` PROCEDURE `getReceiptNumber`(IN bankcode VARCHAR(5),IN receipttype VARCHAR(20),IN curyear INT(4), IN curday INT(3),OUT seq VARCHAR(5))
BEGIN
IF EXISTS (SELECT 1 FROM receipt_number WHERE bank_code = bankcode AND receipt_type = receipttype) THEN
IF NOT EXISTS(SELECT * FROM receipt_number WHERE bank_code = bankcode AND receipt_type = receipttype AND cur_year = curyear) THEN
UPDATE receipt_number SET cur_year = curyear, seq_number = 0 WHERE bank_code = bankcode AND receipt_type = receipttype;
END IF;
IF NOT EXISTS(SELECT 1 FROM receipt_number WHERE bank_code = bankcode AND receipt_type = receipttype AND cur_year = curyear AND cur_date = curday) THEN
UPDATE receipt_number SET cur_date = curday, seq_number = 0 WHERE bank_code = bankcode AND receipt_type = receipttype;
END IF;
ELSE
INSERT INTO receipt_number VALUES (bankcode,curday,curyear,receipttype,0);
END IF;
UPDATE receipt_number SET seq_number = (seq_number + 1) WHERE bank_code = bankcode AND receipt_type = receipttype;
SELECT LPAD(seq_number,5,'0') FROM receipt_number WHERE bank_code = bankcode AND receipt_type = receipttype INTO seq;
END;
```
MySQL Request :
```
call getReceiptNumber("ABCD", "REC-PGM-BATCH",2018,42, @seq);
select @seq;
```
Java Method :
```
private String getBatchNumber(String bankCode, String receipttype, int year, int julianDay) {
Session session = HibernateUtil.getSessionFactory().openSession();
String sequeceNumber = "";
try {
ProcedureCall call = session.createStoredProcedureCall("getReceiptNumber");
call.registerParameter("bankcode", String.class, ParameterMode.IN).bindValue(bankCode);
call.registerParameter("receipttype", String.class, ParameterMode.IN).bindValue(receipttype);
call.registerParameter("curyear", Integer.class, ParameterMode.IN).bindValue(year);
call.registerParameter("curday", Integer.class, ParameterMode.IN).bindValue(julianDay);
call.registerParameter("seq", String.class, ParameterMode.OUT);
ProcedureOutputs out = call.getOutputs();
sequeceNumber = (String) out.getOutputParameterValue("seq");
} catch (Exception e) {
e.printStackTrace();
} finally {
session.flush();
session.close();
}
return sequeceNumber;
}
```<issue_comment>username_1: ```
Connection conn = getSession().connection();
CallableStatement stat = conn.prepareCall("{CALL insertComm (?,?)}");
stat.setString(1, remitNo); // Assuming both parameters are String
stat.setString(2, opt);
stat.executeUpdate();
stat.close();
```
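For the procedure in the question, which has an OUT parameter, a sketch of registering and reading it over plain JDBC (parameter values are taken from the question's example call):
```
CallableStatement stmt = conn.prepareCall("{CALL getReceiptNumber(?,?,?,?,?)}");
stmt.setString(1, "ABCD");
stmt.setString(2, "REC-PGM-BATCH");
stmt.setInt(3, 2018);
stmt.setInt(4, 42);
stmt.registerOutParameter(5, java.sql.Types.VARCHAR); // OUT parameter "seq"
stmt.executeUpdate();
String seq = stmt.getString(5);
stmt.close();
```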
Similar to [this post](https://stackoverflow.com/questions/14351589/how-to-call-a-mysql-stored-procedure-from-hibernate).
I think it's helpful.
Upvotes: 1 <issue_comment>username_2: ```
@NamedStoredProcedureQuery(
name = "calculate",
procedureName = "calculate",
parameters = {
@StoredProcedureParameter(mode = ParameterMode.IN, type = Double.class, name = "x"),
@StoredProcedureParameter(mode = ParameterMode.IN, type = Double.class, name = "y"),
@StoredProcedureParameter(mode = ParameterMode.OUT, type = Double.class, name = "sum")
}
)
```
**Use it in your program:**
```
StoredProcedureQuery query = this.em.createNamedStoredProcedureQuery("calculate");
query.setParameter("x", 1.23d);
query.setParameter("y", 4.56d);
query.execute();
Double sum = (Double) query.getOutputParameterValue("sum");
```
[If this does not solve it, please refer to this link](https://www.thoughts-on-java.org/hibernate-tips-call-stored-procedure/)
Upvotes: 0 |
2018/03/20 | 471 | 1,786 | <issue_start>username_0: When I run the following query in **MySQL 5.5**, I can see the table has a default value `''` for the primary key even though I've not specified one:
```
CREATE TABLE `test1` (
`id` VARCHAR(10),
`data` VARCHAR(50),
PRIMARY KEY (`id`)
);
```
When I run the following query in **MySQL 5.5**, I can see the table has a default value `'0'` for the primary key even though I've not specified one:
```
CREATE TABLE `test2` (
`id` int(10),
`data` VARCHAR(50),
PRIMARY KEY (`id`)
);
```
This doesn't happen in **MariaDB 10.2**.
I used to compare database structures using the values in information_schema, but now it seems impossible to compare the default values of columns between the 5.5 and 10.2 databases. Does anybody know why this default is automatically added, and is there a solution?<issue_comment>username_1: If a column definition includes no explicit DEFAULT value, MySQL determines the default value implicitly.
Upvotes: 0 <issue_comment>username_2: Since a primary key can't be NULL and can't be duplicated, MySQL assigns a default value for the first record if you don't specify a value at insert time.
You can set this primary key as auto-increment, and even then you can still insert explicit values if you want.
Upvotes: 0 <issue_comment>username_3: According to [MariaDB documentation](https://mariadb.com/kb/en/library/primary-keys-with-nullable-columns/) `MariaDB` before the version `10.1.7` and `MySQL` before the version `5.7` convert such columns into a NOT NULL column with a default value of 0. But then both DBs changed their behaviour and currently the column is converted to NOT NULL, but without a default value.
Thus, you can change one of your DB versions to achieve the same behaviour.
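For comparing the stored defaults directly, a sketch of the information_schema query (table and column names from the question):
```
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_DEFAULT, IS_NULLABLE
FROM information_schema.COLUMNS
WHERE TABLE_NAME IN ('test1', 'test2')
  AND COLUMN_NAME = 'id';
```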
Upvotes: 2 |
2018/03/20 | 1,765 | 6,566 | <issue_start>username_0: there is a table containing a column of type `SDO_GEOMETRY` representing some points. Later a spatial query like 'show all other points within distance x' shall be executed.
After pumping some data via `sqlloader` into an import table, I bring these datasets into a base table, merging when the business key already exists.
My application is a executable jar using `hibernate 5.0.9`, `hibernate-spatial 5.0.9 final`, Oracle12c database and the hibernate.dialect has been
```
org.hibernate.dialect.Oracle12cDialect
```
before but now changed to
```
org.hibernate.spatial.dialect.oracle.OracleSpatial10gDialect
```
I know that there is officially a spatial dialect only for Oracle 10, which has also been tested with Oracle 11, but not with Oracle 12.
The problem now is that the objects are stored via Hibernate into the base table, but inside the SDO_GEOMETRY all coordinates are null (while the sdo_geometry object itself isn't):
```
select
businessKey,
my_sdo_geometry.sdo_point.x longitude,
my_sdo_geometry.sdo_point.y latitude
from my_import_table;
```
[](https://i.stack.imgur.com/6ZmH4.png)
After sqlloader, all coordinates in import-table are shown well.
[](https://i.stack.imgur.com/nWTb3.png)
But after processing, each import dataset's column 'imported' {Y, N} is set to 'Y'. In doing so I am using `myEntityManager.merge(importDataset)`. Due to this (although all coordinates had been fine after sqlloader), the coordinates of the import dataset are cleared to null.
```
//coordinates still viewable in database
importObject.setImported(Boolean.TRUE);
em.merge(importObject);
//coordinates null now
```
[](https://i.stack.imgur.com/R4i71.png)
I guess that the spatial dialect is not working properly.
I am afraid that this happens due to some incompatibility between last version of hibernate-spatial and oracle12c.
I am writing here with hope that there is anybody using hibernate-spatial successfully with oracle12 (to clarify that its possible at all) and to may get some help with any bad configuration on my side.
The persistance.xml of my application:
```
org.hibernate.ejb.HibernatePersistence
```
Thanks in advance.
edit:
in the entity class the sdo\_geometry column (here called 'Position') getter is annotated as follows:
```
@Column(name = "POSITION", nullable = false)
public Point getPosition() {
return this.position;
}
```
edit 2:
after enabling hibernate sql trace, I found that hibernate is trying this update:
```
Hibernate: update IMPORT_TABLE set HB_VERSION=?, BUSINESS_KEY=?, IMPORTED=?, [...], POSITION=? where IMPORT_TABLE_ID=? and HB_VERSION=?
```
I am pointing at `POSITION=?`, which I believe cannot be right.
At this position I would expect the Hibernate spatial dialect to take effect.
This may be another hint of an incompatibility between the spatial 10g dialect and Oracle 12.<issue_comment>username_1: I found a workaround which does not answer my question as intended, but it is a workaround for now.
Instead of using Hibernate objects and the merge operator, it works when using a custom query like this:
```
Query q = em.createQuery("INSERT INTO base_table"
+ "(techId, myBuildingFK, [...], my_sdo_geometry) "
+ "SELECT l.impTechId, b, [...], l.my_sdo_geometry"
+ "FROM import_table l, Buildiung b "
+ "WHERE l.businessKey = g.businessKey ");
q.executeUpdate();
```
With this HQL query the coordinates are not emptied.
Since it is still an HQL query, some of the spatial dialect may still be used, but it seems it is not working properly with the entity manager's merge command. Maybe I am still missing some annotation on the entity to get this working, or maybe it is incompatible with Oracle 12.
I still hope for other, and maybe better, answers than mine.
edit:
Because I need merge behaviour, I thought of just merging the object via the entity manager and then doing an SQL or HQL update query to store the sdo_geometry, but since the transaction has not finished there is no dataset to be updated.
```
// //not working
// em.merge(mergeObject);
// String updateQueryString =
// "update BaseTable set sdo_geometry = :sdoGeometry" + " where baseTableId = :baseTableId";
// Query updateQuery = em.createQuery(updateQueryString);
// updateQuery.setParameter("sdoGeometry", mergeObject.getSdoGeometry());
// updateQuery.setParameter("baseTableId", mergeObject.getBaseTableId());
// int cnt = updateQuery.executeUpdate();
```
Due to this I am now doing a native Oracle MERGE statement, which does work as expected but bypasses all Hibernate behaviour and is, I think, bad architecture.
```
String mergeQueryString = "MERGE INTO Base_Table gl "
+ "USING (SELECT * FROM Import_Table WHERE import_table_id = :importTableId) igl "
+ "ON (gl.base_table_id = :baseTableId) "
+ "WHEN MATCHED THEN UPDATE SET gl.my_sdo_geometry = igl.my_sdo_geometry "
+ "WHEN NOT MATCHED THEN INSERT (gl.baseTableId, gl.anyFkId, gl.my_sdo_geometry) "
+ "VALUES (id_gen_function, :anyFkId, igl.my_sdo_geometry)";
Query mergeQuery = em.createNativeQuery(mergeQueryString);
mergeQuery.setParameter("importTableId", importObject.getImportTableId());
mergeQuery.setParameter("baseTableId", refModelObject.getBaseTableId());
mergeQuery.setParameter("anyFkId", refModelObject.getAnyFkObject().getAnyFkId());
int cnt = mergeQuery.executeUpdate();
```
Upvotes: 0 <issue_comment>username_2: Hibernate Spatial by default stores the coordinates in the SDO\_Ordinates array, even for Points. The SDO\_Geometry.SDO\_Point field is not used, so will always be null. In SQL you can use the SDO\_UTIL.GetVertices() function to access the coordinate data.
In geolatte-geom version 1.1 or higher you should be able to set the GEOLATTE\_USE\_SDO\_POINT\_TYPE Java system property. With that property set, Point geometries will be stored in the SDO\_Geometry.SDO\_Point field. If your Hibernate version uses an older Geolatte-geom version, you may want to add a more recent version to your classpath so you can use this feature.
Btw. Hibernate Spatial works by encoding the Java Geometries to values that can be set in a prepared statement, so the "... Position=?" part in the generated SQL is fine.
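A sketch of enabling that (assuming geolatte-geom 1.1+ on the classpath; the exact value check is an assumption, and passing it as a JVM flag works the same way):
```
// Assumption: must be set before the first geometry is encoded,
// e.g. early in main(), or via -DGEOLATTE_USE_SDO_POINT_TYPE=true
System.setProperty("GEOLATTE_USE_SDO_POINT_TYPE", "true");
```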
Upvotes: 2 [selected_answer] |
2018/03/20 | 664 | 2,409 | <issue_start>username_0: I have created class models using the **HTTP networking library Alamofire**, containing the following data:
```
class MyData: NSObject {
var name = ""
var params = PublicData()
func initUserWithInfo(userInfo: [String : AnyObject]) {
if let name = userInfo["name"] as? String {
self.name = name
}
if let params = userInfo["params"] as? ParamData {
self.params = params
}
}
}
class PublicData: NSObject {
var city = ""
func initUserWithInfo(userInfo: [String : AnyObject]) {
if let city = userInfo["city"] as? String {
self.city = city
}
}
}
```
Now, when I am trying to check whether `params` is `nil` or not, it's giving the following warning message:
```
let data = MyData()
if data.params != nil {
}
```
>
>
> ```
> Comparing non-optional value of type 'params' to nil always returns true
>
> ```
>
>
or
```
if data.params {
}
```
>
> 'params' is not convertible to 'Bool'
>
>
>
or
```
if data.rams as Bool {
}
```
>
> Cannot convert value of type 'ImpressionObject' to type 'Bool' in
> coercion
>
>
>
How can I check whether the nested model class is `nil` or not?<issue_comment>username_1: You are getting this error because you initialize `params` with a non-optional value, so it can never be nil later on. If you want to make it optional, you should try the code below:
```
class MyData: NSObject {
var name = ""
/// making variable optional
var params: PublicData?
func initUserWithInfo(userInfo: [String : AnyObject]) {
if let name = userInfo["name"] as? String {
self.name = name
}
if let params = userInfo["params"] as? ParamData {
self.params = params
}
}
}
```
After that you should use it as below
```
let data = MyData()
if let parameters = data.params {
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Or you can write something even smaller:
```
class MyData: NSObject {
var name = ""
/// making variable optional
var params: PublicData?
func initUserWithInfo(userInfo: [String : AnyObject]) {
guard let name = userInfo["name"] as? String, let params = userInfo["params"] as? ParamData else { return }
self.name = name
self.params = params
}
}
```
Upvotes: 0 |
2018/03/20 | 442 | 1,675 | <issue_start>username_0: Using PHP and the DOMDocument class to parse HTML from the TinyMCE editor, I'm having issues inserting `<hr>` elements into the editor, because DOMDocument keeps losing the rest of the code.
```
# Input:
<hr>test input
$domDoc = new DOMDocument();
$domDoc->loadHTML($input, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);
var_dump($domDoc->saveHTML());
// Result:
<hr>
```
I can't find any reason for this, nor an option for `loadHTML()` to prevent this. What exactly happens, and can I use the `hr` element here?
2018/03/20 | 1,570 | 6,600 | <issue_start>username_0: I need to check if some value is null or not, and if it's not null then just set some variable to true. There is no else statement here. I have too many condition checks like this.
Is there any way to handle these null checks without checking all the method return values?
```
if(country != null && country.getCity() != null && country.getCity().getSchool() != null && country.getCity().getSchool().getStudent() != null .....) {
isValid = true;
}
```
I thought about directly checking the variable and ignoring the `NullPointerException`. Is this a good practice?
```
try{
if(country.getCity().getSchool().getStudent().getInfo().... != null)
} catch(NullPointerException ex){
//dont do anything.
}
```<issue_comment>username_1: No, it is generally not good practice in Java to catch a NPE instead of null-checking your references.
You can use [`Optional`](https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html) for this kind of thing if you prefer:
```
if (Optional.ofNullable(country)
.map(Country::getCity)
.map(City::getSchool)
.map(School::getStudent)
.isPresent()) {
isValid = true;
}
```
or simply
```
boolean isValid = Optional.ofNullable(country)
.map(Country::getCity)
.map(City::getSchool)
.map(School::getStudent)
.isPresent();
```
if that is all that `isValid` is supposed to be checking.
Upvotes: 7 [selected_answer]<issue_comment>username_2: You could use `Optional` here, but it creates one Optional object at each step.
```
boolean isValid = Optional.ofNullable(country)
.map(country -> country.getCity()) //Or use method reference Country::getCity
.map(city -> city.getSchool())
.map(school -> school.getStudent())
.map(student -> true)
.orElse(false);
//OR
boolean isValid = Optional.ofNullable(country)
.map(..)
....
.isPresent();
```
Upvotes: 4 <issue_comment>username_3: As an alternative to the other fine usages of `Optional`, we could also use a utility method with a `Supplier` var-arg parameter.
It makes sense as we don't have many nested levels in the object to check but many fields to check.
Besides, it may easily be modified to log/handle something as a `null` is detected.
```
boolean isValid = isValid(() -> address,              // first level
                          () -> address.getCity(),    // second level
                          () -> address.getCountry(), // second level
                          () -> address.getStreet(),  // second level
                          () -> address.getZip(),     // second level
                          () -> address.getCountry()  // third level
                                        .getISO());

@SafeVarargs
public static boolean isValid(Supplier<Object>... suppliers) {
    for (Supplier<Object> supplier : suppliers) {
if (Objects.isNull(supplier.get())) {
// log, handle specific thing if required
return false;
}
}
return true;
}
```
---
Suppose you would like to add some traces, you could so write :
```
boolean isValid = isValid( Arrays.asList("address", "city", "country",
"street", "zip", "Country ISO"),
() -> address, // first level
() -> address.getCity(), // second level
() -> address.getCountry(),// second level
() -> address.getStreet(), // second level
() -> address.getZip(), // second level
() -> address.getCountry() // third level
.getISO()
);
@SafeVarargs
public static boolean isValid(List fieldNames, Supplier... suppliers) {
if (fieldNames.size() != suppliers.length){
throw new IllegalArgumentException("...");
}
for (int i = 0; i < suppliers.length; i++) {
if (Objects.isNull(suppliers.get(i).get())) {
LOGGER.info( fieldNames.get(i) + " is null");
return false;
}
}
return true;
}
```
Upvotes: 3 <issue_comment>username_4: Java doesn't have "null-safe" operations, like, for example [Kotlin's null safety](https://kotlinlang.org/docs/reference/null-safety.html)
You can either:
* catch the NPE and ignore it
* check all the references manually
* use Optional as per the other answers
* use some sort of tooling like XLST
Otherwise if you have control over the domain objects, you can redesign your classes so that the information you need is available from the top level object (make `Country` class do all the null checking...)
Upvotes: 2 <issue_comment>username_5: The object-oriented approach is to put the isValid method in Country and the other classes. It does not reduce the amount of null checks, but each method only has one and you don't repeat them.
```
public boolean isValid() {
return city != null && city.isValid();
}
```
This has the assumption that validation is the same everywhere your Country is used, but typically that is the case. If not, the method should be named hasStudent(), but this is less general and you run the risk of duplicating the whole School interface in Country. For example, in another place you may need hasTeacher() or hasCourse().
Another approach is to use null objects:
```
public class Country {
public static final Country NO_COUNTRY = new Country();
private City city = City.NO_CITY;
// etc.
}
```
I'm not sure it is preferable in this case (strictly, you would need a subclass to override all modification methods); the Java 8 way would be to go with Optional as in the other answers, but I would suggest embracing it more fully:
```
private Optional<City> city = Optional.ofNullable(city);
public Optional<City> getCity() {
return city;
}
```
Both null objects and Optional only work if you always use them instead of null (notice the field initialization); otherwise you still need the null checks. So this option avoids null, but your code becomes more verbose in order to reduce null checks in other places.
Of course, the correct design may be to use collections where possible (instead of Optional): a Country has a set of Cities, a City has a set of Schools, which have sets of Students, etc.
Upvotes: 3 <issue_comment>username_6: You could also look at vavr's Option which as the below posts describes is better than Java's Optional and has a much richer API.
<https://softwaremill.com/do-we-have-better-option-here/>
Upvotes: 0 |
2018/03/20 | 582 | 1,885 | <issue_start>username_0: Here I want the div to be shown if the specified value is selected, and the div should be hidden if the specified value is not selected.
Here is my code:
```
<select id="result" multiple>
  <option value="1">1</option>
  <option value="2">2</option>
  <option value="3">3</option>
  <option value="4">4</option>
  <option value="5">5</option>
  <option value="q">Q</option>
</select>
```
Here is the HTML:
```
<div id="qaranty_count_full">
  Count: <input type="text" id="qaranty_count" name="qaranty_count">
</div>
```
Here is the script
```
$(document).ready(function(){
$('#qaranty_count_full').hide();
$("#result").change(function(){
if($('option:selected', this).val()=='q')
{
$('#qaranty_count_full').show();
$('#qaranty_count').prop('required', true);
}
else
{
$('#qaranty_count_full').hide();
$('#qaranty_count').prop('required', false);
}
});
});
```
In this case, if I choose `q` as the first value the `div` becomes visible, but if I choose `q` as the second or third value the `div` does not become visible. The div should remain visible whenever `q` is selected. I hope you understand my problem.<issue_comment>username_1: You can try this code.
```
$(document).ready(function(){
$("#result").change(function(){
if($.inArray("q", $(this).val()) !== -1){
$('#qaranty_count_full').show();
$('#qaranty_count').prop('required', true);
}else{
$('#qaranty_count_full').hide();
$('#qaranty_count').prop('required', false);
}
});
});
```
Here is a codepen I have created.
<https://codepen.io/smitraval27/pen/BrpPPN>
Upvotes: 3 [selected_answer]<issue_comment>username_2: You could do it like this, using `.each` to loop over the selected options.
Stack snippet
```html
<select id="result" multiple>
  <option value="1">1</option>
  <option value="2">2</option>
  <option value="3">3</option>
  <option value="4">4</option>
  <option value="5">5</option>
  <option value="q">Q</option>
</select>
<div id="qaranty_count_full">
  Count: <input type="text" id="qaranty_count" name="qaranty_count">
</div>
$(document).ready(function(){
$('#qaranty_count_full').hide();
$("#result").change(function(){
$('#qaranty_count_full').hide();
$('#qaranty_count').prop('required', false);
$('option:selected', this).each(function(i,sel){
if ($(sel).val() == 'q') {
$('#qaranty_count_full').show();
$('#qaranty_count').prop('required', true);
}
});
});
});
```
Upvotes: 1 |
2018/03/20 | 312 | 1,058 | <issue_start>username_0: I connect to a Linux server via `ssh` and want to save all output from the server tty (console) to a log file, for later reading and searching.
For example, if I do `echo "dafds"` and then `ls` in `bash`, I want the log file to have the following content, or maybe something similar:
```
bash-4.4$ echo "dafds"
dafds
bash-4.4$ ls
README.md
```
After I leave this session, this file should have all the contents I saw over the terminal during this session.
Can I achieve this?
Should I do this on server side or on client side? Thanks.<issue_comment>username_1: You could use [`tee`](http://man7.org/linux/man-pages/man1/tee.1.html) to read from stdin and stdout and redirect to a log file:
`ssh user@server | tee ssh_session.out`
Assuming that you know before making the connection that you want to log the session.
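Another option, if you prefer a dedicated utility, is the util-linux `script` command (a sketch; `-c` runs the given command and records everything printed to the terminal):
```
# Record the whole remote session into ssh_session.log
script -c "ssh user@server" ssh_session.log
```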
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can also use tmux for this.
`:capture-pane -S 100` saves the last 100 lines in a buffer.
`:save-buffer filename.txt` writes this buffer to a file.
Upvotes: 2 |
2018/03/20 | 280 | 902 | <issue_start>username_0: Suppose I'm editing file **a.txt** on a buffer, then run `:E.` and from the netrw open another file, say **b.txt**.
Now if I hit `CTRL`+`O`, I'm back on **a.txt** and not into the netrw explorer.
`:jumps` has not registered the Netrw location. Is there any way to make netrw dir locations be registered as jumps, and hence make `CTRL`+`I`, `CTRL`+`O` work as I expect?
2018/03/20 | 629 | 2,186 | <issue_start>username_0: I've searched around for a while now on how to generate a shortened url (e.g. how bit.ly or goo.gl work) but have been unsuccessful.
I presumed it would be something like:
```
baseN(hash(long_url))
```
But I always end up with a very long digest instead of something short like 6 characters.
Is it safe to just truncate the digest before encoding it (is encoding it even necessary? I believe it is for making it URL-safe, but I wanted to ask), and is there not a possibility of collisions when only dealing with six characters?
It seems like (warning: I don't know maths) a factorial of 6! (e.g. `6*5*4*3*2*1`) would result in only 720 combinations.
I also remember reading somewhere that with a hash table of 100k items, a rough calculation for the number of collisions could yield a ~17% chance of collision. That feels like a pretty large percentage to me.
The following Python code is based off my understanding of how I might do this type of url shortening:
```
import hashlib, base64
message = hashlib.sha512()
message.update("https://www.python.org/dev/peps/pep-0537/")
base64.urlsafe_b64encode(
message.hexdigest().encode("utf-8")
)[:6].decode("utf-8")
```<issue_comment>username_1: There is no effective function to do this. You need to:
1. Store the URL in a database
2. Generate a unique ID (or if you already have the url, reuse the id)
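A minimal sketch of that approach in Python (assuming the database hands you an auto-increment integer id), encoding the id in base 62 to get the short slug:
```
import string

ALPHABET = string.digits + string.ascii_letters  # 62 characters

def encode_id(n):
    """Turn a database row id into a short slug, e.g. 125 -> '21'."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))
```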
Upvotes: 2 <issue_comment>username_2: You may be looking for a bidirectional function, as mentioned in [How to code a URL shortener?](https://stackoverflow.com/questions/742013/how-to-code-a-url-shortener#742047)
But I also recommend not over-complicating things unless it is really a requirement for your scenario.
A much simpler approach would be to just keep a record of what you've mapped:
>
> ... there is no compression algorithm, but there is a lookup and generation algorithm. When a URL shortener gets a new URL, it has to create a new short URL that is not yet taken and return this. It will then store the short URL and the long URL in a key-value store and use this at lookup time.
>
>
>
<https://www.quora.com/What-are-the-http-bit-ly-and-t-co-shortening-algorithms>
Upvotes: 0 |
2018/03/20 | 663 | 1,979 | <issue_start>username_0: My environment is:
* CentOS 6.9
* Ubuntu 16.04 LTS
GNU coreutils 8.4 provides the test command, which can check a file using the `-f` option.
`man test` shows
>
>
> ```
> -f FILE
> FILE exists and is a regular file
>
> ```
>
>
The definition of the "regular file" is ambiguous for me.
On the terminal, I did
```
$ touch afile
$ ln -fs afile LN-FILE
```
Then, I executed the following script (check\_file\_exist\_180320\_exec)
```
#!/usr/bin/env bash
if [ -e LN-file ]
then
echo "file exists [-e]"
fi
if [ -f LN-file ]
then
echo "file exists [-f]"
fi
```
On both CentOS and Ubuntu, -e and -f both report true for the symlinked file (LN-FILE).
However, `ls -l` shows the `l` (symbolic link) type identifier, not `-` (regular file), for the LN-FILE file.
(See <https://linuxconfig.org/identifying-file-types-in-linux>.)
On the other hand, I found the following:
[Difference between if -e and if -f](https://stackoverflow.com/questions/10204562/difference-between-if-e-and-if-f)
>
> A regular file is something that isn't a directory / symlink / socket / device, etc.
>
>
> answered Apr 18 '12 at 7:10
>
>
> jman
>
>
>
What is the reference I should check for the "regular file" (for CentOS and Ubuntu)?<issue_comment>username_1: Note the documentation further down in `man test`
>
> Except for -h and -L, all FILE-related tests dereference symbolic
> links.
>
>
>
Basically, when you do `-f LN-FILE` and LN-FILE is a symbolic link, the `-f` test will follow that symlink and give you the result for what the symlink points to.
If you want to check if a file is a symlink or a regular file, you need to do e.g.
```
if [ -h LN-FILE ]
then
echo "file is a symlink [-h]"
elif [ -f LN-FILE ]
then
echo "file is a regular file [-f]"
fi
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: A regular file is any file that is not a directory, symbolic link, socket, or device node.
Both binary files and plain-text files are regular files.
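For illustration, GNU `stat` (also part of coreutils) reports the file type directly, without dereferencing the symlink:
```
$ stat --format=%F afile LN-FILE
regular file
symbolic link
```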
Upvotes: 1 |
2018/03/20 | 347 | 1,388 | <issue_start>username_0: I've created my own Route file using AppServiceProvider,
```
public function boot(){
$this->loadRoutesFrom('routes/test/routes.php');
}
```
The routes are registered, but they can't find the controllers:
```
Route::get('/test', 'TestController@test');
```
ReflectionException (-1)
Class TestController does not exist
Maybe I missed something? Thanks in advance.<issue_comment>username_1: You need to map the routes in your RouteServiceProvider.php. Check the example of the web routes:
```
protected function mapWebRoutes()
{
Route::group([
'middleware' => 'web',
'namespace' => $this->namespace,
], function ($router) {
require base_path('routes/web.php');
});
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: First of all, thanks to username_1.
It was easy after all: I didn't have to touch AppServiceProvider. The answer was, obviously, in RouteServiceProvider.php; I just added my customized route file to mapWebRoutes():
```
protected function mapWebRoutes()
{
Route::middleware('web')
->namespace($this->namespace)
->group(base_path('routes/web.php'));
/* My route-file */
Route::middleware('web')
->namespace($this->namespace)
->group(base_path('routes/test/test_routes.php'));
}
```
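To verify that the extra route file was picked up, you can list all registered routes:
```
php artisan route:list
```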
Upvotes: 0 |
2018/03/20 | 1,118 | 3,360 | <issue_start>username_0: I am able to work with Truffle and ganache-cli. I have deployed the contract and can interact with it using the Truffle console:
```
truffle(development)>
Voting.deployed().then(function(contractInstance)
{contractInstance.voteForCandidate('Rama').then(function(v)
{console.log(v)})})
undefined
truffle(development)> { tx:
'0xe4f8d00f7732c09df9e832bba0be9f37c3e2f594d3fbb8aba93fcb7faa0f441d',
receipt:
{ transactionHash:
'0xe4f8d00f7732c09df9e832bba0be9f37c3e2f594d3fbb8aba93fcb7faa0f441d',
transactionIndex: 0,
blockHash:
'0x639482c03dba071973c162668903ab98fb6ba4dbd8878e15ec7539b83f0e888f',
blockNumber: 10,
gasUsed: 28387,
cumulativeGasUsed: 28387,
contractAddress: null,
logs: [],
status: '0x01',
logsBloom: ... }
```
Then I started the server using `npm run dev`. The server starts fine, but it does not connect to the blockchain;
I am getting the error
```
Uncaught (in promise) Error: Contract has not been deployed to detected network (network/artifact mismatch)
```
This is my truffle.js
```
// Allows us to use ES6 in our migrations and tests.
require('babel-register')
module.exports = {
networks: {
development: {
host: '127.0.0.1',
port: 8545,
network_id: '*', // Match any network id
gas: 1470000
}
}
}
```
Can you please guide me on how I can connect?<issue_comment>username_1: In your `truffle.js`, change `8545` to `7545`.
**Or**, in Ganache (GUI), click the gear in the upper right corner and change the port number from `7545` to `8545`, then restart. With ganache-cli use `-p 8545` option on startup to set 8545 as the port to listen on.
Either way, the mismatch seems to be the issue; these numbers should match. This is a common issue.
Also feel free to check out [ethereum.stackexchange.com](https://ethereum.stackexchange.com/). If you want your question moved there, you can flag it and leave a message for a moderator to do that.
Upvotes: 0 <issue_comment>username_2: Solved the issue.
The problem was the current provider: I pointed web3 at the Ganache provider URL explicitly and it worked.
```
if (typeof web3 !== 'undefined') {
console.warn("Using web3 detected from external source like Metamask")
// Use Mist/MetaMask's provider
// window.web3 = new Web3(web3.currentProvider);
window.web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:7545"));
} else {
console.warn("No web3 detected. Falling back to http://localhost:8545. You should remove this fallback when you deploy live, as it's inherently insecure. Consider switching to Metamask for development. More info here: http://truffleframework.com/tutorials/truffle-and-metamask");
// fallback - use your fallback strategy (local node / hosted node + in-dapp id mgmt / fail)
window.web3 = new Web3(new Web3.providers.HttpProvider("http://localhost:8545"));
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Change interface to 0.0.0.0 (all interfaces) in ganache settings > server.
In truffle-config.js, use provider instead of default host config:
```
const HDWalletProvider = require("@truffle/hdwallet-provider")
networks: {
development: {
provider: () => new HDWalletProvider([
"YOUR_PRIVATE_KEY",
], "http://127.0.0.1:7545/"),
port: 7545,
network_id: "*" // Match any network id
}
}
```
Upvotes: 0 |
2018/03/20 | 933 | 3,979 | <issue_start>username_0: I want to offer users of my application to download a docx file which would contain dynamic content depending on their request.
I prepared a template with a header, because building the whole document from scratch in OpenXML is a pain.
Now I am having trouble with editing it and returning it as FileResult in my ASP MVC application.
My plan is to read the file into a `MemoryStream`, edit it as a `WordprocessingDocument`, and then return the `MemoryStream` as a `Byte[]`.
I have two problems here:
1. If I read `WordprocessingDocument` directly from file without memory stream, I don't see a way to transform it to the shape `FileResult()` wants.
2. If I create new `WordprocessingDocument` with empty `MemoryStream` I can simply create content and return it as `File` but I lack the previously prepared header with desired content.
So, how to edit a `.docx` template and return it as `FileResult()`?<issue_comment>username_1: This is how you go about creating a new spreadsheet document with `MemoryStream` and then saving it to a `Byte[]`
```
Byte[] file;
using (MemoryStream mem = new MemoryStream())
{
using (SpreadsheetDocument spreadsheetDocument =
SpreadsheetDocument.Create(mem, SpreadsheetDocumentType.Workbook))
{
WorkbookPart workbookpart = spreadsheetDocument.AddWorkbookPart();
workbookpart.Workbook = new Workbook();
WorksheetPart worksheetPart = workbookpart.AddNewPart<WorksheetPart>();
worksheetPart.Worksheet = new Worksheet(new SheetData());
Sheets sheets = spreadsheetDocument.WorkbookPart.Workbook.
AppendChild(new Sheets());
Sheet sheet = new Sheet()
{
Id = spreadsheetDocument.WorkbookPart.
GetIdOfPart(worksheetPart),
SheetId = 1,
Name = "aStupidSheet"
};
sheets.Append(sheet);
workbookpart.Workbook.Save()
spreadsheetDocument.Close();
}
file = mem.ToArray();
}
```
Upvotes: 2 <issue_comment>username_2: With the help from username_1 I managed to achieve my goal.
So the desired "workflow" is:
1. copy `.docx` template (added headers or other things) to memory
2. edit content of the document
3. return this document as `FileResult` without saving the changes to template
So here is the code:
```
public FileResult EditDOCXBody()
{
// Prepare file path
string file = "../WordTemplates/EmptyTemplate.docx";
String fullFilePath = HttpContext.ApplicationInstance.Server.MapPath(file);
// Copy file content to MemeoryStream via byte array
MemoryStream stream = new MemoryStream();
byte[] fileBytesArray = System.IO.File.ReadAllBytes(fullFilePath);
stream.Write(fileBytesArray, 0, fileBytesArray.Length); // copy file content to MemoryStream
stream.Position = 0;
// Edit word document content
using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(stream, true))
{
MainDocumentPart mainPart = wordDoc.MainDocumentPart;
Body body = mainPart.Document.Body;
// add some text to body
body.Append(new Paragraph(
new Run(
new Text("Current time is: " + DateTime.Now.ToLongTimeString()))));
// Save changes to reflect on stream object
mainPart.Document.Save();
}
return File(stream.ToArray(), "application/vnd.openxmlformats-officedocument.wordprocessingml.document", "servedFilename.docx");
}
```
Some important notes:
* you need to write the file bytes to the MemoryStream manually, otherwise you get a **Memory stream is not expandable** error. More info [here](https://stackoverflow.com/questions/20589020/adding-to-wordprocessingdocument-opened-from-memorystream-without-getting-memor)
* you have to use `mainPart.Document.Save()` for changes to take effect on `MemoryStream`
* when returning the file, you have to use `.ToArray()` otherwise the returned file is corrupted
Upvotes: 3 [selected_answer] |
2018/03/20 | 2,072 | 7,159 | <issue_start>username_0: I want to show a date picker dialog on Android. Right now I can only choose a normal (Gregorian) date. How can I convert it to the Hijri (Islamic) calendar? Here is the code I am using to show the dialog,
Code to Show Date-picker Dialog
```
private void showDOBPickerDialog(Context context, String DateString) {
try {
String defaltDate = getCurrentDate_MMDDYYYY();
if (DateString == null || DateString.isEmpty() || DateString.length() < 10)
DateString = defaltDate;
int monthOfYear, dayOfMonth, year;
monthOfYear = Integer.parseInt(DateString.substring(0, DateString.indexOf("/"))) - 1;
dayOfMonth = Integer.parseInt(DateString.substring(DateString.indexOf("/") + 1, DateString.lastIndexOf("/")));
year = Integer.parseInt(DateString.substring(DateString.lastIndexOf("/") + 1));
DatePickerDialog datePickerDialog = new DatePickerDialog(context, new DatePickerDialog.OnDateSetListener() {
@Override
public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) {
monthOfYear = monthOfYear + 1;
String Month = String.valueOf(monthOfYear), Day = String.valueOf(dayOfMonth);
if (monthOfYear < 10)
Month = "0" + monthOfYear;
if (dayOfMonth < 10)
Day = "0" + dayOfMonth;
String selectedDate = Month + "/" + Day + "/" + year;
edtTxtDateOfId.setText(selectedDate);
}
}, year, monthOfYear, dayOfMonth);
datePickerDialog.setTitle("Select Date");
datePickerDialog.show();
} catch (Exception e) {
}
}
```
To get the Current Date,
```
public static String getCurrentDate_MMDDYYYY() {
String DATE_FORMAT_NOW = "MM/dd/yyyy";
SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT_NOW);
Calendar cal = GregorianCalendar.getInstance(Locale.FRANCE);
cal.setTime(new Date());
return sdf.format(cal.getTime());
}
```<issue_comment>username_1: [As you don't want a library and need only native code](https://stackoverflow.com/questions/49380118/how-to-show-native-islamichijri-calendar#comment85761741_49380200), you can take a look at the source code of this implementation: <https://github.com/ThreeTen/threetenbp/tree/master/src/main/java/org/threeten/bp/chrono>
Take a look at the `HijrahChronology`, `HijrahDate` and `HijrahEra` classes, perhaps you can get some ideas and see how all the math is done to convert between this calendar and ISO8601 calendar.
But honestly, IMO calendars implementations are too complex and in most cases are not worth the trouble to do it by yourself. That's one of the cases where adding a library is totally worth it.
Using the [ThreeTen-Backport lib](http://www.threeten.org/threetenbp/) - and [configuring it to use with Android](https://stackoverflow.com/questions/38922754/how-to-use-threetenabp-in-android-project) - will give you an easy way to convert the dates and also to format them:
```
// get ISO8601 date (the "normal" date)
int dayOfMonth = 20;
int monthOfYear = 3;
int year = 2018;
// March 20th 2018
LocalDate dt = LocalDate.of(year, monthOfYear, dayOfMonth);
// convert to hijrah
HijrahDate hijrahDate = HijrahDate.from(dt);
// format to MM/DD/YYYY
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("MM/dd/yyyy");
String formatted = formatter.format(hijrahDate); // 07/03/1439
```
You can also call `HijrahDate.now()` to directly get the current date.
And you can convert the `hijrahDate` back to a "normal" date with `LocalDate.from(hijrahDate)`.
---
You can also use [time4j](http://time4j.net/):
```
// get ISO8601 date (the "normal" date)
int dayOfMonth = 20;
int monthOfYear = 3;
int year = 2018;
PlainDate dt = PlainDate.of(year, monthOfYear, dayOfMonth);
// convert to Hijri, using different variants
HijriCalendar hijriDateUmalqura = dt.transform(HijriCalendar.class, HijriCalendar.VARIANT_UMALQURA);
HijriCalendar hijriDateWest = dt.transform(HijriCalendar.class, HijriAlgorithm.WEST_ISLAMIC_CIVIL);
// format to MM/DD/YYYY
ChronoFormatter fmt = ChronoFormatter.ofPattern("MM/dd/yyyy", PatternType.CLDR, Locale.ENGLISH, HijriCalendar.family());
String formatted = fmt.format(hijriDateUmalqura); // 07/03/1439
// get current date
HijriCalendar now = HijriCalendar.nowInSystemTime(HijriCalendar.VARIANT_UMALQURA, StartOfDay.MIDNIGHT);
// convert back to "normal" date
PlainDate date = hijriDateUmalqura.transform(PlainDate.class);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Taking into account your statement given in one comment that **you only want a native solution and reject any extra library**, I would advise to use ICU4J-class [IslamicCalendar](https://developer.android.com/reference/android/icu/util/IslamicCalendar.html).
Sure, you have then to accept two major disadvantages:
* API-level 24 (not so widespread on mobile phones)
* Old-fashioned API-style (for example not immutable)
Another disadvantage (which is only relevant if you are also interested in the clock time) is the fact that ICU4J does not support the start of Islamic day in the evening at sunset on previous day. This feature is only supported in my lib Time4J, nowhere else. But you have probably no need for this feature in a date picker dialog.
Advantages:
* API-style similar to what is "traditional" in package `java.util`, so I assume that you are got accustomed to it (but many/most people see the style rather as negative, you have to make your own decision)
* at least [umalqura-variant](https://developer.android.com/reference/android/icu/util/IslamicCalendar.CalculationType.html)-variant of Saudi-Arabia is offered (note: other libs like Threeten-BP or Joda-Time-Android do NOT offer that variant)
* acceptable or even good degree of internationalization (also better than in ThreetenBP or Joda-Time-Android)
For completeness, if you are willing to restrict your Android app to level 26 or higher only then you can also use [java.time.chrono.HijrahChronology](https://developer.android.com/reference/java/time/chrono/HijrahChronology.html). But I think this is still too early in year 2018 because the support of mobile phones for level 26 is actually very small. And while it does offer the Umalqura variant (of Saudi-Arabia), it does not offer any other variant.
Else there are no native solutions available. And to use only native solutions is a restriction, too, IMHO.
Upvotes: 1 <issue_comment>username_3: Convert current date to hijri date
```
SimpleDateFormat sdf = new SimpleDateFormat("dd-MM-yyyy");
Date date = new Date();
sdf.applyPattern("dd");
int dayOfMonth = Integer.parseInt(sdf.format(date));
sdf.applyPattern("MM");
int monthOfYear = Integer.parseInt(sdf.format(date));
sdf.applyPattern("yyyy");
int year = Integer.parseInt(sdf.format(date));
// Build the ISO date and convert it to the Hijrah calendar
LocalDate dt = LocalDate.of(year, monthOfYear, dayOfMonth);
HijrahDate hijrahDate = HijrahDate.from(dt);
// Format the Hijri date, e.g. "03 Rajab 1439"
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd MMMM yyyy");
String formatted = formatter.format(hijrahDate);
```
Upvotes: 0 |
2018/03/20 | 1,614 | 4,901 | <issue_start>username_0: So I was making a C++ console application with multithreading, as below, and I got error 0x0000005 (an access violation).
The first time I ran it, it worked as usual. Can anyone help me with this problem?
I am using the Code::Blocks IDE with Borland C++ 5.5, and I am planning to port this to Borland C++ 5.02.
```
// headers reconstructed (angle-bracket contents were lost in the original post)
#include <windows.h>
#include <iostream>
#include <cstdio>
#include <conio.h>
using namespace std;
void linesmov(int mseconds, int y);
void linesmov(int mseconds, int y)
{
int i=0;
while (true)
{
i=i+1;
// Or system("cls"); If you may...
gotoxy(i,y);
cout << "____||____||____";
gotoxy(i-1,y);
cout << " ";
Sleep(mseconds);
if (i>115)
{
i=0;
for(int o = 0; o < 100; o++)
{
gotoxy(0,y);
cout << " ";
}
}
}
}
DWORD WINAPI mythread1(LPVOID lpParameter)
{
printf("Thread inside %d \n", GetCurrentThreadId());
linesmov(5,10);
return 0;
}
DWORD WINAPI mythread2(LPVOID lpParameter)
{
printf("Thread inside %d \n", GetCurrentThreadId());
linesmov(30,15);
return 0;
}
int main(int argc, char* argv[])
{
HANDLE myhandle1;
DWORD mythreadid1;
HANDLE myhandle2;
DWORD mythreadid2;
myhandle1 = CreateThread(0,0,mythread1,0,0,&mythreadid1);
myhandle2 = CreateThread(0,0,mythread2,0,0,&mythreadid2);
printf("Thread after %d \n", mythreadid1);
getchar();
return 0;
}
```<issue_comment>username_1: a) Neither gotoxy nor output via std::cout is thread-safe/synchronized here. You need a process-wide mutex to synchronize them.
b) The exception is likely due to the fact that you do not use `WaitForMultipleObjects` in main to wait for the threads to finish. Depending on hardware and optimization, main may exit before the threads finish their work.
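A minimal sketch of that wait, using the handles already created in the question's main (error handling omitted):
```
HANDLE handles[2] = { myhandle1, myhandle2 };
// Block until both worker threads have terminated
WaitForMultipleObjects(2, handles, TRUE, INFINITE);
CloseHandle(myhandle1);
CloseHandle(myhandle2);
```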
Upvotes: 0 <issue_comment>username_2: All of these solutions in the comments, including mine, are definitely not the way it should be done. The main problem is the lack of synchronization between the threads and the lack of handling of their termination. Also, every function should be checked for thread-safety or wrapped to make it thread-safe.
Regarding `std::cout`: since C++11 we have some data-race guarantees:
>
> Concurrent access to a synchronized (§27.5.3.4) standard iostream
> object’s formatted and unformatted input (§27.7.2.1) and output
> (§27.7.3.1) functions or a standard C stream by multiple threads shall
> not result in a data race (§1.10). [ Note: Users must still
> synchronize concurrent use of these objects and streams by multiple
> threads if they wish to avoid interleaved characters. — end note ]
>
>
>
So, according to this note, the lack of synchronization primitives is a real problem: without them the output characters may interleave.
Now consider handling thread termination.
```
HANDLE threadH = CreateThread(...);
...
TerminateThread(threadH, 0); // Terminates a thread.
WaitForSingleObject(threadH, INFINITE); // Waits until the specified object is in the signaled state or the time-out interval elapses.
CloseHandle(threadH); // Closes an open object handle.
```
[TerminateThread()](https://msdn.microsoft.com/en-us/library/windows/desktop/ms686717(v=vs.85).aspx) forcibly kills a thread, but be wary of this solution ([see why here](https://stackoverflow.com/questions/10737417/kill-a-running-thread)).
[WaitForSingleObject()](https://msdn.microsoft.com/en-us/library/windows/desktop/ms687032(v=vs.85).aspx)
And these are only the first steps toward a thread-safe design.
I would like to recommend C++ Concurrency in Action: Practical Multithreading by Anthony Williams for further reading.
### Crude solution for synchronized output
```
#include <windows.h>  // headers reconstructed; angle-bracket contents were lost in the original post
#include <iostream>
#include <mutex>
std::mutex _mtx; // global mutex
bool online = true; // or condition_variable
void gotoxy(int x, int y)
{
COORD c = { x, y };
SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), c);
}
void linesmov(int mseconds, int y) {
int i = 0;
while (online) {
i = i + 1;
// Or system("cls"); If you may...
_mtx.lock(); // <- sync here
gotoxy(i, y);
std::cout << "____||____||____";
gotoxy(i - 1, y);
std::cout << " ";
_mtx.unlock();
Sleep(mseconds);
if (i > 75)
{
i = 0;
for (int o = 0; o < 60; o++)
{
_mtx.lock(); // <- sync here
gotoxy(0, y);
std::cout << " ";
_mtx.unlock();
}
}
}
}
DWORD WINAPI mythread1(LPVOID lpParameter)
{
std::cout << "Thread 1" << GetCurrentThreadId() << std::endl;
linesmov(5, 10);
return 0;
}
DWORD WINAPI mythread2(LPVOID lpParameter)
{
std::cout << "Thread 2" << GetCurrentThreadId() << std::endl;
linesmov(30, 15);
return 0;
}
int main(int argc, char* argv[])
{
DWORD mythreadid1;
DWORD mythreadid2;
HANDLE myhandle1 = CreateThread(0, 0, mythread1, 0, 0, &mythreadid1);
HANDLE myhandle2 = CreateThread(0, 0, mythread2, 0, 0, &mythreadid2);
std::cout << "Base thread: " << GetCurrentThreadId() << std::endl;
getchar();
online = false;
WaitForSingleObject(myhandle1, INFINITE);
WaitForSingleObject(myhandle2, INFINITE);
CloseHandle(myhandle1);
CloseHandle(myhandle2);
return 0;
}
```
Upvotes: 2 [selected_answer] |
2018/03/20 | 1,026 | 2,834 | <issue_start>username_0: I have a table like this:
```
+-----+------+
| acc | CASE |
+-----+------+
| 001 | a |
| 001 | b |
| 001 | c |
| 002 | a |
| 002 | b |
| 003 | b |
| 003 | c |
| 004 | a |
| 005 | b |
| 006 | b |
| 007 | a |
| 007 | b |
| 007 | c |
| 008 | a |
| 008 | b |
| n | x |
+-----+------+
```
I have no idea how to group and count data with
```
+-----------+-----------+
| case | count_acc |
+-----------+-----------+
| a | 1 |
| b | 2 |
| c | 0 |
| a+b | 2 |
| b+c | 1 |
| a+b+c | 2 |
| a+b+c+…+x | n |
+-----------+-----------+
```
For the combined cases (a+b, b+c, ..., a+b+c+…+x) I can't work out how to group the cases and count the accounts. Do you have any idea how to group and count?<issue_comment>username_1: You can achieve this using LISTAGG in Oracle:
```
with test as ( select 001 acc , 'a' case FROM DUAL UNION
SELECT 001 , 'b' FROM DUAL UNION
SELECT 001 , 'c' FROM DUAL UNION
SELECT 002 , 'a' FROM DUAL UNION
SELECT 002 , 'b' FROM DUAL UNION
SELECT 003 , 'b' FROM DUAL UNION
SELECT 003 , 'c' FROM DUAL UNION
SELECT 004 , 'a' FROM DUAL UNION
SELECT 005 , 'b' FROM DUAL UNION
SELECT 006 , 'b' FROM DUAL UNION
SELECT 007 , 'a' FROM DUAL UNION
SELECT 007 , 'b' FROM DUAL UNION
SELECT 007 , 'c' FROM DUAL UNION
SELECT 008 , 'a' FROM DUAL UNION
SELECT 008 , 'b' FROM DUAL )
select case,count(1) from
(
SELECT count(1),acc, LISTAGG(case, '+') WITHIN GROUP (ORDER BY acc) AS case
FROM test
GROUP BY acc) GROUP BY case order by 1;
```
[SQL fiddle here](http://sqlfiddle.com/#!4/68b32/2024)
Upvotes: 0 <issue_comment>username_2: ```
-- For each acc, the MODEL clause concatenates its cases in order (e.g. 'a+b+c+');
-- rtrim strips the trailing '+', and the outer query counts accounts per combination
select b.case,count(distinct(a.acc)) as account from
test a , (select acc , rtrim(case,'+') case
from ( select acc , case , rn from test
model
partition by (acc)
dimension by (row_number() over (partition by acc order by case) rn )
measures (cast(case as varchar2(10)) case)
rules
-- walk the rows bottom-up, appending the next case to the current one
(case[any] order by rn desc = case[cv()]||'+'||case[cv()+1])
)
where rn = 1) b -- row 1 now holds the full concatenated string
where a.acc = b.acc
group by b.case
```
Upvotes: 1 <issue_comment>username_3: First you have to organize your data, grouping by ACC and aggregating the CASE values:
```
SELECT
LISTAGG (case,'+') WITHIN GROUP (ORDER BY case) case,
ACC
FROM TEST
GROUP BY ACC
```
Then you will be able to count:
```
SELECT case, count(*) FROM (
SELECT
LISTAGG (case,'+') WITHIN GROUP (ORDER BY case) case,
ACC
FROM TEST
GROUP BY ACC
) GROUP BY case
ORDER BY CASE;
```
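With the sample data from the question this yields the following counts (combinations that no account produces, such as `c` alone, will simply not appear):
```
a      1
a+b    2
a+b+c  2
b      2
b+c    1
```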
In case you don't have the LISTAGG function (below 11g, for example), refer to this website:
<http://oracle-base.com/articles/misc/string-aggregation-techniques>
Upvotes: 0 |
2018/03/20 | 847 | 2,727 | <issue_start>username_0: I am currently building a UWP app which overrides user's accent colors.
I changed it this way:
```
<!-- markup reconstructed; the resource override tags were stripped from the original post -->
<Color x:Key="SystemAccentColor">#fff001</Color>
```
Now every control in the app uses this accent color, which is pretty great!
But then the Permission Dialog still has the user's accent color. For example, I use Blue as accent color and this permission dialog doesn't use the new accent color (#fff001) but instead uses the blue one!
[Permission Dialog example](https://i.stack.imgur.com/Q2nqQ.png)
Is there any way to override the color of this permission dialog? I also already edited the Background Color in the Manifest file (Package.appxmanifest -> Visual Assets -> Background Color), still no luck.
2018/03/20 | 1,577 | 6,310 | <issue_start>username_0: I have two tables that I want to join.
Both tables contain some varchar columns that I use to join them.
However, joining and calculating on these varchar columns makes the queries slow.
So, I would like to transform these varchar columns into unique integer ids so the comparison will be faster.
```
SELECT /*do calculations*/
FROM [dbo].[messages] m WITH (NOLOCK)
JOIN [dbo].[jointable] j ON j.address = m.orig OR j.address = m.recip
```
The address, orig and recip columns hold strings; it would be better to have ids there for faster comparisons.
I realise that the part `ON j.address = m.orig OR j.address = m.recip` slows performance.
The tables that I want to join have the following structure:
```
CREATE TABLE [dbo].[jointable](
[displayname] [nvarchar](256) NULL,
[alias] [nvarchar](129) NULL,
[firstname] [nvarchar](129) NULL,
[lastname] [nvarchar](129) NULL,
[address] [nvarchar](256) NULL,
[company] [nvarchar](129) NULL,
[department] [nvarchar](129) NULL,
[office] [nvarchar](129) NULL)
CREATE TABLE [dbo].[messages](
[messageid] [bigint] NOT NULL,
[message] [varchar](150) NULL,
[orig] [nvarchar](256) NULL,
[recip] [nvarchar](256) NULL)
```
How can I do this? Is there a function that converts a varchar into a unique integer id? Thank you in advance.<issue_comment>username_1: 1. [Normalize](https://en.wikipedia.org/wiki/Database_normalization) data in your tables
2. Add int primary key to the first table
3. Add int foreign key to the second table. Set corresponding value from the first table
4. Join by int keys (a sketch of these steps follows)
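A minimal sketch of those steps for the tables in the question; the `addresses` lookup table and the new column names are assumptions for illustration:
```
-- 1+2: one row per distinct address, keyed by an int
CREATE TABLE addresses (
    address_id INT IDENTITY PRIMARY KEY,
    address    NVARCHAR(256) NOT NULL UNIQUE);

INSERT INTO addresses (address)
SELECT DISTINCT address FROM jointable;

-- 3: add int foreign keys and fill them from the lookup table
ALTER TABLE jointable ADD address_id INT REFERENCES addresses (address_id);
ALTER TABLE messages  ADD orig_id  INT REFERENCES addresses (address_id),
                          recip_id INT REFERENCES addresses (address_id);

UPDATE j SET address_id = a.address_id
FROM jointable j JOIN addresses a ON a.address = j.address;
-- (populate messages.orig_id and messages.recip_id the same way)

-- 4: join by the int keys
SELECT m.*
FROM messages m
JOIN jointable j ON j.address_id IN (m.orig_id, m.recip_id);
```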
Upvotes: 3 [selected_answer]<issue_comment>username_2: First try to add indexes on your columns (even if they are `VARCHAR`). If you are still struggling with performance, you can use the following to join by integer values.
```
-- Create a table to link a varchar with an integer
CREATE TABLE WordIndex(
WordID INT IDENTITY PRIMARY KEY,
Word VARCHAR(500))
CREATE NONCLUSTERED INDEX NCI_WordIndex_Word ON WordIndex (Word)
GO
-- Load the table with all available words
INSERT INTO WordIndex (
Word)
SELECT DISTINCT
YourVarcharColumn
FROM
YourTable
UNION
SELECT DISTINCT
YourOtherVarcharColumn
FROM
YourSecondTable
GO
-- Add the integer ID to your tables
ALTER TABLE YourTable ADD WordID INT
ALTER TABLE YourSecondTable ADD WordID INT
ALTER TABLE YourTable ADD FOREIGN KEY (WordID) REFERENCES WordIndex (WordID)
ALTER TABLE YourSecondTable ADD FOREIGN KEY (WordID) REFERENCES WordIndex (WordID)
GO
-- Optionally (but recommended) add indexes on the ID
CREATE NONCLUSTERED INDEX NCI_YourTable_WordID ON YourTable (WordID)
CREATE NONCLUSTERED INDEX NCI_YourSecondTable_WordID ON YourSecondTable (WordID)
GO
-- Update the integer ID
UPDATE T SET
WordID = W.WordID
FROM
YourTable AS T
INNER JOIN WordIndex AS W ON T.Word = W.Word
UPDATE T SET
WordID = W.WordID
FROM
YourSecondTable AS T
INNER JOIN WordIndex AS W ON T.Word = W.Word
GO
-- Join by integer
SELECT
1
FROM
YourTable AS T
INNER JOIN YourSecondTable AS N ON T.WordID = N.WordID
```
Using this approach requires maintaining the word index table.
Upvotes: 1 <issue_comment>username_3: You **can** *generate a GUID from a VARCHAR*, but I doubt you'll be happy with it (so you will need some kind of mapping table, as suggested in other answers). Just to show the principle:
If your strings are short and unique within their first 16 bytes, this might work for you:
```
DECLARE @tbl TABLE(SomeString VARCHAR(100),TheGUID UNIQUEIDENTIFIER);
--a GUID is a 16-Byte(128 bit) sized type
INSERT INTO @tbl(SomeString) VALUES
('test1')
,('Some short text')
,('Some very very very very long text')
,('Some very very very very long text which is the same as the other one in the first 16 bytes');
UPDATE @tbl SET TheGUID=CAST(CAST(SomeString AS VARBINARY(16)) AS UNIQUEIDENTIFIER);
SELECT SomeString
,TheGUID
,CAST(CAST(TheGUID AS VARBINARY(16)) AS VARCHAR(16))
FROM @tbl;
```
The result (scroll to the side)
```
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
| SomeString                                                                                   | TheGUID                              | (back to VARCHAR)  |
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
| test1 | 74736574-0031-0000-0000-000000000000 | test1 |
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
| Some short text | 656D6F53-7320-6F68-7274-207465787400 | Some short text |
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
| Some very very very very long text | 656D6F53-7620-7265-7920-766572792076 | Some very very v |
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
| Some very very very very long text which is the same as the other one in the first 16 bytes | 656D6F53-7620-7265-7920-766572792076 | Some very very v |
+---------------------------------------------------------------------------------------------+--------------------------------------+--------------------+
```
Upvotes: 2 <issue_comment>username_4: We would need to see the full table design to answer properly.
If you just add an int for the relationship and assign it once, the problem is that if you later change the data, the int relationship becomes wrong.
If you index those columns, an indexed varchar join is not much slower than an int join.
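For example (index names are illustrative):
```
CREATE INDEX IX_jointable_address ON [dbo].[jointable] (address);
CREATE INDEX IX_messages_orig  ON [dbo].[messages] (orig);
CREATE INDEX IX_messages_recip ON [dbo].[messages] (recip);
```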
The following would likely produce the same query plan, but is more compact:
```
SELECT /*do calculations*/
FROM [dbo].[messages] m
JOIN [dbo].[jointable] j ON j.address in (m.orig, m.recip)
```
Upvotes: 0 |
2018/03/20 | 807 | 2,699 | <issue_start>username_0: I want to create a local inverted theme (modern browsers). Color shades are set using CSS vars (custom properties). Some elements have high contrast, others low contrast. The inverted container has a black background, and everything within it should be reversed: dark grey should become light grey, and light grey should become dark grey.
My goal is to achieve this without reassigning the vars in CSS selectors. For this example it would be easy, but the actual code base is big and there are many selectors. So instead of that I just want to change the CSS vars. Also, I want to keep the original CSS vars editable.
**Final goal mockup**
[](https://i.stack.imgur.com/ogNlx.png)
Simple reassignment of the vars (light = dark, dark = light) does not work, obviously. I tried to transpose the values through a new placeholder var, but that also didn't work. *Maybe I was doing it wrong?* Is there a clean way? I don't think so.
I am aware of workarounds using SASS, or hacks using mix-blend-mode.
**Playground**:
<https://codepen.io/esher/pen/WzRJBy>
**Example code:**
```
<!-- markup reconstructed; tags were stripped from the original post -->
<p class="high-contrast">high contrast</p>
<p class="low-contrast">low contrast</p>
<div class="inverted">
  <p class="high-contrast">high contrast</p>
  <p class="low-contrast">low contrast</p>
</div>
```<issue_comment>username_1: What about something like this:
```css
:root {
--high-contrast: var(--high);
--low-contrast: var(--low);
--high: #222;
--low: #aaa;
/* Yes I can put them at the end and it will work, why?
Because it's not C, C++ or a programming language, it's CSS
And the order doesn't matter BUT we need to avoid
cyclic dependence between variables.
*/
}
.high-contrast {
color: var(--high-contrast)
}
.low-contrast {
color: var(--low-contrast)
}
.inverted {
--high-contrast: var(--low);
--low-contrast: var(--high);
}
```
```html
<!-- markup reconstructed from the CSS classes above -->
<p class="high-contrast">high contrast</p>
<p class="low-contrast">low contrast</p>
<div class="inverted">
  <p class="high-contrast">high contrast</p>
  <p class="low-contrast">low contrast</p>
</div>
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: One solution is to use `filter: invert(100%)`, like so:
```
.dark {
  --invert: invert(100%);
}
.light {
  --invert: invert(0%);
}
div {
  filter: var(--invert);
}
```
Whether your `div` has the `.dark` or `.light` class, you'll get the desired inversion.
```js
const button = document.getElementById('toggle');
button.addEventListener('click', () => {
const div = document.getElementsByTagName('div')[0];
div.classList.toggle('dark'); // add/remove the inverted theme
})
```
```css
.dark{
--invert:invert(100%)
}
.light{
--invert:invert(0%)
}
.to-invert {
height:100px;
width:100px;
background-color:blue;
filter:var(--invert);
margin-bottom:1rem;
}
```
```html
<!-- markup reconstructed; tags were stripped from the original post -->
<button id="toggle">Invert</button>
<div class="to-invert"></div>
```
Upvotes: 0 |
2018/03/20 | 1,468 | 4,697 | <issue_start>username_0: I have 3 tables:
customers, times and sales.
I want to compute each customer's yearly income; the conditions are that the customer has no children and the income must be greater than a limit that we set.
My table structure:
customers
```
CREATE TABLE `customers` (
`customer_id` int(11) DEFAULT NULL,
`account_num` double DEFAULT NULL,
`lname` varchar(50) DEFAULT NULL,
`fname` varchar(50) DEFAULT NULL,
`mi` varchar(50) DEFAULT NULL,
`address1` varchar(50) DEFAULT NULL,
`address2` varchar(50) DEFAULT NULL,
`address3` varchar(50) DEFAULT NULL,
`address4` varchar(50) DEFAULT NULL,
`postal_code` varchar(50) DEFAULT NULL,
`region_id` int(11) DEFAULT NULL,
`phone1` varchar(50) DEFAULT NULL,
`phone2` varchar(50) DEFAULT NULL,
`birthdate` datetime DEFAULT NULL,
`marital_status` varchar(50) DEFAULT NULL,
`yearly_income` varchar(50) DEFAULT NULL,
`gender` varchar(50) DEFAULT NULL,
`total_children` smallint(6) DEFAULT NULL,
`num_children_at_home` smallint(6) DEFAULT NULL,
`education` varchar(50) DEFAULT NULL,
`member_card` varchar(50) DEFAULT NULL,
`occupation` varchar(50) DEFAULT NULL,
`houseowner` varchar(50) DEFAULT NULL,
`num_cars_owned` smallint(6) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
sales
```
CREATE TABLE `sales` (
`product_id` int(11) DEFAULT NULL,
`time_id` int(11) DEFAULT NULL,
`customer_id` int(11) DEFAULT NULL,
`store_id` int(11) DEFAULT NULL,
`store_sales` float DEFAULT NULL,
`store_cost` float DEFAULT NULL,
`unit_sales` double DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
times
```
CREATE TABLE `times` (
`time_id` int(11) DEFAULT NULL,
`the_date` datetime DEFAULT NULL,
`the_day` varchar(50) DEFAULT NULL,
`the_month` varchar(50) DEFAULT NULL,
`the_year` smallint(6) DEFAULT NULL,
`day_of_month` smallint(6) DEFAULT NULL,
`month_of_year` smallint(6) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
My question is: find the list of customers with no children and a `yearly_income` greater than a limit given by the user when running the query.
My query is:
```
SET @limit=50;
SELECT customers.`fname`, customers.`lname` ,ROUND(SUM(sales.store_sales)) as income,times.the_year
FROM `sales`
LEFT JOIN times
ON sales.time_id=times.time_id
LEFT JOIN customers
ON customers.customer_id=sales.customer_id
WHERE income>@limit AND `total_children`=0
GROUP BY sales.customer_id,times.the_year
```
I am getting this error: #1054 - Unknown column 'income' in 'where clause'<issue_comment>username_1: `income` is an alias, so you can't use it in the WHERE clause. Wrap the aggregation in a subquery and filter in the outer query:
```
SET @limit=50;
SELECT * FROM
(
SELECT customers.`fname`, customers.`lname` ,ROUND(SUM(sales.store_sales))
as income,times.the_year
FROM `sales` LEFT JOIN times ON sales.time_id=times.time_id
LEFT JOIN customers ON customers.customer_id=sales.customer_id
WHERE `total_children`=0
GROUP BY sales.customer_id,times.the_year
)t WHERE income>@limit
```
Upvotes: 0 <issue_comment>username_2: For aggregate values that are computed in the SELECT list (and not present as columns in the original table), you should use the `HAVING` clause instead of `WHERE`:
```
SET @limit=50;
SELECT customers.`fname`, customers.`lname` ,ROUND(SUM(sales.store_sales)) as income,times.the_year
FROM `sales`
LEFT JOIN times
ON sales.time_id=times.time_id
LEFT JOIN customers
ON customers.customer_id=sales.customer_id
WHERE `total_children`=0
GROUP BY customers.customer_id,times.the_year
having income>@limit
```
Upvotes: 1 <issue_comment>username_3: The quantity you aliased as `income` is an aggregate, and therefore it does not make sense to refer to it in the `WHERE` clause. Move this `WHERE` logic to a `HAVING` clause:
```
SET @limit=50;
SELECT
c.fname,
c.lname,
ROUND(SUM(s.store_sales)) AS income,
t.the_year
FROM sales s
LEFT JOIN times t
ON s.time_id = t.time_id
LEFT JOIN customers c
ON c.customer_id = s.customer_id
WHERE
total_children = 0
GROUP BY
c.customer_id,
t.the_year
HAVING
ROUND(SUM(s.store_sales)) > @limit;
```
Note that technically we could have used the alias in the `HAVING` clause:
```
HAVING income > @limit;
```
But this would not be portable to most other databases. Also, I introduced aliases into the query, which make it easier to read.
Upvotes: 3 [selected_answer]<issue_comment>username_4: >
> Find the list of the customers with no child and an yearly\_income
> greater that a limit given by the user when running the query.
>
>
>
Are you sure you don't want a simple query like this?
```
SELECT *
FROM customers AS c
WHERE yearly_income > @limit -- but why is yearly_income a VarChar(50)?
AND total_children=0
```
Upvotes: 0 |
2018/03/20 | 696 | 1,960 | <issue_start>username_0: In my case I have a list of dicts, each of which contains several other lists of dicts.
```
l = [{
'a': [
{ 'b': 4}
]
}, {
'a': [
{ 'b': 3}
]
}]
```
What I would like to do is sort by the path ['a'][0]['b'] using jinja2's sort filter.
Is that possible in some way?<issue_comment>username_1: You can write your own custom template filter.
<http://jinja.pocoo.org/docs/dev/api/#custom-filters>
Here is a rough solution (no support for reverse, case sensitivity, etc.):
Somewhere in your app:
```
def deep_sort(value, attribute, subattribute):
return sorted(value, key=lambda element: element[attribute][0][subattribute])
environment.filters['deep_sort'] = deep_sort
```
And in your template:
```
{% for value in l|deep_sort('a', 'b') %}
{{value['a'][0]['b']}}
{% endfor %}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can try this in one line if you want:
```
l = [{
'a': [
{ 'b': 4}
]
}, {
'a': [
{ 'b': 9}
]
},
{
'a': [
{ 'b': 3}
]
}]
print(sorted(l,key=lambda x:x['a'][0]['b']))
```
output:
```
[{'a': [{'b': 3}]}, {'a': [{'b': 4}]}, {'a': [{'b': 9}]}]
```
Explanation:
Using the lambda you can access each `b` value, so sorted() effectively orders the list by those integers. Here is how the keys look:
```
print(list(map(lambda x:x['a'][0]['b'],l)))
```
output:
```
[4, 9, 3]
```
sorted() orders the list based on these keys.
Upvotes: -1 <issue_comment>username_3: You can use the `attribute` parameter in [`sort`](https://jinja.palletsprojects.com/en/3.0.x/templates/#jinja-filters.sort), where you specify `0` as the first element in the list then `b` as the key of that element, and in your case you have `attribute="a.0.b"`:
```py
your_document_list = [{
'a': [
{ 'b': 4}
]
}, {
'a': [
{ 'b': 3}
]
}]
```
```
{% for document in your_document_list | sort(attribute="a.0.b") %}
...
{% endfor %}
```
Upvotes: 2 |
2018/03/20 | 627 | 2,509 | <issue_start>username_0: I am working on an Angular 2 project and I am having an issue when trying to change a list. NgFor does not recognize the changes and keeps displaying only the list that was loaded the first time.
Here is example code where I load the whole list and, immediately after loading, reset it to null. The view still displays the whole list...
This is my component constructor, for example:
```
constructor( private songService : SongService) {
this.songService.getSongs()
.subscribe(songsList => {
this.songs = songsList;
});
this.songs = null;
}
```
and this is the html :
```
<!-- markup reconstructed; the template tags were stripped from the original post -->
<div *ngFor="let song of songs">{{ song }}</div>
```<issue_comment>username_1: Instead of null you should assign an empty array, and do the reset inside a method, otherwise it never gets called:
```
this.songService.getSongs()
.subscribe(songsList => {
this.songs = songsList;
});
clear(){
this.songs = [];
}
```
Upvotes: 1 <issue_comment>username_2: Try this
```
constructor(private songService: SongService) {
this.songService.getSongs()
.subscribe(songsList => {
this.songs = songsList;
this.reset();
});
}
reset() {
this.songs = [];
}
```
Upvotes: -1 <issue_comment>username_3: The reason why you are still seeing your list is because it is async. You can't be sure when the subscribe method is executed. It can be be direct, within seconds, take hours or not even at all. So in your case you are resetting the list before you are even getting one.
```
constructor( private songService : SongService)
this.songService.getSongs()
.subscribe(songsList => { //Might take a while before executed.
this.songs = songsList;
});
this.songs = null; //executed directly
}
```
The above explanation might be the cause of your problem, but there could also be another explanation. The constructor is only called when the component is created. Changing a router parameter doesn't necessarily create a component. Angular might re-use the component if it can.
Upvotes: 2 <issue_comment>username_4: ngFor loops in Angular sometimes fail to update the view because they don't track your items the way you would want.
To prevent that, you can use a custom trackBy function like this:
```
<div *ngFor="let song of songs; trackBy: customTB">{{ song }}</div>
```
In your TS
```
customTB(index, song) { return `${index}-${song.id}`; }
```
This way you set up a custom trackBy function that will update your view (the view wasn't updating because the tracking ID wasn't changing).
Upvotes: 3 |
2018/03/20 | 450 | 1,079 | <issue_start>username_0: The task requires producing an array `output` such that `output[i]` equals the sum of all the elements of nums except `nums[i]`.
Eg: given `[6,7,8,9]`, return `[24,23,22,21]`.
`Input = [6,7,8,9]`
The calculation behind it is:
```
0+7+8+9 = 24
6+0+8+9 = 23
6+7+0+9 = 22
6+7+8+0 = 21
Output = [ 24, 23, 22, 21 ]
```<issue_comment>username_1: You can use list comprehension:
```
In [1]: a = [6,7,8,9]
In [2]: s = sum(a)
In [3]: [s - i for i in a]
Out[3]: [24, 23, 22, 21]
```
Upvotes: 2 <issue_comment>username_2: Use `numpy` broadcasting + vectorised operations for this:
```
import numpy as np
x = np.array([6,7,8,9])
y = np.sum(x) - x
# array([24, 23, 22, 21])
```
Upvotes: 1 <issue_comment>username_3: You can use a for loop and Python's built-in sum function:
```
a = [6,7,8,9] #your input array
b = [] # initialise an empty list
for index in range(len(a)): #iterate through the list's length
b.append( sum( a[0:index] + a[index+1:] ) ) #add to two parts before and
# after the index
print(b)
```
Upvotes: 0 |
2018/03/20 | 1,141 | 3,956 | <issue_start>username_0: I'm trying to create a script to download subtitles from one specific website. Please read the comments in the code.
Here's the code:
```
import requests
from bs4 import BeautifulSoup
count = 0
usearch = input("Movie Name? : ")
search_url = "https://www.yifysubtitles.com/search?q="+usearch
base_url = "https://www.yifysubtitles.com"
print(search_url)
resp = requests.get(search_url)
soup = BeautifulSoup(resp.content, 'lxml')
for link in soup.find_all("div",{"class": "media-body"}): #Get the exact class:'media-body'
imdb = link.find('a')['href'] #Find the link in that class, which is the exact link we want
movie_url = base_url+imdb #Merge the result with base string to navigate to the movie page
print("Movie URL : {}".format(movie_url)) #Print the URL just to check.. :p
next_page = requests.get(movie_url) #Soup number 2 begins here, after navigating to the movie page
soup2 = BeautifulSoup(next_page.content,'lxml')
#print(soup2.prettify())
for links in soup2.find_all("tr",{"class": "high-rating"}): #Navigate to subtitle options with class as high-rating
for flags in links.find("td", {"class": "flag-cell"}): #Look for all the flags of subtitles with high-ratings
if flags.text == "English": #If flag is set to English then get the download link
print("After if : {}".format(links))
for dlink in links.find("td",{"class": "download-cell"}): #Once English check is done, navigate to the download class "download-cell" where the download href exists
half_dlink = dlink.find('a')['href'] #STUCK HERE!!!HERE'S THE PROBLEM!!! SOS!!! HELP!!!
download = base_url + half_dlink
print(download)
```
I'm getting the following error :
```
File "C:/Users/PycharmProjects/WhatsApp_API/SubtitleDownloader.py", line 24, in
for x in dlink.find("a"):
TypeError: 'NoneType' object is not iterable
```<issue_comment>username_1: Just change this line:
`for dlink in links.find("td",{"class": "download-cell"}):`
to this:
```
for dlink in links.find_all("td",{"class": "download-cell"}):
```
because you are running a loop over a single element rather than a list.
**Note:** The only difference is that find_all() returns a list containing the single result, while find() returns just the result.
Hope this helps you! :)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Have a look at the documentation of [`find_all()`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find) and [`find()`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find).
>
> `find_all()`:
>
>
>
> >
> > The `find_all()` method looks through a tag’s descendants and retrieves *all* descendants that match your filters.
> >
> >
> >
>
>
> `find`:
>
>
>
> >
> > The `find_all()` method scans the entire document looking for results, but sometimes you only want to find one result. If you know a
> > document only has one `<body>` tag, it’s a waste of time to scan the
> > entire document looking for more. Rather than passing in `limit=1`
> > every time you call `find_all`, you can use the `find()` method.
> >
> >
> >
>
>
>
So, you don't need to loop over the `find()` function to get the tags. You need to make the following changes in your code (removed the unnecessary `for` loops):
```
...
# Previous code is the same
soup2 = BeautifulSoup(next_page.content,'lxml')
for links in soup2.find_all("tr",{"class": "high-rating"}):
if links.find("td", {"class": "flag-cell"}).text == "English":
print("After if : {}".format(links))
half_dlink = links.find('td', {'class': 'download-cell'}).a['href']
download = base_url + half_dlink
print(download)
```
Upvotes: 0 |