Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
74,105,625 | 2 | null | 74,105,528 | 0 | null | This is the best I could come up with... it's kind of gross:
```
import collections

def agg_most_common(vals):
    matches = []
    for i in collections.Counter(vals).most_common():
        if not matches or matches[0][1] == i[1]:
            matches.append(i)
        else:
            break
    return [x[0] for x in matches]

print(df.groupby('Runner')['Training Time'].agg(agg_most_common))
```
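A tidier alternative, sketched below with hypothetical sample data shaped like the question's: `Series.mode` already returns every value tied for most common, so the `Counter` loop can be replaced with a one-line aggregation.

```python
import pandas as pd

# hypothetical sample data shaped like the question's
df = pd.DataFrame({
    "Runner": ["A", "A", "A", "B", "B"],
    "Training Time": ["<1h", "<1h", "1-2h", "1-2h", "2-3h"],
})

# Series.mode returns all values tied for most common,
# so no manual Counter loop is needed
out = df.groupby("Runner")["Training Time"].agg(lambda s: list(s.mode()))
print(out)
```

Runner A has a single mode, while Runner B's two tied values are both returned, matching the behavior of the function above.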
| null | CC BY-SA 4.0 | null | 2022-10-18T04:12:26.627 | 2022-10-18T04:26:12.770 | 2022-10-18T04:26:12.770 | 541,038 | 541,038 | null |
74,105,705 | 2 | null | 74,105,528 | 1 | null | ```
import pandas as pd

data = {
    'Date': ['2022/09/01', '2022/09/02', '2022/09/03', '2022/09/04', '2022/09/05',
             '2022/09/01', '2022/09/02', '2022/09/03', '2022/09/04', '2022/09/05',
             '2022/09/01', '2022/09/02', '2022/09/03', '2022/09/04', '2022/09/05'],
    'Runner': ['Runner A', 'Runner A', 'Runner A', 'Runner A', 'Runner A',
               'Runner B', 'Runner B', 'Runner B', 'Runner B', 'Runner B',
               'Runner C', 'Runner C', 'Runner C', 'Runner C', 'Runner C'],
    'Training Time': ['less than 1 hour', 'less than 1 hour', 'less than 1 hour', 'less than 1 hour', '1 hour to 2 hour',
                      'less than 1 hour', '1 hour to 2 hour', 'less than 1 hour', '1 hour to 2 hour', '2 hour to 3 hour',
                      '1 hour to 2 hour ', '2 hour to 3 hour', '1 hour to 2 hour ', '2 hour to 3 hour', '2 hour to 3 hour']
}
df = pd.DataFrame(data)

s = df.groupby(['Runner', 'Training Time'], as_index=False).size()
s.columns = ['Runner', 'Training Time', 'Size']
r = s.groupby(['Runner'], as_index=False)['Size'].max()

df_list = []
for index, row in r.iterrows():
    temp_df = s[(s['Runner'] == row['Runner']) & (s['Size'] == row['Size'])]
    df_list.append(temp_df)

df_report = pd.concat(df_list)
print(df_report)
df_report.to_csv('report.csv', index=False)
```
| null | CC BY-SA 4.0 | null | 2022-10-18T04:24:58.220 | 2022-10-18T04:38:01.583 | 2022-10-18T04:38:01.583 | 5,232,681 | 5,232,681 | null |
74,105,872 | 2 | null | 74,105,745 | 3 | null | The mismatch happens because `useState` runs during SSR: the server generates one random word, but when the page is rehydrated on the client side a different random word is generated. React warns you about this inconsistency.
To remedy this, you can use an empty string in the useState hook, and then update the state with a randomly generated word in a useEffect hook with an empty dependency array, which only runs on the client side when your component is mounted:
```
const [word1, setWord1] = React.useState('');
// Only runs on the client
React.useEffect(() => {
setWord1(generateWord());
}, []);
```
| null | CC BY-SA 4.0 | null | 2022-10-18T04:57:53.060 | 2022-10-18T04:57:53.060 | null | null | 395,910 | null |
74,106,382 | 2 | null | 74,106,310 | 0 | null | ```
.container {
display: flex;
flex-direction: row;
flex-wrap: wrap;
width: 50%;
}
```
```
<div class="container">
<button>Button 1</button>
<button>Button 2</button>
<button>Button 3</button>
<button>Button 4</button>
<button>Button 5</button>
<button>Button 6</button>
<button>Button 7</button>
<button>Button 8</button>
</div>
```
| null | CC BY-SA 4.0 | null | 2022-10-18T06:05:01.703 | 2022-10-18T11:47:10.277 | 2022-10-18T11:47:10.277 | 6,457,679 | 6,457,679 | null |
74,106,778 | 2 | null | 66,820,206 | 5 | null | I checked the sources of Card/Surface composables and found out that you need to have background and clip modifiers with the same shape. So for example the following Box has rounded corner shape and click ripple is cut with the same bounds:
```
val shape = RoundedCornerShape(16.dp)
Box(
    modifier = Modifier
        .background(
            color = Color.Yellow,
            shape = shape
        )
        .clip(shape)
        .clickable { onClick() },
) {
    // your content here
}
```
| null | CC BY-SA 4.0 | null | 2022-10-18T06:49:05.653 | 2022-10-18T06:49:05.653 | null | null | 4,907,704 | null |
74,106,825 | 2 | null | 72,447,056 | 0 | null | I just found the answer: we have to set the pdfView's display mode to aspect fit.
| null | CC BY-SA 4.0 | null | 2022-10-18T06:52:51.303 | 2022-10-18T06:52:51.303 | null | null | 17,374,116 | null |
74,106,993 | 2 | null | 74,051,665 | 0 | null | There may be a more efficient algorithm, but this one accomplishes the task.
1. Calculate the direction from the starting point to the first vertex, in degrees: `vi = round( degrees( atan2( Point A, Point B ) ) / 60 )`
2. Calculate the angle for the derived vertex: `vangle = vi * 60 * pi / 180`
3. Calculate the position of the first vertex: `vc = Point A's x + cos( vangle )`, `vr = Point A's y + sin( vangle )`
4. Determine the next vertex based on the smallest angle of the three possible next vertices with respect to the line from Point A to Point B: `start = if vi mod 2 == 1 then -1 else 2`, `nearest = infinity`
5. Loop over the three possible vertices: `for k = start step 2 until 3 do`
6. Determine the angle between Point B and a candidate vertex: `kangle = k * 60 * pi / 180`, `nc = vc + cos( kangle )`, `nr = vr + sin( kangle )`
7. Determine the distance between Point B and the candidate vertex: `d = distance( Point B, Point(nc, nr) )`
8. If the distance is smaller than any distance found so far, record the vertex and update the three candidate vertices (by increasing the selected vertex index): `if d < nearest then nearest = d; sc = nc; sr = nr; vi = k + 1; end`
9. Repeat from step 4 until the distance is less than or equal to 1.
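The loop above can be sketched in Python. This is a simplified version, not the exact routine from the answer: instead of tracking only the three candidate directions from steps 4-8, it greedily tries all six unit directions at multiples of 60 degrees each iteration, keeps the one closest to Point B, and stops once within distance 1 (step 9).

```python
import math

def hex_walk(ax, ay, bx, by):
    """Greedily walk from (ax, ay) toward (bx, by) in unit steps
    at multiples of 60 degrees, stopping within distance 1 of the goal."""
    path = [(ax, ay)]
    x, y = ax, ay
    while math.dist((x, y), (bx, by)) > 1:
        # try all six 60-degree directions, keep the one closest to B
        x, y = min(
            ((x + math.cos(k * math.pi / 3), y + math.sin(k * math.pi / 3))
             for k in range(6)),
            key=lambda p: math.dist(p, (bx, by)),
        )
        path.append((x, y))
    return path

path = hex_walk(0.0, 0.0, 5.0, 2.0)
print(len(path), math.dist(path[-1], (5.0, 2.0)))
```

Because the best of six directions is always within 30 degrees of the bearing to Point B, each step strictly shrinks the remaining distance while it is greater than 1, so the loop terminates.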
Cherry-picked result from this algorithm, with additional modifications:

Another result:

| null | CC BY-SA 4.0 | null | 2022-10-18T07:07:25.583 | 2022-10-18T07:07:25.583 | null | null | 59,087 | null |
74,107,049 | 2 | null | 74,104,551 | 2 | null | The error is caused by having a constant in the first place in a pair of parentheses. Scheme (or any dialect of Lisp, for that matter) expects a function in this location.
```
(1 2 3) ; an error, '1 is not a function'
(#f #t) ; an error, '#f is not a function'
```
There are many ways to edit and run Scheme code. My preference is DrRacket, which is built for the very similar Racket language. By putting the tag `#lang scheme` in the first line you have a very reasonable Scheme development system.
When I run the code, it immediately shows the problem. Your 'or' call is infix, when it should be prefix - a natural mistake when switching between infix and prefix languages.
[](https://i.stack.imgur.com/5o6KI.png)
| null | CC BY-SA 4.0 | null | 2022-10-18T07:12:29.020 | 2022-10-18T07:12:29.020 | null | null | 9,841,104 | null |
74,107,167 | 2 | null | 41,158,325 | 0 | null | ```
<Dialog
open={true}
style={{width: '200px', marginLeft: '40%', backgroundColor: 'transparent'}}
title= 'Loading'
titleStyle={{paddingTop: '0px', paddingLeft: '45px', fontSize: '15px', lineHeight: '40px'}}
BackdropProps={{invisible: true}}
>
<RefreshIndicator
style= {{display: 'inline-block'}}
size={50}
left={50}
top={30}
loadingColor="#FF9800"
status="loading"
/>
</Dialog>
```
Remove `backgroundColor: 'transparent'` and add `BackdropProps={{invisible: true}}`.
| null | CC BY-SA 4.0 | null | 2022-10-18T07:22:41.767 | 2022-10-18T07:22:41.767 | null | null | 17,073,479 | null |
74,107,415 | 2 | null | 12,528,963 | 1 | null | While `taskkill` didn't work:
```
taskkill /f /t /pid 14492
ERROR: The process with PID 14492 (child process of PID 7992) could not be terminated.
Reason: There is no running instance of the task.
```
the simpler `tskill` worked fine for me:
```
tskill 14492
```
| null | CC BY-SA 4.0 | null | 2022-10-18T07:44:14.063 | 2022-10-18T07:44:14.063 | null | null | 7,792,522 | null |
74,107,602 | 2 | null | 74,106,473 | 0 | null | Unfortunately this is a long-time open bug in VS Code:
[https://github.com/microsoft/vscode/issues/36490](https://github.com/microsoft/vscode/issues/36490)
| null | CC BY-SA 4.0 | null | 2022-10-18T07:59:03.867 | 2022-10-18T07:59:03.867 | null | null | 3,018,229 | null |
74,108,079 | 2 | null | 74,103,821 | 0 | null | Check
[https://tailwindcss.com/docs/float](https://tailwindcss.com/docs/float)
> Use float-right to float an element to the right of its container.
Check
[https://tailwindcss.com/docs/min-width](https://tailwindcss.com/docs/min-width)
> Utilities for setting the minimum width of an element (e.g. min-w-max)
| null | CC BY-SA 4.0 | null | 2022-10-18T08:40:22.523 | 2022-10-18T08:40:22.523 | null | null | 3,842,598 | null |
74,108,190 | 2 | null | 21,002,657 | 0 | null | You could determine it by timing: if the time between keypresses is less than 50 ms, then the input came from the scanner.
```
private readonly Stopwatch _stopwatch;
private long _lastKeyPressedAgo;

public Constructor()
{
    _stopwatch = new Stopwatch();
    _stopwatch.Start();
    _lastKeyPressedAgo = -1;
}

private async Task KeyDown(RoutedEventArgs routedEvent)
{
    if (routedEvent is not KeyEventArgs keyEventArgs)
    {
        return;
    }

    _stopwatch.Stop();
    _lastKeyPressedAgo = _stopwatch.ElapsedMilliseconds;
    _stopwatch.Restart();

    if (_lastKeyPressedAgo is > 0 and < 50)
    {
        // This means it's from the scanner
    }
}
```
Something like this should work. Although in this case the first key press will not be registered, you could think of workarounds for it - save it in a variable, for example, and then once you confirm it's the scanner, you know what the first press was.
| null | CC BY-SA 4.0 | null | 2022-10-18T08:50:34.793 | 2022-10-18T08:50:34.793 | null | null | 10,549,954 | null |
74,108,704 | 2 | null | 74,107,823 | 0 | null |
```
library(data.table)
setDT(dt)
dcast(dt, rowid(chain) ~ chain, value.var = "name")[, -1]
```
```
Acala Algorand Algorand-borrowed
1: Acala Dollar Algodex Algofi Lend
2: Acala LCDOT Algofi Lend Folks Finance
3: Acala Swap <NA> <NA>
```
```
dt <- data.frame(
chain = c(rep(c("Acala", "Algorand", "Algorand-borrowed"), 2), "Acala"),
name = c("Acala Dollar", "Algodex", "Algofi Lend", "Acala LCDOT", "Algofi Lend", "Folks Finance", "Acala Swap")
)
```
| null | CC BY-SA 4.0 | null | 2022-10-18T09:30:17.940 | 2022-10-18T09:30:17.940 | null | null | 10,415,749 | null |
74,108,714 | 2 | null | 73,796,134 | 0 | null | If you are on Ubuntu, you can follow this link; it helped me:
[Install the Microsoft ODBC driver for SQL Server (Linux)](https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16#17)
| null | CC BY-SA 4.0 | null | 2022-10-18T09:30:48.060 | 2022-10-18T09:30:48.060 | null | null | 6,461,354 | null |
74,109,162 | 2 | null | 74,108,334 | 0 | null | I think this kind of feature is under development: [https://github.com/recharts/recharts/issues/1541](https://github.com/recharts/recharts/issues/1541)
So, I ended up using composed charts (Scatter and Bar) [https://recharts.org/en-US/examples/LineBarAreaComposedChart](https://recharts.org/en-US/examples/LineBarAreaComposedChart) and set the fillOpacity to transparent for the Bar. But I am not sure if it is the best workaround...
| null | CC BY-SA 4.0 | null | 2022-10-18T10:03:31.170 | 2022-10-18T10:03:31.170 | null | null | 12,262,686 | null |
74,109,188 | 2 | null | 30,545,677 | 1 | null | Making lots of assumptions here. Let's draw your current history anew, to make discussing it easier:
```
.-D'-E' -- master' (after rebase)
A-B-C-D-E -- master (before rebase)
`F-G-H -- branch1 (before rebase)
```
If I understood correctly, your final history should look like this:
```
.-F'-G'-H' -- branch1 (after rebase)
.-D'-E' -- master' (after rebase)
A-B-C-D-E -- master (before rebase)
`F-G-H -- branch1 (before rebase)
```
Or, hiding the old branches (before rebase):
```
A-B-C-D'-E' -- master (after rebase)
`F'-G'-H' -- branch1 (after rebase)
```
You should be able to achieve this by specifying which commit range to rebase and what your new upstream should be:
```
git rebase --onto "E'" E branch1
```
This will copy all commits `E..branch1` and re-apply them on top of `E'`.
| null | CC BY-SA 4.0 | null | 2022-10-18T10:05:15.120 | 2022-10-18T10:05:15.120 | null | null | 112,968 | null |
74,109,284 | 2 | null | 19,319,455 | 1 | null | Like this:
[Example of plotLabel.m](https://i.stack.imgur.com/HGVOK.png)
```
function h = plotLabel(x, y, varargin)
% h = plotLabel(x, y, varargin)
% Plot like plot, but each line is labelled with an integer corresponding
% to the number of the curve in y. plotLabel uses the function ''contour'' internally.
%
% x: [optional] like plot, x is a vector or a matrix; if not present, y curves are along dimension 2.
% y: matrix containing y values to plot; y must be adapted to x
% varargin: plot arguments passed to the plotter
%
% h is the ''contour'' handle of the plot
%
% Limitation: after this plot, legend is not working, because the plot is a ''contour plot'': all curves are ONE object
%
% see also contour
%
% Matthieu RICHARD
% 2022
if nargin == 0 % Helper
    figure(14)
    subplot(311)
    x = 1:10;
    y = [x.^2 ; x+3];
    % plot(x,y)
    h = plotLabel(x, y);
    subplot(312)
    plotLabel(x', y', 'r--', 'LabelSpacing', 200); % works also, the orientation of x defines the way to plot y
    subplot(313)
    h = plotLabel(y'); % to be ''compatible with plot'': if only x is given, then curves are plotted along dimension 1
    h.LineWidth = 3; % second way to change the plot characteristics
    h.LabelSpacing = 300;
else
    if nargin == 1
        y = x';
        x = (1:size(y,2));
    end
    if isvector(x) && ~isvector(y)
        if size(x,1) > size(x,2) % x is vertical
            assert(size(x,1) == size(y,1))
            x = x'; % arrange both scenarios the same way
            y = y';
        else
            assert(size(x,2) == size(y,2))
        end
        x = repmat(x, [size(y,1), 1]); % adapt x to y by duplicating x
    end
    nbCurves = size(y,1);
    nbPts = size(y,2);
    x = cat(1, x(1,:), x); % duplicate the first row
    y = cat(1, y(1,:), y); % duplicate the first row
    z = ones(nbCurves+1, nbPts) .* (0:nbCurves)'; % levels will be the ''labels'': Curve 1 --> 1 ...
    levels = 0:nbCurves;
    [~, h] = contour(x, y, z, levels, varargin{1:end}, 'ShowText', 'on');
end
if nargout == 0
    h = [];
end
```
| null | CC BY-SA 4.0 | null | 2022-10-18T10:13:51.627 | 2022-10-18T10:24:36.410 | 2022-10-18T10:24:36.410 | 20,271,573 | 20,271,573 | null |
74,109,472 | 2 | null | 13,191,854 | 0 | null | Just adjusting the app's availability from 25 years to 24 will solve the problem
[](https://i.stack.imgur.com/U0LFZ.png)
| null | CC BY-SA 4.0 | null | 2022-10-18T10:30:39.827 | 2022-10-18T10:30:39.827 | null | null | 14,248,462 | null |
74,109,670 | 2 | null | 73,385,026 | 0 | null | If you want to load local images in Tesseract.js, you have to load them via an input tag. Here is a working example.
```
<input type="file" id="input_image" accept="image/*">
```
```
const input_image = document.getElementById("input_image");
const offscreen_canvas = new OffscreenCanvas(0, 0);
const offscreen_canvas_context = offscreen_canvas.getContext("2d");

input_image.addEventListener("change", () => {
  var file = input_image.files[0];
  if (file == undefined) return;
  var reader = new FileReader();
  reader.onload = function (event) {
    const reader_image = event.target.result;
    const image = new Image();
    image.onload = function () {
      offscreen_canvas.width = image.width;
      offscreen_canvas.height = image.height;
      offscreen_canvas_context.drawImage(image, 0, 0);
      offscreen_canvas.convertToBlob().then((blob) => {
        Tesseract.recognize(blob, "eng", {
          logger: (m) => console.log(m)
        }).then(({ data: { text } }) => {
          console.log(text);
        });
      });
    };
    image.src = reader_image;
  };
  reader.readAsDataURL(file);
});
```
| null | CC BY-SA 4.0 | null | 2022-10-18T10:45:24.633 | 2022-10-18T10:45:24.633 | null | null | 9,454,904 | null |
74,109,776 | 2 | null | 57,744,392 | 0 | null |
### iOS 15 and below
```
import SwiftUI
import SwiftUIFlowLayout
public struct HyperlinkText: View {
    private let subStrings: [StringWithLinks]

    public init(html: String) {
        let newString = html.replacingOccurrences(of: "<a href=\'(.+)\'>(.+)</a>",
                                                  with: "@&@$2#&#$1@&@",
                                                  options: .regularExpression,
                                                  range: nil)
        self.subStrings = newString.components(separatedBy: "@&@").compactMap { subString in
            let arr = subString.components(separatedBy: "#&#")
            return StringWithLinks(string: arr[0], link: arr[safe: 1])
        }
    }

    public var body: some View {
        FlowLayout(mode: .scrollable,
                   binding: .constant(false),
                   items: subStrings,
                   itemSpacing: 0) { subString in
            if let link = subString.link, let url = URL(string: link) {
                Text(subString.string)
                    .foregroundColor(Color(hexString: "#FF0000EE"))
                    .onTapGesture {
                        if UIApplication.shared.canOpenURL(url) {
                            UIApplication.shared.open(url)
                        }
                    }
                    .fixedSize(horizontal: false, vertical: true)
            } else {
                Text(subString.string).fixedSize(horizontal: false, vertical: true)
            }
        }
    }
}

struct StringWithLinks: Hashable, Identifiable {
    let id = UUID()
    let string: String
    let link: String?

    static func == (lhs: StringWithLinks, rhs: StringWithLinks) -> Bool {
        lhs.id == rhs.id
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(id)
    }
}
```
| null | CC BY-SA 4.0 | null | 2022-10-18T10:53:27.980 | 2022-10-18T10:53:27.980 | null | null | 10,481,474 | null |
74,110,551 | 2 | null | 74,110,234 | 1 | null | You can extend the `MATCH` by adding a second path `p2` and including it in the `RETURN`:
```
MATCH p = (r:Reports)<--(s:Schedules)<--(m:MDRMs)<--(br:Business_Requirements)-->(rp: Report_Logic)-->(ra: Reporting_layer_attributes),
p2 = (br)<-[:MAPPED_TO]-(ba:Business_Attributes)
where r.Report_Name ='FFIEC 031' and s.Schedule = 'RC-B - Securities'
RETURN p,p2
```
In case the `MAPPED_TO` relationship is not always present, you can use an `OPTIONAL MATCH` instead:
```
MATCH p = (r:Reports)<--(s:Schedules)<--(m:MDRMs)<--(br:Business_Requirements)-->(rp: Report_Logic)-->(ra: Reporting_layer_attributes)
where r.Report_Name ='FFIEC 031' and s.Schedule = 'RC-B - Securities'
OPTIONAL MATCH p2 = (br)<-[:MAPPED_TO]-(ba:Business_Attributes)
RETURN p,p2
```
| null | CC BY-SA 4.0 | null | 2022-10-18T11:55:10.457 | 2022-10-18T12:20:50.797 | 2022-10-18T12:20:50.797 | 1,734,996 | 1,734,996 | null |
74,110,649 | 2 | null | 952,263 | 1 | null | I would like to point out a fact that may surprise people for whom indexes in the resulting arrays are important.
Solutions presented here using sequences:
```
$contents[$node] = ...
...
$contents[] = ...
```
will generate unexpected results when directory names contain only numbers.
Example:
```
/111000/file1
/file2
/700030/file1
/file2
/456098
/file1
/file2
/999900/file1
/file2
/file1
/file2
```
Result:
```
Array
(
[111000] => Array
(
[0] => file1
[1] => file2
)
[700030] => Array
(
[456098] => Array
(
[0] => file1
[1] => file2
)
[456099] => file1 <---- someone can expect 0
[456100] => file2 <---- someone can expect 1
)
[999900] => Array
(
[0] => file1
[1] => file2
)
[999901] => file1 <---- someone can expect 0
[999902] => file2 <---- someone can expect 1
)
```
As you can see, 4 elements have indexes that continue incrementing from the last numeric directory name.
| null | CC BY-SA 4.0 | null | 2022-10-18T12:01:53.940 | 2022-10-18T12:05:54.623 | 2022-10-18T12:05:54.623 | 7,346,655 | 7,346,655 | null |
74,110,676 | 2 | null | 74,107,779 | 1 | null | I installed the font on my local machine, which I use as my SSRS server, but found out that the server runs under the admin user while the font was installed for a non-admin user only.
The problem was solved when I installed the font for all Windows users:

| null | CC BY-SA 4.0 | null | 2022-10-18T12:03:51.173 | 2022-10-18T12:03:51.173 | null | null | 9,464,594 | null |
74,111,186 | 2 | null | 74,108,454 | 0 | null | If you have Excel 365 you could use this formula in `C92` to retrieve the averages for row 55:
`=TOROW(BYROW(WRAPROWS(C55:H55,2),LAMBDA(r,AVERAGE(r))))`
`WRAPROWS` wraps the row into single rows of two columns each - then it is easy to calculate the average `BYROW`.
| null | CC BY-SA 4.0 | null | 2022-10-18T12:43:57.260 | 2022-10-18T12:43:57.260 | null | null | 16,578,424 | null |
74,111,218 | 2 | null | 74,024,366 | 0 | null |
Add this line
```
android:translationZ="10dp"
```
| null | CC BY-SA 4.0 | null | 2022-10-18T12:45:57.097 | 2022-10-18T12:45:57.097 | null | null | 12,839,091 | null |
74,111,301 | 2 | null | 74,105,722 | 0 | null | ```
<Avatar url={item.image} alt="food" />
```
to `<Avatar src={item.image} alt="food" />`
I mistakenly wrote `src` as `url` - sorry for my mistake.
| null | CC BY-SA 4.0 | null | 2022-10-18T12:52:47.430 | 2022-10-18T12:52:47.430 | null | null | 13,610,485 | null |
74,111,294 | 2 | null | 74,109,888 | 0 | null | It's probably going to help to clean things up a bit (will make things easier to digest).
- Most obvious: You have a `WS` rule with a `skip` action, so you can drop all of the `[ ]*` (and similar) stuff. This also means you don't need the `{setText(getText().trim());}` stuff.
- You can use `options { caseInsensitive = true; }` to avoid things like `IF: ('IF' | 'if');`
- A `|` in a set (`[abd|c]`) is the actual `|` character, not an `or` operator, so you don't want stuff like `\uff0c|\u3001|\uff1b|\uff1a` (it should be `\uff0c\u3001\uff1b\uff1a`)
This gives you:
```
grammar Pict
;
options {
caseInsensitive = true;
}
model: parameterRow* constraint*;
//The part of Parameters and Values of Parameters parameters: parameterRow;
parameterRow
: parameterName COLON parameterValue (',' parameterValue)*
;
parameterName: Value;
parameterValue: NUMBER | Value;
//The part of submodel submodel:;
//The part of constraints constraints: constraint+;
constraint
: predicate ';'?
| (IF | IFNOT) predicate THEN predicate (ELSE predicate)? ';'?
;
predicate: clause | (clause LogicalOperator predicate);
clause: term | '(' predicate ')' | NOT predicate;
term
: '[' parameterName ']' IN ' {' (String | NUMBER) (
',' (NUMBER | String)
)* '}' # inStatment
| '[' parameterName ']' Relation (NUMBER | String) # relationValueStatement
| '[' parameterName ']' LIKE (NUMBER | String) # likeStatement
| '[' parameterName ']' Relation '[' parameterName ']' # relationParaStatement
;
COLON: ':';
IN: 'in';
LIKE: 'like';
Relation: ('=' | '<>' | '>' | '>=' | '<' | '<=');
IF: 'if';
IFNOT: 'if not';
THEN: 'then';
ELSE: 'else';
NOT: 'not';
LogicalOperator: ('and' | 'or');
NUMBER
: '-'? INT '.' INT EXP? // 1.35, 1.35E-9, 0.3, -4.5
| '-'? INT EXP // 1e10 -3e4
| '-'? INT // -3, 45
;
Value
: LETTERNoWhiteSpace
[-.?!a-z\u4e00-\u9fa5_0-9\u3002\uff1f\uff01\uff0c\u3001\uff1b\uff1a\u201c\u201d\u2018\u2019\uff08\uff09\u300a\u300b\u3008\u3009\u3010\u3011\u300e\u300f\u300c\u300d\ufe43\ufe44\u3014\u3015\u2026\u2014\uff5e\ufe4f\uffe5]
(
' '?
[-.?!a-z\u4e00-\u9fa5_0-9\u3002\uff1f\uff01\uff0c\u3001\uff1b\uff1a\u201c\u201d\u2018\u2019\uff08\uff09\u300a\u300b\u3008\u3009\u3010\u3011\u300e\u300f\u300c\u300d\ufe43\ufe44\u3014\u3015\u2026\u2014\uff5e\ufe4f\uffe5]
)*
;
String: ('"' .*? '"') {setText(getText().trim());};
WS: [ \t\r\n]+ -> skip;
COMMENT: '#' .*? '\n' -> skip;
fragment INT: '0' | '1' ..'9' '0' ..'9'*; // no leading zeros
fragment EXP
: 'e' [+\-]? INT
; // \- since - means "range" inside [...]
fragment LETTERNoWhiteSpace: [a-z\u4e00-\u9fa5_0-9];
```
With the following errors for your input...
```
line 2:7 token recognition error at: 'a,'
line 2:10 token recognition error at: 'b,'
line 2:13 token recognition error at: 'c,'
line 2:16 token recognition error at: 'd\n'
line 4:0 missing {NUMBER, Value} at 'IF'
```
so we can see that your `Value` rule doesn't recognize single-letter values. If you modify it to:
```
Value
: LETTERNoWhiteSpace (
[-.?!a-z\u4e00-\u9fa5_0-9\u3002\uff1f\uff01\uff0c\u3001\uff1b\uff1a\u201c\u201d\u2018\u2019\uff08\uff09\u300a\u300b\u3008\u3009\u3010\u3011\u300e\u300f\u300c\u300d\ufe43\ufe44\u3014\u3015\u2026\u2014\uff5e\ufe4f\uffe5]
(
' '?
[-.?!a-z\u4e00-\u9fa5_0-9\u3002\uff1f\uff01\uff0c\u3001\uff1b\uff1a\u201c\u201d\u2018\u2019\uff08\uff09\u300a\u300b\u3008\u3009\u3010\u3011\u300e\u300f\u300c\u300d\ufe43\ufe44\u3014\u3015\u2026\u2014\uff5e\ufe4f\uffe5]
)*
)?
;
```
(Note: This rule is quite complex, and, by allowing embedded spaces, is likely to cause some problems with tokenization in more complex examples than yours, but it works fine for your sample input.)
Then there are no errors and you get the following tree:
[](https://i.stack.imgur.com/GomyL.png)
| null | CC BY-SA 4.0 | null | 2022-10-18T12:52:05.707 | 2022-10-18T12:52:05.707 | null | null | 73,764 | null |
74,111,588 | 2 | null | 74,110,708 | 0 | null | You have created the DB having granted the privileges on the `public` schema. Chances are your `admin` user is using the new DB, which only has the default privileges.
| null | CC BY-SA 4.0 | null | 2022-10-18T13:12:08.853 | 2022-10-18T13:12:08.853 | null | null | 7,635,569 | null |
74,111,630 | 2 | null | 74,110,708 | 21 | null | The first [comment](https://stackoverflow.com/questions/74110708/postgres-15-permission-denied-for-schema-public/74111630#comment130849690_74110708) nailed the most likely reason this is happening. Quoting the [release announcement](https://www.postgresql.org/about/news/postgresql-15-released-2526/#:%7E:text=PostgreSQL%2015%20also%20revokes%20the%20CREATE%20permission%20from%20all%20users%20except%20a%20database%20owner%20from%20the%20public):
> PostgreSQL 15 also revokes the `CREATE` permission from all users except a database owner from the `public` (or default) schema.
The reason your fix didn't work is that all actions you took on database `postgres` in regards to user `admin`'s privileges on schema `public` concern only that schema within the database `postgres`. Schema `public` on database `postgres` is not the same schema `public` as the one on newly created `mydb`.
Also, this:
```
GRANT ALL ON DATABASE mydb TO admin;
```
grants privileges on the database itself, not things within the database. `admin` can now drop the database, for example, still without being able to create tables in schema `public`. My guess is that you wanted to make `admin` also the owner of `mydb`, in which case you need to add
```
ALTER DATABASE mydb OWNER TO admin;
```
Or you need to repeat your `GRANT USAGE, CREATE ON SCHEMA public TO admin;` on `mydb`.
Here's some more documentation on [secure schema usage patterns](https://www.postgresql.org/docs/14/ddl-schemas.html#DDL-SCHEMAS-PATTERNS) the PostgreSQL 15 change was based on.
| null | CC BY-SA 4.0 | null | 2022-10-18T13:14:50.133 | 2023-01-22T19:32:21.270 | 2023-01-22T19:32:21.270 | 5,298,879 | 5,298,879 | null |
74,111,862 | 2 | null | 15,620,316 | 0 | null | Actually, it may have worked but a piece of advice:
make sure your reciever is not disabled bro.
See, in you manifest it written as enabled="false", maybe, later day, it will cause you error.
| null | CC BY-SA 4.0 | null | 2022-10-18T13:30:01.993 | 2022-10-18T13:30:01.993 | null | null | 14,471,944 | null |
74,112,029 | 2 | null | 74,112,012 | 2 | null | General solution is elementwise `mean`:
```
print (df)
Sub_1 Sub_2
0 [1,2,3] [4,5,3]
1 [1,7,3] [4,8,3]
```
If the lists in each cell have the same length, you can create a 3D numpy array and then compute the `mean`:
```
arr = np.mean(np.array(df.to_numpy().tolist()), axis=2)
df1 = pd.DataFrame(arr, columns=df.columns, index=df.index)
print (df1)
Sub_1 Sub_2
0 2.000000 4.0
1 3.666667 5.0
df1 = df.applymap(np.mean)
print (df1)
Sub_1 Sub_2
0 2.000000 4.0
1 3.666667 5.0
```
Or:
```
df1 = df.explode(['Sub_1','Sub_2']).groupby(level=0).mean()
```
| null | CC BY-SA 4.0 | null | 2022-10-18T13:40:59.910 | 2022-10-18T13:46:24.177 | 2022-10-18T13:46:24.177 | 2,901,002 | 2,901,002 | null |
74,112,467 | 2 | null | 74,112,218 | 0 | null | Try:
```
df = pd.read_json(
"https://www.ebi.ac.uk/thornton-srv/m-csa/api/residues/?format=json"
)
df = df.explode("residue_chains")
df = df.explode("residue_sequences")
df = df.explode("roles")
df = pd.concat(
[df, df.pop("roles").apply(pd.Series).add_prefix("roles_")], axis=1
).drop(columns="roles_0")
df = pd.concat(
[
df,
df.pop("residue_chains").apply(pd.Series).add_prefix("residue_chains_"),
],
axis=1,
).drop(columns="residue_chains_0")
df = pd.concat(
[
df,
df.pop("residue_sequences")
.apply(pd.Series)
.add_prefix("residue_sequences_"),
],
axis=1,
)
print(df.head())
```
Prints:
```
mcsa_id roles_summary function_location_abv ptm roles_group_function roles_function_type roles_function roles_emo residue_chains_chain_name residue_chains_pdb_id residue_chains_assembly_chain_name residue_chains_assembly residue_chains_code residue_chains_resid residue_chains_auth_resid residue_chains_is_reference residue_chains_domain_name residue_chains_domain_cath_id residue_sequences_uniprot_id residue_sequences_code residue_sequences_is_reference residue_sequences_resid
0 1 activator, electrostatic stabiliser, hydrogen bond acceptor, hydrogen bond donor, proton acceptor activator spectator activator EMO_00038 A 1b73 A 1 Asp 7.0 7.0 True A01 3.40.50.1860 P56868 Asp True 7
0 1 activator, electrostatic stabiliser, hydrogen bond acceptor, hydrogen bond donor, proton acceptor interaction hydrogen bond acceptor EMO_00113 A 1b73 A 1 Asp 7.0 7.0 True A01 3.40.50.1860 P56868 Asp True 7
0 1 activator, electrostatic stabiliser, hydrogen bond acceptor, hydrogen bond donor, proton acceptor electrostatic interaction spectator electrostatic stabiliser EMO_00033 A 1b73 A 1 Asp 7.0 7.0 True A01 3.40.50.1860 P56868 Asp True 7
0 1 activator, electrostatic stabiliser, hydrogen bond acceptor, hydrogen bond donor, proton acceptor interaction hydrogen bond donor EMO_00114 A 1b73 A 1 Asp 7.0 7.0 True A01 3.40.50.1860 P56868 Asp True 7
0 1 activator, electrostatic stabiliser, hydrogen bond acceptor, hydrogen bond donor, proton acceptor electrostatic interaction spectator electrostatic stabiliser EMO_00033 A 1b73 A 1 Asp 7.0 7.0 True A01 3.40.50.1860 P56868 Asp True 7
```
| null | CC BY-SA 4.0 | null | 2022-10-18T14:11:35.097 | 2022-10-18T14:11:35.097 | null | null | 10,035,985 | null |
74,113,713 | 2 | null | 74,113,662 | 0 | null | You can try it like this, assuming the DataFrame is named `df`. If you want to start from `zero`:
```
df['member_id'] = df.index
```
If you want to start from `1` (your case):
```
df['member_id'] = df.index+1
```
| null | CC BY-SA 4.0 | null | 2022-10-18T15:37:11.013 | 2022-10-18T15:44:32.397 | 2022-10-18T15:44:32.397 | 15,358,800 | 15,358,800 | null |
74,113,834 | 2 | null | 73,714,654 | 0 | null | `[email protected]` is not compatible with `bcc-0.25.0`, but it works with `bcc-0.24.0`.
I checked out the code at the desired version:
```
git clone --branch v0.24.0 https://github.com/iovisor/bcc.git
```
Then I followed the instructions to build it from source:
```
mkdir bcc/build; cd bcc/build
cmake ..
make
sudo make install
cmake -DPYTHON_CMD=python3 .. # build python3 binding
pushd src/python/
make
sudo make install
popd
```
[This issue](https://github.com/iovisor/gobpf/pull/311) has more information. There was a PR merged 12 days ago with a potential fix - it will be available in the next release of gobpf.
| null | CC BY-SA 4.0 | null | 2022-10-18T15:45:56.903 | 2022-10-18T15:45:56.903 | null | null | 182,629 | null |
74,113,935 | 2 | null | 31,235,330 | 1 | null | If you hover over / click on that red arrow, you get to see the prior version of your code (displaying what you deleted).
Example:
[](https://i.stack.imgur.com/028jV.png)
| null | CC BY-SA 4.0 | null | 2022-10-18T15:53:03.587 | 2022-10-18T15:53:03.587 | null | null | 12,031,499 | null |
74,113,947 | 2 | null | 29,500,227 | 0 | null | You added the dependency (module) to your pod, but without a version (either specific or range).
Please define the version, run `pod install`, and open the `.xcworkspace` file not `.xcodeproj` file
| null | CC BY-SA 4.0 | null | 2022-10-18T15:54:19.410 | 2022-10-18T15:54:19.410 | null | null | 5,159,093 | null |
74,114,226 | 2 | null | 74,013,658 | 0 | null | - - - - -
| null | CC BY-SA 4.0 | null | 2022-10-18T16:15:27.477 | 2022-10-18T16:15:27.477 | null | null | 10,510,070 | null |
74,114,238 | 2 | null | 66,396,425 | 3 | null | This is how I created a similar animation for a disclosure panel:
```
<Transition
enter="transition ease duration-500 transform"
enterFrom="opacity-0 -translate-y-12"
enterTo="opacity-100 translate-y-0"
leave="transition ease duration-300 transform"
leaveFrom="opacity-100 translate-y-0"
leaveTo="opacity-0 -translate-y-12"
>
```
| null | CC BY-SA 4.0 | null | 2022-10-18T16:16:57.317 | 2022-10-18T16:16:57.317 | null | null | 17,741,068 | null |
74,114,376 | 2 | null | 74,112,067 | 2 | null | my mistake was that I did not specify the fields in the select, otherwise everything is buzzing, the upper code is working
```
Select(x => new SalesReportItem
{
ProductId = x.Key.ProductId,
ProductName = x.Key.ProductName,
CompanyName = x.Key.CompanyName,
CustomerName = x.Key.CustomerName,
Quantity = x.Sum(x => (x.MovementType == TableMovementType.Income ? x.Quantity : - x.Quantity)),
Amount = x.Sum(x => (x.MovementType == TableMovementType.Income? x.Amount: - x.Amount))
});
```
Thanks for the help
Hans Kesting
| null | CC BY-SA 4.0 | null | 2022-10-18T16:26:56.343 | 2022-10-18T16:26:56.343 | null | null | 15,559,935 | null |
74,115,713 | 2 | null | 17,659,952 | 0 | null | Give each `<path>` in the SVG an id, like:
```
<svg xmlns="http://www.w3.org/2000/svg">
<path id="first-path"/>
<path id="second-path"/>
</svg>
```
Then you can select the individual paths in JS (e.g. `document.querySelector('#first-path')`) and do whatever you want with it.
To fill in the inside of the path, you could do something like this in your CSS:
```
#first-path {
cursor: pointer;
pointer-events: fill;
}
#first-path:hover {
fill: rgba(255,255,255,0.5);
}
```
| null | CC BY-SA 4.0 | null | 2022-10-18T18:19:11.950 | 2022-10-18T18:19:11.950 | null | null | 15,173,597 | null |
74,116,245 | 2 | null | 74,116,077 | 2 | null | Assuming all your keys are unique... then this (Modified Slightly):
```
project_index = [
{'A': ['1', '2', '3']},
{'B': ['4', '5', '6']},
{'C': ['7', '8', '9']},
{'D': ['10', '11', '12', '20']},
{'E': ['13', '14', '15']},
{'F': ['16', '17', '18']}
]
```
Should probably look like this:
```
project_index_dict = {}
for x in project_index:
project_index_dict.update(x)
print(project_index_dict)
# Output:
{'A': ['1', '2', '3'],
'B': ['4', '5', '6'],
'C': ['7', '8', '9'],
'D': ['10', '11', '12', '20'],
'E': ['13', '14', '15'],
'F': ['16', '17', '18']}
```
At this point, rather than re-invent the wheel... you could just use `pandas`.
```
import pandas as pd
# Work-around for uneven lengths:
df = pd.DataFrame.from_dict(project_index_dict, 'index').T.fillna('')
df.to_csv('file.csv', index=False)
```
Output `file.csv`:
```
A,B,C,D,E,F
1,4,7,10,13,16
2,5,8,11,14,17
3,6,9,12,15,18
,,,20,,
```
---
---
`csv` module method:
```
import csv
from itertools import zip_longest, chain
header = []
for d in project_index:
header.extend(list(d))
project_index_rows = [dict(zip(header, x)) for x in
zip_longest(*chain(list(*p.values())
for p in project_index),
fillvalue='')]
with open('file.csv', 'w') as f:
writer = csv.DictWriter(f, fieldnames = header)
writer.writeheader()
writer.writerows(project_index_rows)
```
| null | CC BY-SA 4.0 | null | 2022-10-18T19:05:25.697 | 2022-10-18T21:30:08.773 | 2022-10-18T21:30:08.773 | 11,865,956 | 11,865,956 | null |
74,116,318 | 2 | null | 59,308,202 | 0 | null | First you have to run in terminal
```
`venv\Scripts\activate`
in your project directory
then
`tensorboard --logdir results --port 6006`
and you got it
```
| null | CC BY-SA 4.0 | null | 2022-10-18T19:11:56.307 | 2022-10-18T19:11:56.307 | null | null | 14,660,717 | null |
74,116,564 | 2 | null | 74,087,977 | 0 | null | Try below
```
create temp function get_keys(input string) returns array<string> language js as """
return Object.keys(JSON.parse(input));
""";
create temp function get_values(input string) returns array<string> language js as """
return Object.values(JSON.parse(input));
""";
create temp function get_leaves(input string) returns string language js as '''
function flattenObj(obj, parent = '', res = {}){
for(let key in obj){
let propName = parent ? parent + '.' + key : key;
if(typeof obj[key] == 'object'){
flattenObj(obj[key], propName, res);
} else {
res[propName] = obj[key];
}
}
return JSON.stringify(res);
}
return flattenObj(JSON.parse(input));
''';
select *
from (
select
keys[safe_offset(2)] as manu_id,
keys[safe_offset(6)] as regularHours,
keys[safe_offset(7)] as key,
val
from your_table, unnest([struct(get_leaves(response) as leaves)]),
unnest(get_keys(leaves)) key with offset
join unnest(get_values(leaves)) val with offset using(offset),
unnest([struct(split(key, '.') as keys)])
where ends_with(key, 'startTime')
or ends_with(key, 'endTime')
)
pivot (any_value(val) for key in ('startTime', 'endTime'))
```
if applied to sample response you provided in your question
[](https://i.stack.imgur.com/eK3pL.png)
output is
[](https://i.stack.imgur.com/yj0YX.png)
Note: I would not expect the above to fit your requirements/expectations 100% - but at least it should give you a good direction!
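As a side note, the `get_leaves` flattening logic can be sanity-checked outside BigQuery; here is the same algorithm as a quick Python sketch (the sample input is made up):

```python
import json

def flatten_obj(obj, parent="", res=None):
    """Flatten nested dicts into dotted-key / leaf-value pairs."""
    if res is None:
        res = {}
    for key, value in obj.items():
        prop = f"{parent}.{key}" if parent else key
        if isinstance(value, dict):
            flatten_obj(value, prop, res)
        else:
            res[prop] = value
    return res

print(json.dumps(flatten_obj({"a": {"startTime": "09:00", "endTime": "17:00"}})))
```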
| null | CC BY-SA 4.0 | null | 2022-10-18T19:35:16.380 | 2022-10-18T19:35:16.380 | null | null | 5,221,944 | null |
74,116,641 | 2 | null | 74,116,249 | 2 | null | One way to do it is by parsing the HTML string with [DOMParser API](https://developer.mozilla.org/en-US/docs/Web/API/DOMParser) to turn your string into a `Document` object and then walk through it with a [TreeWalker](https://developer.mozilla.org/en-US/docs/Web/API/TreeWalker) object to get the `textContent` of each `Text` node in the HTML. The result should be an array of strings.
```
function parseTextFromMarkDown(mdString) {
const htmlString = marked(mdString);
const parser = new DOMParser();
const doc = parser.parseFromString(htmlString, 'text/html');
const walker = document.createTreeWalker(doc, NodeFilter.SHOW_TEXT);
const textList = [];
// walker.currentNode starts at the root document, so advance to the first text node
let currentNode = walker.nextNode();
while(currentNode) {
textList.push(currentNode.textContent);
currentNode = walker.nextNode();
}
return textList;
}
```
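If you ever need the same extraction outside the browser (where `DOMParser` is unavailable), Python's standard-library `html.parser` can do an equivalent walk; a rough sketch, not part of the original answer:

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Collect the text content of every text node in an HTML string."""
    def __init__(self):
        super().__init__()
        self.texts = []

    def handle_data(self, data):
        if data.strip():  # skip whitespace-only nodes
            self.texts.append(data)

collector = TextCollector()
collector.feed("<h1>Title</h1><p>Go to <em>Cafe</em> Page</p>")
print(collector.texts)
```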
| null | CC BY-SA 4.0 | null | 2022-10-18T19:43:35.073 | 2022-10-18T19:43:35.073 | null | null | 11,619,647 | null |
74,116,768 | 2 | null | 74,116,221 | 2 | null | You can use `iconSize` from `IconButton`, default value is 24
```
IconButton(
iconSize: x,// based on your need
),
```
| null | CC BY-SA 4.0 | null | 2022-10-18T19:54:54.310 | 2022-10-18T19:54:54.310 | null | null | 10,157,127 | null |
74,116,931 | 2 | null | 74,100,091 | 6 | null | > But if possible I would like to not use the Binding on ActualWidth.
Well, you need to define the width constraint somehow. `Auto` effectively means that the column will grow along with the widest element in it, i.e. the "minor" information `TextBlock` in this case.
So you should set the `Width` of the column to the `ActualWidth` of `MainInfo`, for example using a binding. Or programmatically. Either way, you have to set it one way or another.
| null | CC BY-SA 4.0 | null | 2022-10-18T20:11:59.707 | 2022-10-18T20:11:59.707 | null | null | 7,252,182 | null |
74,116,952 | 2 | null | 74,116,769 | 0 | null |
## method 01
- [merge, join, concatenate](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html)
```
import pandas as pd
data = [
['MCDONALD', 123987, 'IN', '4/1/22', '3:56:00 AM'],
['MCDONALD', 123987, 'OUT', '4/1/22', '##########'],
['MCDONALD', 123987, 'IN', '4/1/22', '3:54:00 PM'],
['MCDONALD', 123987, 'OUT', '4/1/22', '8:02:00 PM'],
['MCDONALD', 123987, 'IN', '4/2/22', '3:57:00 AM'],
['MCDONALD', 123987, 'OUT', '4/2/22', '##########'],
['MCDONALD', 123987, 'IN', '4/2/22', '3:56:00 PM'],
['MCDONALD', 123987, 'OUT', '4/2/22', '8:01:00 PM'],
['MCDONALD', 123987, 'IN', '4/3/22', '3:55:00 AM'],
['MCDONALD', 123987, 'OUT', '4/3/22', '##########'],
['MCDONALD', 123987, 'IN', '4/3/22', '3:57:00 PM'],
['MCDONALD', 123987, 'OUT', '4/3/22', '8:00:00 PM']]
pks = ['EMP NAME','EMP ID','PUNCH DATE']
cols = ['EMP NAME', 'EMP ID', 'PUNCH TYPE', 'PUNCH DATE', 'PUNCH TIME']
df = pd.DataFrame(data)
df.columns = cols
def merge_dfs(left,right):
df = pd.merge(left,right,how='outer',on=pks)
return df
left = df.loc[df['PUNCH TYPE']=='IN']
l1 = left.drop_duplicates(subset=pks, keep='first')
l2 = left.drop_duplicates(subset=pks, keep='last')
right = df.loc[df['PUNCH TYPE']=='OUT']
r1 = right.drop_duplicates(subset=pks, keep='first')
r2 = right.drop_duplicates(subset=pks, keep='last')
tmp1 = merge_dfs(l1,r1)
tmp2 = merge_dfs(l2,r2)
final = merge_dfs(tmp1,tmp2)
```
## output
```
EMP NAME EMP ID PUNCH TYPE_x_x PUNCH DATE PUNCH TIME_x_x PUNCH TYPE_y_x PUNCH TIME_y_x PUNCH TYPE_x_y PUNCH TIME_x_y PUNCH TYPE_y_y PUNCH TIME_y_y
0 MCDONALD 123987 IN 4/1/22 3:56:00 AM OUT ########## IN 3:54:00 PM OUT 8:02:00 PM
1 MCDONALD 123987 IN 4/2/22 3:57:00 AM OUT ########## IN 3:56:00 PM OUT 8:01:00 PM
2 MCDONALD 123987 IN 4/3/22 3:55:00 AM OUT ########## IN 3:57:00 PM OUT 8:00:00 PM
```
[](https://i.stack.imgur.com/PH8Iq.png)
## method 02
- [df.pivot()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html?highlight=pivot#pandas.DataFrame.pivot)
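A minimal sketch of that `df.pivot()` approach, using column names from the sample data above (real data with multiple IN/OUT punches per day would need de-duplicating first, as method 01 does):

```python
import pandas as pd

df = pd.DataFrame({
    "EMP NAME": ["MCDONALD"] * 4,
    "EMP ID": [123987] * 4,
    "PUNCH TYPE": ["IN", "OUT", "IN", "OUT"],
    "PUNCH DATE": ["4/1/22", "4/1/22", "4/2/22", "4/2/22"],
    "PUNCH TIME": ["3:56:00 AM", "8:02:00 PM", "3:57:00 AM", "8:01:00 PM"],
})

# one row per (employee, date), one column per punch type
wide = df.pivot(index=["EMP NAME", "EMP ID", "PUNCH DATE"],
                columns="PUNCH TYPE", values="PUNCH TIME").reset_index()
print(wide)
```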
## aside
-
```
def funk(x):
# do something
pass
df.column_name.apply(lambda x: funk(x))
```
| null | CC BY-SA 4.0 | null | 2022-10-18T20:13:49.940 | 2022-10-18T20:30:34.503 | 2022-10-18T20:30:34.503 | 14,343,465 | 14,343,465 | null |
74,116,969 | 2 | null | 74,116,159 | 1 | null | Amazon Athena is a that can perform queries on objects stored in Amazon S3. It . If you want to modify those input files in-place, then you'll need to find another way to do it.
However, it is possible for Amazon Athena to with the output files stored in a different location. You could use the existing files as and then store new files as .
The basic steps are:
- - `CREATE TABLE AS``location``SELECT`
See: [Creating a table from query results (CTAS) - Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/ctas.html)
| null | CC BY-SA 4.0 | null | 2022-10-18T20:16:03.620 | 2022-10-18T20:16:03.620 | null | null | 174,777 | null |
74,117,230 | 2 | null | 74,116,382 | 0 | null | Note that this diagram is not meant to explain what Sagas and CQRS are. In fact, looking at it this way it is quite confusing. What this diagram is telling you is what patterns you can use to read and write data that spans multime microservices. It is saying that in order to write data (somehow transactionally) across multiple microservices you can use Sagas and in order to read data which belongs to multiple microservices you can use CQRS. But that doesn't mean that Sagas and CQRS have anything in common. They are two different patterns to solve completely different problems (reads and writes). To make an analogy, it's like saying that to make pizzas (Write) you can use an oven and to view the pizzas menu (Read) you can use a tablet.
On the specific patterns:
1. Sagas: you can see them as a process manager or state machine. Note that they do not implement transactions in the RDBMS sense. Basically, they allow you to create a process that will take care of telling each microservice to do a write operation and if one of the operations fails, it'll take care of telling the other microservices to rollback (or compensate) the action that they did. So, these "transactions" won't be atomic, because while the process is running some microservices will have already modified the data and others won't. And it is not guaranteed that whatever has succeeded can successfully be rolled back or compensated.
2. CQRS (Command Query Responsibility Segregation): suggests the separation of Commands (writes) and Queries (Reads). The reason for that, it is what I was saying before, that the reads and writes are two very different operations. Therefore, by separating them, you can implement them with the patterns that better fit each scenario. The reason why CQRS is shown in your diagram as a solution for reading data that comes from multiple microservices is because one way of implementing queries is to listen to Domain Events coming from multiple microservices and storing the information in a single database so that when it's time to query the data, you can find it all in a single place. An alternative to this would be Data Composition. Which would mean that when the query arrives, you would submit queries to multiple microservices at that moment and compose the response with the composition of the responses.
> So can I consider CQRS as an advanced version of saga which increases the speed of reads?
Personally I would not mix the concepts of CQRS and Sagas. I think this can really confuse you. Consider both patterns as two completely different things and try to understand them both independently.
| null | CC BY-SA 4.0 | null | 2022-10-18T20:40:53.583 | 2022-10-18T20:40:53.583 | null | null | 352,826 | null |
74,117,233 | 2 | null | 74,116,077 | 1 | null | My solution does not use Pandas. Here is the plan:
- Write the header row from the first key of each dictionary
- `zip` the value lists together to produce the data rows
```
import csv
def first_key(d):
"""Return the first key in a dictionary."""
return next(iter(d))
def first_value(d):
"""Return the first value in a dictionary."""
return next(iter(d.values()))
with open("output.csv", "w", encoding="utf-8") as stream:
writer = csv.writer(stream)
# Write the header row
writer.writerow(first_key(d) for d in project_index)
# Write the rest
rows = zip(*[first_value(d) for d in project_index])
writer.writerows(rows)
```
Contents of output.csv:
```
A,B,C,D,E,F
1,4,7,10,13,16
2,5,8,11,14,17
3,6,9,12,15,18
```
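One caveat: `zip` stops at the shortest value list, so an extra value like the fourth entry `'20'` in `D` is silently dropped. If you need to keep such values, `itertools.zip_longest` pads the shorter columns instead; a quick sketch:

```python
from itertools import zip_longest

columns = [["1", "2", "3"], ["4", "5", "6"], ["10", "11", "12", "20"]]

rows = list(zip(*columns))                                # drops the extra "20"
rows_padded = list(zip_longest(*columns, fillvalue=""))   # keeps it, padded

print(rows)
print(rows_padded)
```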
| null | CC BY-SA 4.0 | null | 2022-10-18T20:41:09.420 | 2022-10-18T20:41:09.420 | null | null | 459,745 | null |
74,117,584 | 2 | null | 74,116,911 | 1 | null | Your Vite code has two extra lights
```
const aLight = new THREE.AmbientLight(0xffffff, 0.8)
const pLight = new THREE.PointLight( 0xffffff, 2.0, 200, 2 )
scene.add(aLight, pLight);
```
Of course it's going to look brighter than your previous code.
| null | CC BY-SA 4.0 | null | 2022-10-18T21:19:23.160 | 2022-10-18T21:19:23.160 | null | null | 2,608,515 | null |
74,117,854 | 2 | null | 21,646,738 | 0 | null | ```
function* chunks(array, size) {
for (let i = 0; i < array.length; i += size) {
yield array.slice(i, i + size);
}
}
function hexToRgba(hex, opacity = 1) {
const arr = hex.replace("#", "").split("");
return [...chunks(arr, arr.length === 6 ? 2 : 1)].reduce(
(accum, cv, index, array) => {
const lastIndex = array.length - 1 === index;
const int = parseInt(
  // two-char chunks ("ee") are used as-is; one-char chunks get doubled ("e" -> "ee")
  cv.length === 2 ? cv.join("") : cv[0] + cv[0],
  16
);
return accum + int + (lastIndex ? `,${opacity})` : ",");
},
"rgba("
);
}
console.log(hexToRgba("#eee", 1));
```
With a generator and reduce. Can be used with or without a `#`.
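For comparison, the same conversion is fairly compact in Python (a side sketch, not part of the original answer):

```python
def hex_to_rgba(hex_str, opacity=1):
    h = hex_str.lstrip("#")
    if len(h) == 3:                        # expand shorthand: "eee" -> "eeeeee"
        h = "".join(c * 2 for c in h)
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return f"rgba({r},{g},{b},{opacity})"

print(hex_to_rgba("#eee", 1))
```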
| null | CC BY-SA 4.0 | null | 2022-10-18T21:55:11.377 | 2022-10-18T23:10:34.897 | 2022-10-18T23:10:34.897 | 12,369,920 | 12,369,920 | null |
74,117,926 | 2 | null | 24,114,676 | 7 | null | I got this error because I tried to push without committing
So tried
```
git add .
git commit -m "message"
git push -f
```
Then it worked well for me
| null | CC BY-SA 4.0 | null | 2022-10-18T22:03:49.270 | 2022-10-18T22:03:49.270 | null | null | 19,558,306 | null |
74,117,931 | 2 | null | 26,889,970 | 0 | null | I have updated my Intellij IDEA version to 2022.2.3 like below:
> IntelliJ IDEA 2022.2.3 (Ultimate Edition)Build #IU-222.4345.14Runtime version: 17.0.4.1+7-b469.62 aarch64
It solved my problem.
| null | CC BY-SA 4.0 | null | 2022-10-18T22:04:29.777 | 2022-10-18T22:04:29.777 | null | null | 3,052,880 | null |
74,118,257 | 2 | null | 74,118,232 | 1 | null | ```
pd.DataFrame( current_df.set_index('car').stack() )
```
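Applied to a frame shaped like the question's, this gives one row per (car, salesman) pair; a quick sketch:

```python
import pandas as pd

current_df = pd.DataFrame({
    "car": ["honda crv", "mazda cx5"],
    "john": [9000, 9300],
    "peter": [9100, 10000],
})

# stack() moves the salesman columns into a second index level
long_df = pd.DataFrame(current_df.set_index("car").stack())
print(long_df)
```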
| null | CC BY-SA 4.0 | null | 2022-10-18T22:54:19.697 | 2022-10-18T22:54:19.697 | null | null | 11,243,998 | null |
74,118,335 | 2 | null | 74,118,239 | 0 | null | What is probably the case is that `"Äa"` is three UTF-8 encoded bytes in the source file (Equivalent to `char[4]{ 0xC3, 0x84, 'a', '\0' }`), and the QString constructor expects UTF-8 encoded data.
The 65533 character (U+FFFD) is the [replacement character](https://doc.qt.io/qt-6/qchar.html#SpecialCharacter-enum) for the invalid UTF-8 data.
Use [QString::fromLatin1](https://doc.qt.io/qt-6/qstring.html#fromLatin1):
```
myQString::myQString(const char* p) : QString(QString::fromLatin1(p, std::strlen(p)))
{
ENTER_FUNCTION;
}
myQString::myQString(const QByteArray& ba) : QString(QString::fromLatin1(ba))
{
ENTER_FUNCTION;
}
```
Also consider using [QLatin1StringView](https://doc-snapshots.qt.io/qt6-dev/qlatin1stringview.html) instead of `char*` to avoid getting confused about encoding (might be called `QLatin1String` in older QT versions)
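The same mismatch is easy to reproduce in any language; for instance in Python, just to illustrate the encoding issue (this is unrelated to Qt itself):

```python
raw = b"\xc4a"  # "Äa" encoded as Latin-1 (one byte per character)

# Interpreted as UTF-8, 0xC4 is an incomplete multi-byte sequence, so it
# decodes to the U+FFFD replacement character (code point 65533):
print(raw.decode("utf-8", errors="replace"))

# Interpreted as Latin-1, it round-trips correctly:
print(raw.decode("latin-1"))
```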
| null | CC BY-SA 4.0 | null | 2022-10-18T23:07:55.770 | 2022-10-18T23:07:55.770 | null | null | 5,754,656 | null |
74,118,358 | 2 | null | 74,118,232 | 2 | null | Building on Simons answer (which should be accepted), to also get the correct headers add the following.
```
current_df = (
pd
.DataFrame(current_df.set_index("car").stack())
.rename(columns={0: "price"})
.rename_axis(("car", "salesman"))
)
print(current_df)
price
car salesman
honda crv john 9000
peter 9100
kate 9200
mazda cx5 john 9300
peter 10000
kate 10100
john 29300
peter 310000
kate 510100
```
| null | CC BY-SA 4.0 | null | 2022-10-18T23:12:44.943 | 2022-10-18T23:12:44.943 | null | null | 3,249,641 | null |
74,118,514 | 2 | null | 63,285,651 | 1 | null | In my case I had specified the paths property on the tsconfig.json file but I was still facing the error on vscode but whenever I was running eslint via CLI the issue wasn't reported.
If you read the description of the eslint plugin on vscode there's a section that talks about `eslint.workingDirectories` which fixed it for me when I set that option to `{"mode": "auto"}` in the vscode settings
| null | CC BY-SA 4.0 | null | 2022-10-18T23:47:47.623 | 2022-10-18T23:47:47.623 | null | null | 1,823,109 | null |
74,118,989 | 2 | null | 74,118,713 | 0 | null | Those aren't exactly rebase options. Compare to what happens when you type in a Google query:
```
how
how
how
Howie Mandel
Howl's Moving Castle
```
Your question is a bit like asking why these options popped up for `howitzer` in the middle of the question.
The screenshot you show—by the way, don't use screenshots if at all possible, but if you do, inline them (I did that for you here)—have brought up two basic Git commands:
- `git pull``git fetch``git merge``git rebase``git pull`- `git rebase``git rebase`
The third option you got was to run a VSCode-specific thing that VSCode calls "git sync". This is not a command at all: instead, it's a sort of VSCode action that runs two Git commands for you, namely `git pull` followed by `git push`. The `pull` itself a Git command, which then runs the two Git commands (fetch, plus whatever second one you've chosen by configuration, although in this case perhaps the selector overrides the configuration and makes the second command be `git rebase` anyway). See [What does git sync do in VSCode](https://stackoverflow.com/q/36878344/1256452) for more about this.
| null | CC BY-SA 4.0 | null | 2022-10-19T01:34:02.347 | 2022-10-19T01:34:02.347 | null | null | 1,256,452 | null |
74,119,166 | 2 | null | 74,119,081 | 0 | null | What you might be looking for is the grid-column property. There are many uses for this property, but the below example should be applied to the 7th div in question.
```
grid-column: 2;
```
This is a very basic answer, and will help if you're not planning to dynamically change the number of divs, or add more in the future. If this is not what you're looking for exactly, give me more information on what the grid is going to be used for, or what you're trying to achieve with it, so I can assist further :)
| null | CC BY-SA 4.0 | null | 2022-10-19T02:13:26.517 | 2022-10-19T02:13:26.517 | null | null | 18,927,044 | null |
74,119,337 | 2 | null | 74,116,249 | 1 | null | While I think Emiel already gave the best answer, another approach would be to use the abstract syntax tree created by Marked's parser, [mdast](https://github.com/syntax-tree/mdast-util-from-markdown). Then we can walk the syntax tree extracting all the text, combining it into reasonable output. One approach looks like this:
```
const astToText = ((types) => ({type, children = [], ...rest}) =>
(types [type] || types .default) (children .map (astToText), rest)
)(Object .fromEntries (Object .entries ({
'default': () => ` *** Missing type: ${type} *** `,
'root': (ns) => ns .join ('\n'),
'heading, paragraph': (ns) => ns .join ('') + '\n',
'text, code': (ns, {value}) => value,
'html': (ns, {value}) =>
new DOMParser () .parseFromString (value, 'text/html') .textContent,
'listItem, link, emphasis': (ns) => ns .join (''),
'list': (ns, {ordered}) => ordered
? ns .map ((n, i) => `${i + 1} ${n}`) .join ('\n')
: ns .map ((n) => `• ${n}`) .join ('\n'),
'image': (ns, {title, url, alt}) => `Image "${title}" ("${alt}" - ${url})`,
// ... probably many more
}) .flatMap (([k, v]) => k .split (/,\s*/) .map (n => [n, v]))))
// import {fromMarkdown} from 'mdast-util-from-markdown'
// const ast = fromMarkdown (<your string>)
// dummy version
const ast = {type: "root", children: [{type: "heading", depth:1, children: [{type: "text", value: "Some Page Title", children: []}]}, {type: "paragraph", children: [{type: "html", value: '<a href="cafe" target="_blank">', children: []}, {type: "text", value: "Go to Cafe Page", children: []}, {type: "html", value: "</a>", children: []}]}, {type: "code", lang:null, meta:null, value: "<Cafe host>/portos/cafe", children: []}, {type: "heading", depth:2, children: [{type: "text", value: "Links", children: []}]}, {type: "list", ordered:!1, start:null, spread:!1, children: [{type: "listItem", spread:!1, checked:null, children: [{type: "heading", depth:5, children: [{type: "link", title:null, url: "#cafe_tacos", children: [{type: "text", value: "Tacos", children: []}]}]}]}, {type: "listItem", spread:!1, checked:null, children: [{type: "heading", depth:5, children: [{type: "link", title:null, url: "#cafe_burritos", children: [{type: "text", value: "Burritos", children: []}]}]}]}, {type: "listItem", spread:!1, checked:null, children: [{type: "heading", depth:5, children: [{type: "link", title:null, url: "#cafe_bebidas", children: [{type: "text", value: "Bebidas", children: []}]}]}]}]}, {type: "heading", depth:2, children: [{type: "text", value: "Overview", children: []}]}, {type: "paragraph", children: [{type: "text", value: "This is the overview text for the page. 
I really like tacos and burritos.", children: []}]}, {type: "paragraph", children: [{type: "link", title:null, url: "some/path/to/images/hello.png", children: [{type: "image", title: "Tacos", url: "some/path/to/images/hello.png", alt: "Taco Tabs", children: []}]}]}, {type: "heading", depth:2, children: [{type: "text", value: "Dining ", children: []}, {type: "html", value: '<a name="dining">', children: []}, {type: "html", value: "</a>", children: []}]}, {type: "paragraph", children: [{type: "text", value: "Dining is foo bar burrito taco mulita.", children: []}]}, {type: "paragraph", children: [{type: "link", title:null, url: "some/path/to/images/hello2.png", children: [{type: "image", title: "Cafe Overview", url: "some/path/to/images/hello2.png", alt: "Cafe Overview", children: []}]}]}, {type: "paragraph", children: [{type: "text", value: "The cafe has been open since 1661. It has lots of food.", children: []}]}, {type: "paragraph", children: [{type: "text", value: "It was declared the top 1 cafe of all time.", children: []}]}, {type: "heading", depth:3, children: [{type: "text", value: "How to order food", children: []}]}, {type: "paragraph", children: [{type: "text", value: "You can order food by ordering food.", children: []}]}, {type: "html", value: '<div class="alert alert-info">\n <strong> Note: </strong> TACOS ARE AMAZING.\n</div>', children: []}]}
console .log (astToText (ast))
```
```
.as-console-wrapper {max-height: 100% !important; top: 0}
```
The advantage of this approach over the plain HTML one is that we can decide how certain nodes are rendered in plain text. For instance, here we choose to render this image markup:
```

```
as
```
Image "Tacos" ("Taco Tabs" - some/path/to/images/hello.png)
```
Of course HTML nodes are still going to be problematic. Here I use `DOMParser` and `.textContent`, but you could just add it to `text, code` to include the raw HTML text.
Each function passed to the configuration receives a list of already formatted children as well as the remainder of the node,
| null | CC BY-SA 4.0 | null | 2022-10-19T02:49:00.023 | 2022-10-19T02:49:00.023 | null | null | 1,243,641 | null |
74,119,423 | 2 | null | 74,119,081 | 0 | null | please use "direction: rtl".
```
.grid-container {
width: 100%;
display: grid;
grid-template-columns: 1fr 1fr 1fr 1fr;
gap: 16px;
grid-gap: 16px;
direction: rtl;
}
.grid-item {
width: 100%;
height: 30px;
border: 1px solid black;
}
```
```
<div class="grid-container">
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
</div>
```
| null | CC BY-SA 4.0 | null | 2022-10-19T03:05:08.220 | 2022-10-19T03:05:08.220 | null | null | 20,246,691 | null |
74,119,494 | 2 | null | 74,065,606 | 2 | null | I had a similar issue with my Android emulator. I fixed it by changing the Emulated Performance option of the emulator to "Software" from "Hardware". You can find this setting by editing your emulator using the pencil button in the Device Manager, and then the setting is under "Verify Configuration".
| null | CC BY-SA 4.0 | null | 2022-10-19T03:21:09.970 | 2022-10-19T03:21:09.970 | null | null | 6,851,010 | null |
74,119,776 | 2 | null | 74,119,705 | 0 | null | `read()` returns 0 when EOF is reached, ie when the peer has closed the TCP connection on its end.
The data you have shown is not larger than your buffer, so the first `read()` receives all of the data, and there is nothing left for the second `read()` because the server closed the connection after sending the data.
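The usual pattern, in any language, is therefore to loop until the read returns 0 instead of issuing a fixed number of reads. A sketch of that loop in Python (the helper name is made up):

```python
import socket

def recv_all(sock, bufsize=4096):
    """Read until the peer closes the connection (recv returns b'')."""
    chunks = []
    while True:
        chunk = sock.recv(bufsize)
        if not chunk:        # 0 bytes read: EOF, the peer closed its end
            break
        chunks.append(chunk)
    return b"".join(chunks)

a, b = socket.socketpair()   # stand-in for a real TCP connection
b.sendall(b"+OK Hello there\r\n")
b.close()
print(recv_all(a))
```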
| null | CC BY-SA 4.0 | null | 2022-10-19T04:11:16.687 | 2022-10-19T04:11:16.687 | null | null | 65,863 | null |
74,120,409 | 2 | null | 71,656,886 | 1 | null | You need to increase the number of lines in the editor.foldingMaximumRegions.
| null | CC BY-SA 4.0 | null | 2022-10-19T05:44:01.120 | 2022-10-19T05:44:01.120 | null | null | 8,858,694 | null |
74,120,464 | 2 | null | 74,119,285 | 0 | null | ```
# Just add header=None, since the first line of the txt file is otherwise
# treated as the header, which is why duplicate column names get mangled.
import pandas as pd

readfile = pd.read_csv(r'text.txt', header=None)
readfile.to_csv(r'CSV.csv', index=None)

# Sample output of readfile:
0 1 2 3 4 5 6 7 8
0 1 2 3 5 0.0 0.0 0.0 4 6
```
| null | CC BY-SA 4.0 | null | 2022-10-19T05:51:31.970 | 2022-10-19T05:51:31.970 | null | null | 11,277,281 | null |
74,120,628 | 2 | null | 19,792,398 | 0 | null | I have met same question about av_read_frame return EOF(End Of File) while decoding realtime stream. Finanlly I found that when this problem appears. It's Because I set the AVFormatCtx.interrupt_callback.callback, and the number of timeout is too small(this call back can prevent av_read_frame() blocking). So When The callback return, av_read_frame() return EOF. Hope this question I met may help you.
| null | CC BY-SA 4.0 | null | 2022-10-19T06:13:11.763 | 2022-10-19T06:13:11.763 | null | null | 20,279,452 | null |
74,121,115 | 2 | null | 11,328,411 | 0 | null | GitLab and others offer a setting called Semi-linear history which will only allow you to merge a merge request if it is a direct descendent of the target branch.
This means you will need to rebase one last time before being able to merge a finished MR/PR. Of course, this will not prevent merging a branch which itself has merge requests in it.
I'd still challenge your DevOps teams workflow. Why can't they use `git log` with `--no-merges`, `--first-parent`, `--date-order`/`--topo-order` or similar to get the commit view they require?
| null | CC BY-SA 4.0 | null | 2022-10-19T07:00:57.877 | 2022-10-19T07:00:57.877 | null | null | 112,968 | null |
74,121,138 | 2 | null | 30,730,644 | 0 | null | In my case, it happened when I was using Docker with Oracle 19C.
The workaround is to find the listener.ora file, change the 'PORT', and restart the container, the Oracle DB, and the listener.
The error presumably occurred because another process on the host was already in the LISTEN state on that TCP port.
(When accessing Docker, consider that in most cases, you are accessing localhost.)
[](https://i.stack.imgur.com/emTJ8.png)
I changed the port to 1523, and all the problems were solved.
| null | CC BY-SA 4.0 | null | 2022-10-19T07:03:50.897 | 2022-10-24T02:57:07.377 | 2022-10-24T02:57:07.377 | 7,054,854 | 7,054,854 | null |
74,121,170 | 2 | null | 74,117,128 | 0 | null | I would:
1. Compute the BBOX or OBB of the PCL. In case your PCL can have any orientation, use the OBB, or simply find the 2 most distant points in the PCL and use that as the major direction.
2. Sort the PCL by the major axis of the BBOX (the biggest side of the BBOX or OBB). In case your data always has the same orientation you can skip #1; for a non-axis-aligned orientation just sort by dot(pnt[i]-p0, p1-p0), where p0, p1 are the endpoints of the major side of the OBB (or the most distant points in the PCL) and pnt[i] are the points from your PCL.
3. Use a sliding average to filter out noise so just a "curve" remains, not the zig-zag pattern your filtered image shows.
4. Threshold the slope change. Call the detected changes + (increasing slope) and - (decreasing slope), remember the position (index in the sorted PCL) of each, and then detect these patterns:

UP (positive peak): + - (here is your UP) -

DOWN (negative peak): - + (here is your DOWN) +

To obtain the slope you can simply use atan2 ...
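For step #3, the sliding average is just a moving mean over the sorted points; a minimal sketch (the window size is something you tune to your noise level):

```python
def sliding_average(values, window=5):
    """Simple centered moving average; the window shrinks at the edges."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

print(sliding_average([0, 10, 0, 10, 0], window=3))
```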
| null | CC BY-SA 4.0 | null | 2022-10-19T07:06:14.397 | 2022-10-19T07:11:26.110 | 2022-10-19T07:11:26.110 | 2,521,214 | 2,521,214 | null |
74,121,180 | 2 | null | 74,121,080 | 0 | null | ```
import 'dart:io';
void main() {
print('Please enter your name:');
String? name = stdin.readLineSync();
print('$name');
}
```
| null | CC BY-SA 4.0 | null | 2022-10-19T07:07:10.240 | 2022-10-19T07:07:10.240 | null | null | 11,922,179 | null |
74,121,194 | 2 | null | 74,121,150 | 0 | null | You're using the `splice` function without a second argument, which will delete every element from the index until the end of the array:
[https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice)
| null | CC BY-SA 4.0 | null | 2022-10-19T07:07:57.690 | 2022-10-19T07:07:57.690 | null | null | 80,911 | null |
74,121,407 | 2 | null | 74,118,546 | 0 | null | Without any example code/output it's a little difficult to understand what you mean. But you can set the `vmin` and `vmax` keywords on the `norm` that you use. If you omit that, Matplotlib will derive them from the data used, and they will indeed change if the data changes. This is true for other normalizations as well, not just the `PowerNorm`.
```
norm = mpl.colors.PowerNorm(gamma=1./8., vmin=0, vmax=2)
```
If you need to use the same properties for many plots you could put them in a dict once, up front. And unpack that whenever you call the plotting method, like:
```
style = {
"cmap": cmap, # mpl.colormaps["Blues"].copy() ...
"norm": mpl.colors.PowerNorm(gamma=1./8., vmin=0, vmax=2),
}
ax.pcolormesh(t, f, Zxx, **style)
```
[https://matplotlib.org/stable/api/_as_gen/matplotlib.colors.PowerNorm.html#matplotlib.colors.PowerNorm](https://matplotlib.org/stable/api/_as_gen/matplotlib.colors.PowerNorm.html#matplotlib.colors.PowerNorm)
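With `vmin`/`vmax` fixed, a given value always maps to the same fraction of the colormap no matter which data array it appears in; a quick check:

```python
import matplotlib as mpl
import numpy as np

norm = mpl.colors.PowerNorm(gamma=1.0 / 8.0, vmin=0, vmax=2)

a = norm(np.array([0.0, 1.0, 2.0]))
b = norm(np.array([1.0, 1.5]))
print(a[1], b[0])  # the value 1.0 normalizes identically in both arrays
```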
| null | CC BY-SA 4.0 | null | 2022-10-19T07:27:15.610 | 2022-10-19T07:27:15.610 | null | null | 1,755,432 | null |
74,121,478 | 2 | null | 49,074,666 | 4 | null | Following code give good result: (the URL set in tag is the one generated by Gitlab when attaching a image)
`![]() <img src="/uploads/d19fcc3d3b4d313c8cd7960a343463b6/table.png" width="120">`
It shows a clickable thumbnail with a fixed width (and keeps the image ratio).
| null | CC BY-SA 4.0 | null | 2022-10-19T07:33:52.923 | 2022-10-19T07:33:52.923 | null | null | 2,486,332 | null |
74,121,561 | 2 | null | 73,820,643 | 0 | null | This is not related to Unity Ads.
You have a WebView that opens a website that asks for cookie consent, without you requesting it.
| null | CC BY-SA 4.0 | null | 2022-10-19T07:40:34.230 | 2022-10-19T07:40:34.230 | null | null | 16,930,239 | null |
74,121,635 | 2 | null | 49,558,009 | 0 | null | For me, going to `File -> Project Structure -> Project -> SDK -> Add SDK -> Download JDK` and selecting the downloaded JDK solved the problem. Prior to that I had a different JDK selected, and apparently it didn't include source code.
| null | CC BY-SA 4.0 | null | 2022-10-19T07:47:09.573 | 2022-10-19T07:47:09.573 | null | null | 7,219,194 | null |
74,121,984 | 2 | null | 74,121,080 | 0 | null | Can you please use this code and let me know the result:
```
import 'dart:convert';
import 'dart:io';
void main() {
print('1 + 1 = ...');
var line = stdin.readLineSync(encoding: utf8);
print('$line');
}
```
| null | CC BY-SA 4.0 | null | 2022-10-19T08:13:20.030 | 2022-10-19T08:13:20.030 | null | null | 2,591,714 | null |
74,121,999 | 2 | null | 74,120,954 | 0 | null | Your predictions match the true labels. Thus, you perform a perfect classification with 100 % accuracy.
Make sure to obtain the predictions from `model`, i.e. `model.predict()`, which is currently not used. Exclude the subtotals from `crosstab` with `margins=False`, otherwise you include the subtotals in the confusion matrix.
Here is an example, for a less ideal classification with one edited value (last in dict) to demonstrate the concept:
```
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
data = {
"y_Actual": [
"Lablae",
"Long",
"Maejam(Ceiyng-saen-hnoy)",
"Maejam(Ceiyng-saen-luang)",
"Maejam(Hong-pi)",
"Maejam(Hong-poi)",
"Maejam(Kan-seiyn-sam)",
"Maejam(Kom-rup-nk)",
"Maejam(Kom-whua-mon-nai-nk-non)",
"Maejam(Kud-kho-bed)",
"Maejam(La-kon-klang)",
"Maejam(La-kon-luang)",
"Maejam(La-kon-noy)",
"Maejam(Lay-kan-sam-aew)",
"Maejam(Nak-kum)",
"Maejam(Nk-kum)",
"Maejam(Nok-nk-kum)",
"Maejan(Khan-aew-u)",
"Muang-nan",
"Sri-sat-shanalai",
],
"y_Predicted": [
"Lablae",
"Long",
"Maejam(Ceiyng-saen-hnoy)",
"Maejam(Ceiyng-saen-luang)",
"Maejam(Hong-pi)",
"Maejam(Hong-poi)",
"Maejam(Kan-seiyn-sam)",
"Maejam(Kom-rup-nk)",
"Maejam(Kom-whua-mon-nai-nk-non)",
"Maejam(Kud-kho-bed)",
"Maejam(La-kon-klang)",
"Maejam(La-kon-luang)",
"Maejam(La-kon-noy)",
"Maejam(Lay-kan-sam-aew)",
"Maejam(Nak-kum)",
"Maejam(Nk-kum)",
"Maejam(Nok-nk-kum)",
"Maejan(Khan-aew-u)",
"Sri-sat-shanalai",
"Sri-sat-shanalai",
],
}
df = pd.DataFrame(data, columns=["y_Actual", "y_Predicted"])
confusion_matrix = pd.crosstab(
df["y_Actual"],
df["y_Predicted"],
rownames=["Actual"],
colnames=["Predicted"],
margins=False,
)
sn.heatmap(confusion_matrix, annot=True)
plt.show()
```
[](https://i.stack.imgur.com/hqdi6.png)
| null | CC BY-SA 4.0 | null | 2022-10-19T08:14:16.557 | 2022-10-19T08:29:03.080 | 2022-10-19T08:29:03.080 | 5,755,604 | 5,755,604 | null |
74,122,153 | 2 | null | 74,121,898 | 0 | null | A small showcase of how you would search for the last occurrence of ":" in `Range("B11")` and replace it with a ".":
```
Sub replaceTest()
Dim val As String
Dim pos As Long
Dim rng As Range
Set rng = Range("B11")
val = rng.Value
pos = InStrRev(val, ":")
If pos > 0 Then
Mid$(val, pos, 1) = "."
rng.Value = val
End If
Set rng = Nothing
End Sub
```
| null | CC BY-SA 4.0 | null | 2022-10-19T08:26:37.250 | 2022-10-19T08:26:37.250 | null | null | 20,076,134 | null |
74,122,198 | 2 | null | 74,121,781 | 2 | null | The problem is clear: there is no entry in `pg_hba.conf` for *that user* when connecting to *that database* from *that host* with SSL disabled.
That could be because the actual connection parameters differ from what you expect, or because your GUI does not attempt an SSL connection and the server requires one.
That's as good an answer as you can expect without seeing `pg_hba.conf` or any other details.
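For illustration only (database name, user, and network are placeholders), an entry that allows password connections from a local network, with or without SSL, looks like:

```
# TYPE  DATABASE  USER    ADDRESS         METHOD
host    mydb      myuser  192.168.0.0/24  scram-sha-256
```

The `host` type matches both SSL and non-SSL connections; `hostssl` would require SSL. Reload the configuration after editing, e.g. with `SELECT pg_reload_conf();`.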
| null | CC BY-SA 4.0 | null | 2022-10-19T08:30:08.670 | 2022-10-19T08:30:08.670 | null | null | 6,464,308 | null |
74,122,256 | 2 | null | 74,119,081 | 0 | null | You can solve this using nth-child:
```
.grid-container {
display: grid;
grid-template-columns: 1fr 1fr 1fr;
gap: 16px;
border: 1px solid red;
margin: 10px;
}
.grid-item {
height: 30px;
border: 1px solid black;
}
.grid-item:last-child {
grid-column-end: -1;
}
.grid-item:nth-last-child(2):nth-child(even) {
grid-column-end: -2;
}
```
```
<div class="grid-container">
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
</div>
<div class="grid-container">
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
</div>
<div class="grid-container">
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
<div class="grid-item"></div>
</div>
```
| null | CC BY-SA 4.0 | null | 2022-10-19T08:35:19.233 | 2022-10-19T08:35:19.233 | null | null | 8,620,333 | null |
74,122,319 | 2 | null | 74,121,445 | 0 | null | ```
=MID(A1;FIND("PC";A1)+2;8)
```
Output: 00001521
| null | CC BY-SA 4.0 | null | 2022-10-19T08:39:43.827 | 2022-10-19T08:39:43.827 | null | null | 19,735,003 | null |
74,122,407 | 2 | null | 22,867,620 | 4 | null | Newer versions of matplotlib throw `AttributeError: 'Arrow3D' object has no attribute 'do_3d_projection'` with the old definition of `Arrow3D`. This was raised in several comments here and remained somewhat unclear. You have to add the method `do_3d_projection()`, while `draw()` is no longer needed. The current code looks like this:
```
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import FancyArrowPatch
from mpl_toolkits.mplot3d import proj3d
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
super().__init__((0,0), (0,0), *args, **kwargs)
self._verts3d = xs, ys, zs
def do_3d_projection(self, renderer=None):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, self.axes.M)
self.set_positions((xs[0],ys[0]),(xs[1],ys[1]))
return np.min(zs)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
arrow_prop_dict = dict(mutation_scale=20, arrowstyle='-|>', color='k', shrinkA=0, shrinkB=0)
a = Arrow3D([0, 10], [0, 0], [0, 0], **arrow_prop_dict)
ax.add_artist(a)
plt.show()
```
Help came from [github](https://github.com/matplotlib/matplotlib/issues/21688).
| null | CC BY-SA 4.0 | null | 2022-10-19T08:46:42.720 | 2022-10-19T08:46:42.720 | null | null | 3,157,428 | null |
74,122,433 | 2 | null | 74,121,504 | 0 | null | Language codes should be lowercase and country/variant codes uppercase, so be precise with the names of the files: `language_en.properties` and `language_vi.properties`. It is a good idea to keep your preferred language's file as the fallback, so rename one of the above to just `language.properties` without the 2-letter ISO country code.
The following code would search for resource bundle named "language" as class and properties definitions in the top level of every classpath jar and directory:
```
// DEPRECATED IN JDK19, use Locale.of("vi")
Locale locale = new Locale("vi");
ResourceBundle bundle = ResourceBundle.getBundle("language",locale);
```
=> SEARCHES FOR:
```
Class language.class
resource language.properties
Class language_vi.class
resource language_vi.properties
PLUS class+properties for lang and lang+variants of Locale.getDefault() eg language_en and language_en_GB
```
Resource pathnames should not contain `\\`. If you wish to use a directory to keep them, use `/`. So this would search under a sub-folder:
```
ResourceBundle bundle = ResourceBundle.getBundle("mydir/name",locale);
```
=> SEARCHES FOR:
```
Class mydir/name.class
Resource mydir/name.properties
Class mydir/name_vi.class
Resource mydir/name_vi.properties
PLUS mydir/name class+properties for lang and lang+variants of Locale.getDefault() eg mydir/name_en and mydir/name_en_GB
```
| null | CC BY-SA 4.0 | null | 2022-10-19T08:47:58.607 | 2022-10-20T11:43:11.440 | 2022-10-20T11:43:11.440 | 4,712,734 | 4,712,734 | null |
74,122,481 | 2 | null | 74,121,898 | 0 | null | Since your date-times like `18/10/2022 11:42:10:358` appear to be text, I recommend converting them into numeric date-time values (so you can calculate with them and use comparisons to tell which is greater or smaller).
Therefore you need to split the text up into its parts and turn it into a real numeric date using `DateSerial` and `TimeSerial`. Finally you need to calculate the milliseconds and add them.
Then you can use `.NumberFormat = "DD\/MM\/YYYY hh:mm:ss.000"` to format it as you like.
```
Public Function StringToDateTime(ByVal InputString As String) As Double
Dim InputDateTime() As String ' Split into DD/MM/YYYY and hh:mm:ss:000
InputDateTime = Split(InputString, " ")
Dim InputDate() As String ' Split into DD and MM and YYYY
InputDate = Split(InputDateTime(0), "/")
Dim InputTime() As String ' Split into hh and mm and ss and 000
InputTime = Split(InputDateTime(1), ":")
Dim RetVal As Double
RetVal = DateSerial(InputDate(2), InputDate(1), InputDate(0)) + TimeSerial(InputTime(0), InputTime(1), InputTime(2)) + InputTime(3) / 24 / 60 / 60 / 1000
StringToDateTime = RetVal
End Function
```
```
Public Sub Example()
Dim Cell As Range
For Each Cell In Range("A1:A5")
Cell.Value2 = StringToDateTime(Cell.Value2)
Cell.NumberFormat = "DD\/MM\/YYYY hh:mm:ss.000"
Next Cell
End Sub
```
| null | CC BY-SA 4.0 | null | 2022-10-19T08:52:08.820 | 2022-10-19T08:52:08.820 | null | null | 3,219,613 | null |
74,122,655 | 2 | null | 20,755,044 | 0 | null | For Apple Silicon Macs:
```
$ sudo gem uninstall ffi && sudo gem install ffi -- --enable-libffi-alloc
```
| null | CC BY-SA 4.0 | null | 2022-10-19T09:05:43.737 | 2022-10-19T09:07:27.710 | 2022-10-19T09:07:27.710 | 5,211,833 | 5,729,377 | null |
74,122,842 | 2 | null | 73,020,289 | 0 | null | Add the line `Shell.NavBarIsVisible="False"` to your AppShell.xaml.
This should work, if I understand your question correctly.
| null | CC BY-SA 4.0 | null | 2022-10-19T09:18:12.173 | 2022-10-19T09:18:12.173 | null | null | 18,540,474 | null |
74,122,957 | 2 | null | 56,890,227 | 0 | null | In a notebook recently, I had to add those lines at the beginning, to sync the python versions:
```
import os
import sys
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
```
| null | CC BY-SA 4.0 | null | 2022-10-19T09:25:46.230 | 2022-10-19T09:25:46.230 | null | null | 14,450,207 | null |
74,123,126 | 2 | null | 74,122,911 | 0 | null | try:
```
=ARRAYFORMULA(QUERY({"Week "&WEEKNUM(A2:A)\ B2:E};
"select Col2,Col3,sum(Col4)
where Col5 = 'Planned'
group by Col2,Col3
pivot Col1"))
```
[](https://i.stack.imgur.com/WVWrq.png)
| null | CC BY-SA 4.0 | null | 2022-10-19T09:38:54.540 | 2022-10-19T09:38:54.540 | null | null | 5,632,629 | null |
74,123,315 | 2 | null | 74,122,964 | 0 | null | Can you try this:
```
dfx=df.groupby(['country','rank_level']).agg({'column_name_want_to_mean':'mean','column_name_want_to_max':'max'})
```
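A runnable sketch with invented data; the column names `country`, `rank_level`, `score` and `age` are assumptions standing in for your real columns:

```python
import pandas as pd

# Hypothetical data; substitute your own DataFrame and column names.
df = pd.DataFrame({
    "country": ["US", "US", "DE", "DE"],
    "rank_level": [1, 1, 2, 2],
    "score": [10.0, 20.0, 30.0, 50.0],
    "age": [25, 40, 31, 28],
})

# One aggregation per column: mean of `score`, max of `age`, per group.
dfx = df.groupby(["country", "rank_level"]).agg({"score": "mean", "age": "max"})
print(dfx)
```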
| null | CC BY-SA 4.0 | null | 2022-10-19T09:52:32.217 | 2022-10-19T09:52:32.217 | null | null | 15,415,267 | null |
74,123,621 | 2 | null | 68,044,084 | 0 | null | I did not find any configuration to do so, but this CSS rule works for me. I am using 0.29.1:
```
.monaco-editor .hover-row.status-bar {
display: none;
}
```
| null | CC BY-SA 4.0 | null | 2022-10-19T10:13:15.947 | 2022-10-19T10:13:15.947 | null | null | 8,059,855 | null |
74,123,844 | 2 | null | 70,714,690 | 9 | null | [Николай Сычев solution](https://stackoverflow.com/a/70719923/4300054) didn't work for me at first.
Instead, I succeeded by simply
1. installing `buffer` as a dev dependency with `yarn add buffer` (use the npm equivalent if you use npm)
2. and then adding it to the global scope in the index.html like this:
```
<html lang="en">
<head>
<script type="module">
import { Buffer } from "buffer";
window.Buffer = Buffer;
</script>
...
```
It also works for similar dependencies like `process` which you'd import in the index.html like this:
```
import process from "process";
window.process = process;
```
---
For a different project I needed `util`, which required `process`. The above suggested method didn't suffice in that case.
Instead I found out that `@esbuild-plugins` (for `vite dev`) and `rollup-plugin-polyfill-node` (for `vite build`) would successfully provide all these nodejs packages.
Here is a full `vite.config.ts` that works for me:
```
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
import { NodeGlobalsPolyfillPlugin } from '@esbuild-plugins/node-globals-polyfill'
import { NodeModulesPolyfillPlugin } from '@esbuild-plugins/node-modules-polyfill'
import rollupNodePolyFill from 'rollup-plugin-polyfill-node'
export default defineConfig({
plugins: [vue()],
base: '',
optimizeDeps: {
esbuildOptions: {
// Node.js global to browser globalThis
define: {
global: 'globalThis'
},
// Enable esbuild polyfill plugins
plugins: [
NodeGlobalsPolyfillPlugin({
buffer: true,
process: true,
}),
NodeModulesPolyfillPlugin()
]
}
},
build: {
rollupOptions: {
plugins: [
rollupNodePolyFill()
]
}
}
})
```
Be careful to use `rollup-plugin-polyfill-node` which is an updated and maintained fork of `rollup-plugin-node-polyfills`.
| null | CC BY-SA 4.0 | null | 2022-10-19T10:32:04.280 | 2023-01-13T17:34:15.680 | 2023-01-13T17:34:15.680 | 4,300,054 | 4,300,054 | null |
74,123,945 | 2 | null | 72,055,173 | 0 | null | You should add the line `GENERATE_SOURCEMAP=false` to a `.env` file in your project.
| null | CC BY-SA 4.0 | null | 2022-10-19T10:39:35.080 | 2022-10-19T10:39:35.080 | null | null | 8,406,359 | null |
74,124,138 | 2 | null | 74,123,993 | 0 | null | You should pass the method in a list to the `diagonal` argument, like this:
```
library(car)
#> Loading required package: carData
scatterplotMatrix(~ mpg + disp + drat + wt | cyl, data=mtcars,
spread=FALSE, diagonal=list(method ="histogram"),
main="Scatter Plot Matrix via car Package")
```

[reprex v2.0.2](https://reprex.tidyverse.org)
| null | CC BY-SA 4.0 | null | 2022-10-19T10:55:41.013 | 2022-10-19T10:55:41.013 | null | null | 14,282,714 | null |
74,124,481 | 2 | null | 74,124,095 | 0 | null | It looks like you are not setting the style to your `TextInputEditText` view. Add the attribute `style="@style/textInputEditTextStyle"` and it should work. Like this:
```
<com.google.android.material.textfield.TextInputEditText
style="@style/textInputEditTextStyle"
android:id="@+id/etEmail"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:inputType="textEmailAddress"
android:maxLength="30"
android:layout_marginTop="25dp"/>
```
| null | CC BY-SA 4.0 | null | 2022-10-19T11:22:45.720 | 2022-10-19T11:22:45.720 | null | null | 12,761,873 | null |
74,124,591 | 2 | null | 28,997,381 | 0 | null | For me, what fixed the issue was updating the string parameter I passed to the script: it was missing a trailing backslash at the end of the path (i.e. `e:\arcive` needed a `\` added at the end).
| null | CC BY-SA 4.0 | null | 2022-10-19T11:31:46.350 | 2022-10-19T11:31:46.350 | null | null | 7,636,609 | null |
74,124,766 | 2 | null | 74,124,671 | 1 | null | Open your settings, search for `Dart Line Length`, and increase the number.
[](https://i.stack.imgur.com/Z6CtS.png)
| null | CC BY-SA 4.0 | null | 2022-10-19T11:45:45.037 | 2022-10-19T11:45:45.037 | null | null | 10,157,127 | null |
74,125,110 | 2 | null | 74,124,555 | 0 | null | Here is one way to do it.
If you post the data as code (preferably) or text, I would be able to share the result.
```
# create a temporary column 'c' by grouping on Customer No
# and assigning the group count to it using transform
# finally, use loc to select rows whose count equals 6
(df.loc[df.assign(
    c=df.groupby(['Customer No'])['Customer No']
    .transform('count'))['c'].eq(6)]
)
```
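A self-contained sketch with invented data (only the `Customer No` column name is taken from the question; the rest is made up):

```python
import pandas as pd

# Invented sample: customer 1 appears six times, customer 2 only twice.
df = pd.DataFrame({"Customer No": [1] * 6 + [2] * 2,
                   "Amount": range(8)})

# Keep only the rows of customers that occur exactly six times.
out = df.loc[df.assign(
    c=df.groupby(["Customer No"])["Customer No"].transform("count")
)["c"].eq(6)]
print(out)
```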
| null | CC BY-SA 4.0 | null | 2022-10-19T12:11:05.247 | 2022-10-19T12:11:05.247 | null | null | 3,494,754 | null |
74,125,201 | 2 | null | 26,617,041 | 0 | null | After getting the access token from `http://127.0.0.1:8000/accounts/login/`,
you receive a response like this:
```
{
"email": "[email protected]",
"tokens": {
"refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoicmVmcmVzaCIsImV4cCI6MTY2NjI2NTAxMSwiaWF0IjoxNjY2MTc4NjExLCJqdGkiOiJjZWM3MzJmNDZkMGE0MTNjOTE3ODM5ZGYxNzRiNzMxZCIsInVzZXJfaWQiOjcwfQ.5Rd25s6msp72IHyU1BxE4ym24YIEbhyFsBdUztGXz0I",
"access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ0b2tlbl90eXBlIjoiYWNjZXNzIiwiZXhwIjoxNjY2MjY1MDExLCJpYXQiOjE2NjYxNzg2MTEsImp0aSI6IjgyOWFmZGE5MWY2ODRhNDZhMDllZGMzMmI0NmY0Mzg5IiwidXNlcl9pZCI6NzB9.TYhi0INai293ljc5zBk59Hwet-m9a1Mc1CtA56BEE_8"
},
"id": 70
}
```
Copy the content of the "access" key in the response, then in Postman, under Headers, add a new item with the key `Authorization` and a value like this:
`Bearer eyJ0eXAi....`
where `eyJ0eXAi....` is the value of the access key.
[](https://i.stack.imgur.com/IA0a0.png)
Then send the request.
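The same header can be set programmatically; here is a sketch using Python's standard library (the token value and the endpoint URL are placeholders, not part of the original answer):

```python
import urllib.request

access_token = "eyJ0eXAi..."  # placeholder: paste the real "access" value here

# Build the request with the Bearer token, exactly as Postman would send it.
req = urllib.request.Request(
    "http://127.0.0.1:8000/some/protected/endpoint/",  # hypothetical URL
    headers={"Authorization": f"Bearer {access_token}"},
)
print(req.get_header("Authorization"))
```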
| null | CC BY-SA 4.0 | null | 2022-10-19T12:17:47.713 | 2022-10-19T12:17:47.713 | null | null | 4,340,411 | null |