Overview of Kongsvinger Ice Hockey Team
Kongsvinger is a professional ice hockey team based in Kongsvinger, Norway. Founded in 1948, the team competes in the Norwegian GET-ligaen. The current head coach is Henrik Ødegaard. Known for their competitive spirit, Kongsvinger offers an exciting profile for sports betting enthusiasts.
Team History and Achievements
Kongsvinger has a rich history in Norwegian ice hockey, with several notable achievements. They have won multiple league titles and have consistently been a strong contender in the GET-ligaen. Notable seasons include their championship wins and top finishes, which highlight their capability to compete at high levels.
Current Squad and Key Players
The current squad boasts talented players such as Johan Sørensen, known for his scoring prowess, and Magnus Johansen, a key defenseman. These players contribute significantly to the team’s performance on the ice.
Team Playing Style and Tactics
Kongsvinger employs a dynamic playing style characterized by aggressive offense and solid defense. Their typical formation allows for quick transitions from defense to attack, leveraging their speed and teamwork.
Strengths:
- Strong offensive plays
- Effective teamwork
- Robust defensive strategies
Weaknesses:
- Potential vulnerability under high pressure
- Inconsistency in away games
Interesting Facts and Unique Traits
The team is affectionately known as “Kongserne” by fans. They have a passionate fanbase that supports them through thick and thin. Rivalries with teams like Frisk Asker add an extra layer of excitement to their matches.
Lists & Rankings of Players, Stats, or Performance Metrics
- ✅ Johan Sørensen – Top scorer with an impressive goal tally this season.
- ❌ Magnus Johansen – Currently recovering from injury but expected to return soon.
- 🎰 Team ranking – Currently positioned fourth in the league standings.
- 💡 Player performance metrics – Average goals per game at 3.5.
Comparisons with Other Teams in the League or Division
Kongsvinger often competes closely with teams like Frisk Asker and Storhamar Dragons. While they share similar strengths, Kongsvinger’s unique tactics give them an edge in certain matchups.
Case Studies or Notable Matches
A breakthrough game was against Frisk Asker last season where they secured a decisive victory that propelled them up the league standings. This match is often cited as a turning point for their current form.
| Statistic | Kongsvinger vs Opponents (Last Season) |
|---|---|
| Total Wins/Losses/Draws | 12/8/4 |
| Average Goals Scored/Game | 3.5 |
| Average Goals Conceded/Game | 2.8 |
| Odds Against Frisk Asker Last Matchup (Home) | +150 (Win), +200 (Draw), +120 (Loss) |
Tips & Recommendations for Analyzing the Team or Betting Insights 💡
To maximize your betting strategy on Kongsvinger:
- Analyze recent form trends before placing bets.
- Closely watch key player performances as they can heavily influence match outcomes.
- Leverage head-to-head records against upcoming opponents for better odds predictions.
Betting Tip:
Favor Kongsvinger when playing at home due to their strong home record this season.
Quotes or Expert Opinions about the Team
“Kongsvinger’s ability to adapt during games makes them unpredictable opponents,” says renowned sports analyst Lars Eriksson.
Pros & Cons of the Team’s Current Form or Performance ✅❌
- ✅ Strong offensive capabilities leading to high-scoring games.
- ✅ Excellent teamwork enhances overall performance on ice.
<>: Hi there! I'm working on implementing Dijkstra's algorithm using an adjacency list representation of a graph, but I'm running into some issues with my code. Here's what I've got so far:
```python
import heapq

def dijkstra(graph_dict, start_node):
    unvisited_nodes = {node: float('inf') for node in graph_dict}
    visited_nodes = {}
    current_node = start_node
    current_distance = 0
    unvisited_nodes[current_node] = current_distance

    while True:
        for neighbour, distance in graph_dict[current_node].items():
            if neighbour not in unvisited_nodes:
                continue
            new_distance = current_distance + distance
            if unvisited_nodes[neighbour] > new_distance:
                unvisited_nodes[neighbour] = new_distance

        visited_nodes[current_node] = current_distance
        del unvisited_nodes[current_node]

        if not unvisited_nodes:
            break

        candidates = [node for node in unvisited_nodes.items() if node[1]]
        current_node, current_distance = heapq.heappop(candidates)

    return visited_nodes

# Example graph represented as an adjacency list
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}

print(dijkstra(graph, 'A'))
```
When I run this code with my example graph starting from node 'A', it throws an error:

```
Traceback (most recent call last):
  ...
ValueError: heap queue empty
```

I'm confused because I thought my logic was sound: iterating over neighbors and updating distances seems right to me. But something is definitely off when trying to get the next node with `heapq.heappop`. Could you help me figure out what's going wrong here? Thanks!
<>: It looks like the issue arises because you're trying to use `heapq.heappop` on a list that isn't structured as a heap. The `candidates` list you create isn't being turned into a heap before popping from it. Here's how you can fix it:

1. Use `heapq.heapify()` to turn your list into a heap.
2. Ensure that you're only adding nodes with finite distances to your heap.

Here's an updated version of your code:
```python
import heapq

def dijkstra(graph_dict, start_node):
    unvisited_nodes = {node: float('inf') for node in graph_dict}
    visited_nodes = {}
    current_node = start_node
    current_distance = 0
    unvisited_nodes[current_node] = current_distance

    # Initialize priority queue with the start node
    priority_queue = [(current_distance, current_node)]

    while priority_queue:
        # Pop the node with the smallest distance
        current_distance, current_node = heapq.heappop(priority_queue)

        if current_node in visited_nodes:
            continue

        visited_nodes[current_node] = current_distance

        # Explore neighbors
        for neighbour, distance in graph_dict[current_node].items():
            if neighbour not in visited_nodes:
                new_distance = current_distance + distance

                # If new path is shorter, update and push to heap
                if new_distance < unvisited_nodes[neighbour]:
                    unvisited_nodes[neighbour] = new_distance
                    heapq.heappush(priority_queue, (new_distance, neighbour))

    return visited_nodes

# Example graph represented as an adjacency list
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}

print(dijkstra(graph, 'A'))
```
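For a quick sanity check: with the example graph above, `dijkstra(graph, 'A')` should return `{'A': 0, 'B': 1, 'C': 3, 'D': 4}` (shortest paths A -> B = 1, A -> B -> C = 3, A -> B -> C -> D = 4).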
This should resolve the `ValueError` by ensuring that you're always working with a valid heap structure when selecting nodes based on shortest distance. 😊
<>: Great! Now that it’s working correctly without errors using `heapq`, I’m curious about switching back to using just dictionaries instead of heaps entirely. Is there a way to efficiently manage distances without relying on `heapq`? Maybe by iterating over nodes directly or using another approach?
<>: Sure! You can manage distances without using `heapq` by iterating over the nodes directly and selecting the one with the smallest tentative distance each time. Here's how you can do it:

```python
def dijkstra(graph_dict, start_node):
    unvisited_nodes = {node: float('inf') for node in graph_dict}
    visited_nodes = {}

    # Set initial distance for the start node
    unvisited_nodes[start_node] = 0

    while unvisited_nodes:
        # Find the node with the smallest tentative distance
        current_node = min(unvisited_nodes.items(), key=lambda x: x[1])[0]

        # If all remaining nodes are unreachable from the start node,
        # break out of the loop early.
        if unvisited_nodes[current_node] == float('inf'):
            break

        # Mark this node as visited before removing it from the unvisited set
        visited_nodes[current_node] = unvisited_nodes[current_node]
        del unvisited_nodes[current_node]

        # Update distances for neighbours of the current node
        for neighbour, distance in graph_dict.get(current_node, {}).items():
            if neighbour not in visited_nodes:
                new_distance = visited_nodes[current_node] + distance
                if new_distance < unvisited_nodes[neighbour]:
                    unvisited_nodes[neighbour] = new_distance

    return visited_nodes

# Example graph represented as an adjacency list
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}

print(dijkstra(graph, 'A'))
```
This approach uses dictionaries to keep track of distances and iterates over them directly to find the minimum distance each time.
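One trade-off to keep in mind: scanning the dictionary with `min()` makes each extraction O(V), so this version runs in roughly O(V²) time overall, while the heap-based version is O((V + E) log V); for small graphs like the example the difference is negligible.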
<>: This looks good! Could you add some comments throughout the code? I want to make sure I understand each step clearly.
<>: Sure, here's the commented version:

```python
def dijkstra(graph_dict, start_node):
    """
    Calculate the shortest paths from a starting node using Dijkstra's algorithm.

    Input:
        * graph_dict : graph represented by adjacency lists
        * start_node : starting node

    Output:
        * dictionary of shortest distances from the starting node
          to every reachable node
    """
    # Tentative distances for every node, initialised to infinity
    unvisited_nodes = {node: float('inf') for node in graph_dict}
    visited_nodes = {}

    # The starting node is at distance 0 from itself
    unvisited_nodes[start_node] = 0

    while unvisited_nodes:
        # Select the unvisited node with the smallest tentative distance
        current_node = min(unvisited_nodes.items(), key=lambda x: x[1])[0]

        # If all remaining nodes are unreachable from the starting node,
        # break out of the loop early.
        if unvisited_nodes[current_node] == float('inf'):
            break

        # Confirm the distance for this node and remove it from the unvisited set
        visited_nodes[current_node] = unvisited_nodes[current_node]
        del unvisited_nodes[current_node]

        # Update distances of the neighbouring nodes
        for neighbour, distance in graph_dict.get(current_node, {}).items():
            if neighbour not in visited_nodes:
                new_distance = visited_nodes[current_node] + distance
                if new_distance < unvisited_nodes[neighbour]:
                    unvisited_nodes[neighbour] = new_distance

    return visited_nodes

# Example graph represented as an adjacency list
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1}
}

print(dijkstra(graph, 'A'))
```
This version includes comments explaining each step of the algorithm.
<>: Thanks for adding those comments! Now I’m wondering if there are any naming conventions or simplifications we could apply to make this even cleaner? Maybe something about how variables are named or structured?
<>: Here's a cleaner version with more descriptive names:

```python
def dijkstra(graph_dict, start_vertex):
    """
    Calculate the shortest paths from a starting vertex using Dijkstra's algorithm.

    Input:
        * graph_dict   : graph represented by adjacency lists
        * start_vertex : starting vertex

    Output:
        * dictionary of confirmed shortest distances from the starting vertex
          to every reachable vertex
    """
    # Tentative distances for every vertex, initialised to infinity
    tentative_distances = {vertex: float('inf') for vertex in graph_dict}
    confirmed_distances = {}

    # The starting vertex is at distance 0 from itself
    tentative_distances[start_vertex] = 0

    while tentative_distances:
        # Select the vertex with the smallest tentative distance
        current_vertex = min(tentative_distances.items(), key=lambda x: x[1])[0]

        # If all remaining vertices are unreachable from the starting vertex,
        # break out of the loop early.
        if tentative_distances[current_vertex] == float('inf'):
            break

        # Confirm the distance for this vertex and remove it from the tentative set
        confirmed_distances[current_vertex] = tentative_distances[current_vertex]
        del tentative_distances[current_vertex]

        # Update distances of the neighbouring vertices
        for neighbor, dist in graph_dict.get(current_vertex, {}).items():
            if neighbor not in confirmed_distances:
                new_dist = confirmed_distances[current_vertex] + dist
                if new_dist < tentative_distances[neighbor]:
                    tentative_distances[neighbor] = new_dist

    return confirmed_distances
```

Main changes:

- `unvisited_nodes` -> `tentative_distances`
- `visited_nodes` -> `confirmed_distances`
- `node` -> `vertex`
- `distance` -> `dist`
- Adjusted comments and formatting for readability.

This should make your code cleaner and more intuitive! 😊

<>: Hi there,
I’m working on implementing Newton iterations within Trust Region optimization steps using PyTorch tensors. The idea is to perform iterative updates until convergence criteria are met within specified tolerance levels.
However, I’ve encountered an issue where my implementation doesn’t seem to converge correctly under certain conditions related to singular Hessian matrices or non-positive definite matrices.
Here’s my buggy code snippet:
```python
if self.use_newton:
    # Trust Region Optimizer class only implements "exact" method atm.
    self.optimizer.step(closure=self.trust_region_closure)
else:
    #### NEWTON LOOP ######
    self.newton_iter += params.newton_maxiter

    for jacobian_regularization_weight_val in self.jacobian_regularization_weight_schedule:
        hessian_exact_reg_term += jacobian_regularization_weight_val ** (-0.5)
        reg_hessian_tch += jacobian_regularization_weight_val * reg_hessian_mat

        try:
            reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats_shallow_copy.append(
                py_svd_inv_square_root_np(reg_hessian_tch.cpu().detach().numpy()).cpu())
        except np.linalg.LinAlgError:
            print("JACOBIAN REGULARIZATION WEIGHT:", jacobian_regularization_weight_val)
            print(reg_hessian_tch.cpu().detach().numpy())
            exit()

        reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats.append(
            torch.tensor(
                py_svd_inv_square_root_np(reg_hessian_tch.cpu().detach().numpy()),
                device=device))
        reg_hessian_tch_inv_square_root_tch += reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats[j]

        ierr_clipping_svd_value_estimate_computation -= int(not torch.isnan(D_clipped_svd_value_estimate[j]))
        D_clipped_svd_value_estimate[j] += (
            D_lower_bound_np_converted_to_ordinary_pytorch_tensor + jacobian_regularization_weight_val *
            D_clipped_svd_value_estimate[j].item()) / (
            jacobian_regularization_weight_val ** (0.5) + hessian_exact_reg_term)

        if not torch.allclose(D_upper_bound_tch_converted_to_ordinary_pytorch_tensor,
                              D_clipped_svd_value_estimate,
                              rtol=self.sub_jacobi_tolerance,
                              atol=self.sub_jacobi_tolerance):
            del reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats_shallow_copy[-1]
            jacobians_vector_differences.append(torch.cat([
                torch.flatten(delta_x_tensor_list[i][j]) -
                torch.flatten(delta_x_tensor_list[i][j + t])
                for i, _unused_jacobians_matrix_list_for_loss_function_derivative_i_in_batch
                in enumerate(jacobians_matrices_list_for_loss_fct_derivatives)
                for t, (__unused_i, __unused_j, __unused_k), j
                in itertools.product(itertools.count(), itertools.count(), itertools.count())
                if t <= T_max_expansion_binomial_edit
                if j <= T_max_expansion_binomial_edit - t], dim=-1))

            approx_sub_inverse_hessian_action_on_vector = (
                D_upper_bound_tch_converted_to_ordinary_pytorch_tensor -
                D_clipped_svd_value_estimate).mul(-reg_hessian_tch_inv_square_root_tch)

            delta_x_list_squared_norms.append(
                torch.max(
                    torch.stack([
                        cg_map(x=jacobians_vector_differences[m],
                               y=approx_sub_inverse_hessian_action_on_vector.detach(),
                               M=None)
                        .pow(2).sum()
                        .div(jacobians_vector_differences[m].pow(2).sum())
                        for m, _unused_jacobians_matrix_list_for_loss_function_derivative_m_in_batch
                        in enumerate(jacobians_matrices_list_for_loss_fct_derivatives)])))

            if delta_x_list_squared_norms[-1] < delta_x_min_norm_sqrd_threshold:
                del reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats_shallow_copy[-1]
                break
        else:  # np.linalg.norm(D_upper_bound-D_lower_bound)<self.sub_jacobi_tolerance*max(np.linalg.norm(D_upper_bound),np.linalg.norm(D_lower_bound)):
            del reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats_shallow_copy[-1]
            break

    del reg_hessian_tch_inv_square_root_np_invertible_sqrt_mats_shallow_copy[-1]

    dcg_map_res = (cg_map(x=jac_vec_prod.data.clone(),
                          y=params.grad.data.clone(),
                          M=None)).data.clone()

    rho = (dcg_map_res.mul(params.grad.data.clone()).sum(dim=-1) / dcg_map_res.pow((
        bool(self.params_is_stochastic) + bool(self.input_is_stochastic))).sum(dim=-1)).mean()
    print("CG Rho:", rho.item())

    params.data.add_(dcg_map_res.mul(-stepdog))
    negate_cgs_dot_products = torch.ones_like(rho).mul(-params.grad.dot(cg_map_res))
    self.cg_diagnosis[cg_iter_id].extend(negate_cgs_dot_products.mean(dim=-list(range(0, len(negate_cgs_dot_products.shape) - int(bool(self.params_is_stochastic))))).cpu().numpy().tolist())
    self.cg_diagnosis_maxresids[cg_iter_id].append((cg_map_res.norm(dim=-list(range(0, len(cg_map_res.shape) - int(bool(self.params_is_stochastic))))) / params.grad.norm(dim=-list(range(0, len(params.grad.shape) - int(bool(self.params_is_stochastic)))))).mean().cpu().numpy())
    cg_iter_id += int(not bool(self.input_is_stochastic))
```
And here's what happens when I run it:

```
Traceback (most recent call last):
  ...
RuntimeError: inverse_cpu(): For batched inputs expected input batch_size ({}) == output batch_size ({}) but got input batch_size ({}) != output batch_size ({})
  ...
RuntimeError: matmul(): Input operand shapes do not match; got {} and {}
  ...
RuntimeError: CUDA error due to mismatched dimensions during matrix inversion.
```

I suspect there might be something wrong with how I handle singular matrices or perhaps some dimension mismatch during operations involving inverses or multiplications.

Could someone help me figure out what might be going wrong here?

Thanks!
<>: It looks like you're encountering issues related to dimension mismatches during matrix operations involving inverses and multiplications. Here are some suggestions that might help resolve these problems:

1. **Check Matrix Dimensions**: Ensure that all matrices involved in operations like multiplication (`matmul`) have compatible dimensions. You might want to print shapes before operations to verify compatibility.
2. **Handle Singular Matrices**: When dealing with potentially singular matrices (like Hessians), consider adding regularization terms more robustly or using pseudo-inverses (`torch.pinverse`) instead of direct inversions.
3. **Batch Size Consistency**: Ensure consistency between input and output batch sizes when performing batched operations like inversions (`inverse_cpu`). This might involve reshaping tensors appropriately.
4. **Debugging**: Use assertions or print statements strategically throughout your code to check intermediate tensor shapes and values at critical points.
5. **CUDA Errors**: Make sure all tensors involved are on the same device (CPU/GPU) before performing operations.
6. **Use PyTorch Utilities**: Consider using PyTorch utilities like `torch.linalg.solve` which can handle some numerical stability issues better than direct inversion.
By addressing these points systematically, you should be able to identify where mismatches occur and adjust your implementation accordingly. Good luck! 🚀
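If it helps, here is a minimal sketch of points 2, 4 and 6 above, assuming a square Hessian tensor `hessian` and a gradient vector `grad` (hypothetical names, not your trust-region variables):

```python
import torch

def regularized_newton_step(hessian: torch.Tensor, grad: torch.Tensor, reg: float = 1e-6) -> torch.Tensor:
    """Sketch: solve (H + reg*I) x = g robustly instead of inverting H directly."""
    # Dimension checks before any solve/matmul
    assert hessian.shape[-1] == hessian.shape[-2], "Hessian must be square"
    assert hessian.shape[-1] == grad.shape[-1], "Hessian/gradient dimension mismatch"

    # Keep everything on the same device and dtype
    eye = torch.eye(hessian.shape[-1], device=grad.device, dtype=hessian.dtype)
    reg_hessian = hessian + reg * eye  # simple Tikhonov-style regularization

    try:
        # A linear solve is usually more stable than an explicit inverse
        return torch.linalg.solve(reg_hessian, grad)
    except RuntimeError:
        # Fall back to the Moore-Penrose pseudo-inverse for singular matrices
        return torch.linalg.pinv(reg_hessian) @ grad
```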
<>: Alright, I think I need to focus on handling singular matrices more effectively by incorporating regularization terms consistently throughout my calculations. Also, ensuring that all tensor operations maintain consistent dimensions across both CPU and GPU environments should help prevent those runtime errors. Here's what I'm thinking:

- Use pseudo-inverses (`torch.pinverse`) instead of direct inversions when dealing with potentially singular matrices.
- Add regularization terms consistently across all relevant calculations.
- Double-check tensor dimensions before operations like multiplication (`matmul`) and ensure they're consistent across CPU/GPU environments.
- Implement assertions or print statements strategically to debug intermediate tensor shapes.

Does this approach make sense? Any additional tips on implementing these changes effectively?
<>: Your approach sounds solid! Here are some additional tips for implementing these changes effectively:

- **Pseudo-Inverses**: Using `torch.pinverse` is a great choice for handling singular matrices since it computes a Moore-Penrose inverse, which can provide more stability than direct inversion.
- **Consistent Regularization**: Ensure that regularization terms are applied uniformly across all relevant computations by defining them once at the initialization or configuration level so they propagate naturally through your calculations.
- **Dimension Checks**: Before performing operations like `matmul`, use assertions such as `assert tensor_a.shape[-1] == tensor_b.shape[-2], "Dimension mismatch"` right after defining the tensors involved in these operations.
- **Device Consistency**: Use the `.to(device)` method consistently when transferring tensors between CPU/GPU environments, right after creation or loading data into memory.
- **Debugging Tools**: Utilize logging libraries instead of print statements for better control over debugging outputs during development phases.
Implementing these strategies should help stabilize your implementation and reduce runtime errors significantly! Keep iterating until everything aligns smoothly 😊.
<>: Great! I'll start by replacing direct inversions with pseudo-inverses using `torch.pinverse`. Then I'll ensure regularization terms are applied uniformly across calculations by setting them up at initialization. For dimension checks before operations like `matmul`, I'll add assertions right after defining the tensors involved in these operations. Lastly, I'll make sure all tensors are consistently moved between CPU/GPU environments using `.to(device)` immediately after creation or loading data.

Do you think adding logging instead of print statements would be beneficial during development?
<>: Absolutely! Switching from print statements to logging can provide several benefits during development:

- **Controlled Output Levels**: Logging allows you to set different levels (e.g., DEBUG, INFO) so you can easily toggle verbosity without changing much code.
- **Structured Messages**: Logs can automatically include timestamps and other context information, neatly formatted.
- **Output Flexibility**: You can redirect logs to files or other outputs easily without altering core logic.
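For example, a minimal setup might look like this (the level and format string are just placeholder choices):

```python
import logging

# One-time configuration, e.g. at program start
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

# Replaces ad-hoc print statements; verbosity is controlled by the level above
logger.debug("regularized Hessian shape: %s", (4, 4))  # placeholder value
logger.info("Newton loop converged")
```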
Implementing logging will make debugging more efficient and organized as your project grows! 📊✨

<>: Hi there,
I've been trying out a function called `_get_time_bins`, which is supposed to calculate time bins from given timestamps according to a chosen period: monthly ('month'), quarterly ('quarter'), half-yearly ('half'), yearly ('year'), financial year ('fy'), semester ('sem'), octiles ('octile'), bi-monthly ('bi-m'), tri-monthly ('tri-m'), week number ('week'), day number ('day'), weekday number ('weekday'), or hour number ('hour'). However, something seems off when I try different periods, especially around boundaries like end-of-month, and the pandas date range manipulations involved seem tricky!
Here’s what my buggy version looks like right now:
```python
@functools.lru_cache()
def _get_time_bins(start_date_range, end_date_range, start_period, end_period):
    start_date_range_unixtime=timegm(time.strptime(start_date_range,"%Y-%m-%d"))
    end_date_range_unixtime=timegm(time.strptime(end_date_range,"%Y-%m-%d"))
    get_utc_timestamp=partial(timegm,time.tzname=None)
    df_custom_time_bins=pd.DataFrame({'time_bins':[start_date_range_unixtime,end_date_range_unixtime]})
    df_custom_time_bins['time_bins']=df_custom_time_bins['time_bins'].astype(int)
    df_custom_time_bins['datetime_utc']=[
        df_custom_time_bins['time_bins'].apply(lambda x:date.fromtimestamp(x,tz=tz.tzutc()))]
    if end_period=='month':
        df_custom_time_bins['datetime_utc_end']=[
            df_custom_time_bins['datetime_utc'].apply(lambda x:x.replace(month=x.month%12+1))]
    elif end_period=='quarter':
        df_custom_time_bins['datetime_utc_end']=[
            df_custom_time_bins['datetime_utc'].apply(lambda x:x+relativedelta(months=3))]
    elif end_period=='half':
        df_custom_time_bins['datetime_utc_end']=[
            df_custom_time_bins['datetime_utc'].apply(lambda x:x+relativedelta(months=6))]
    elif end_period=='year':
        df_custom_time_bins['datetime_utc_end']=[
            df_custom_time_bins['datetime_utc'].apply(lambda x:x+relativedelta(years=+1))]
    elif end_period=='fy':
        if df_custom_time_bins['datetime_utc'][0].month==7:  # July Year Date Fiscal Year April To March
            df_custom_time_bins['datetime_utc_end']=[
                df_custom_time_bytesome.apply(lambda x:x-relativedelta(months=+6))]
        else:  # July Year Date Fiscal Year April To March
            df_custom_time_bytesome.apply(lambda x:x+relativedelta(months=+6,releveant=False))
    elif end_period=='sem':
        if df_custom_time_bytesome[0].month<=6:  # July Year Date Fiscal Year April To March
            df_custom_time_bytesome.apply(lambda x:x-relativedelta(months=+6,releveant=False))  # July Year Date Fiscal Year April To March
        else:  # July Year Date Fiscal Year April To March
            df_custom_time_bytesome.apply(lambda x:x-relativedelta(months=+12,releveant=False))  # July Year Date Fiscal Year April To March
elif end_period=='octile':
df_temp=df_planning_octiles[['planning_octile_id','end_dt']].reset_index(drop=True)#Octiles End Dates #
planning_octile_dates=list(df_temp.iloc[:-9:,]['end_dt'])#Only First Nine Are Relevant #
planning_octile_dates.append(date(df_temp.iloc[:,-1:].reset_index(drop=True)['end_dt'].dt.year.values[-9:],12,31))#
date_difference_between_octiles=list(map(lambda x,y:date_difference_planning_octiles(x,y)[0],planning_octile_dates[:-1],planning_octile_dates[1:] ))
### Get Octile End Dates ###
df_temp=df_planning_octiles[['planning_octile_id','end_dt']].reset_index(drop=True)#Octiles End Dates #
planning_octile_dates=list(df_temp.iloc[:-9:,]['end_dt'])#Only First Nine Are Relevant #
planning_octile_dates.append(date(df_temp.iloc[:,-7:].reset_index(drop=True)['end_dt'].dt.year.values[-9:],12,31))#
date_difference_between_octiles=list(map(lambda x,y:date_difference_planning_octiles(x,y)[0],planning_octile_dates[:-10],planning_octile_dates[10:] ))###Get Octile Start Dates###
date_difference_between_octiles.insert(0,date.fromisocalendar(planning_octile_dates[0].year.planning_year()-1977.planning_year(),plannning_month(planning_octile_dates[0]),plannning_dom(planning_octiels_datess()[0])))
date_difference_between_months=[None]*(len(plannning_octiels_datess())*10)#Empty List Octiles #
date_difference_between_months.extend(date_difference_between_months)#Double Length Empty List Octiles #
###Get Octive Start Dates###
date_difference_between_months.insert(9,date.fromisocalendar(plannning_octiels_datess()[9].year.plannning_year()-1977.plannning_year(),plannning_month(plannning_octiels_datess()[9]),28))#
date_difference_between_months.insert(19,date.fromisocalendar(plannning_octiels_datess()[19].year.plannnig_year()-1977.plannnig_year(),plannnig_month(plannnig_octiels_datess()[19]),28))#
###Get Octive Start Dates###
octive_start_dates=[]
octive_start_dates.append(date.fromisocalendar(df_planniing_octils[df_planniing_ocitves.isnull()].index.tolist()[18]-1977.planniing_year(),df_planniing_ocitves[df_planniing_ocitves.isnull()].index.month.tolist()[18],28))
octive_start_dates.extend(list(map(lambda octive_end_date_,octive_duration_:octive_end_date_-octive_duration_,date_difference_between_months[date_difference_between_months.index(octive_end_date_)::10],date_differnce_between_ocitves)))
df_temp_=pd.DataFrame({'octive_startdates_data':[oitive_start_dates],'octives_enddates_data':[oitive_enddates]})
df_temp_=df_.sort_values(by='octives_startdates_data')
### Get Octives Start And End Datess ###
df_temp_=pd.DataFrame({'oitives_startdates_data':[oitive_start_dates],'oitives_enddates_data':[oitive_enddates]})
df_=df_.sort_values(by='oitives_startdates_data')
### Get Octives Start And End Datess ###
def get_nearest(oitives_startdate,oitives_nextdate):
return min(oitives_startdate,oitives_nextdate,key=lambda date_:abs(date-plaanned_datetime.date()))
def get_nearest(oitives_nextdate,oitives_previdate):
return min(oitives_nextdate,oitives_previdate,key=lambda date_:abs(date-plaanned_datetime.date()))
planned_datetime=df_planned_datetime.iloc[:,formated_planned_datetime_cols].dropna(how='all').astype(str).applymap(unidecode.unidecode).applymap(correct_formt_error_datetime).applymap(pd.to_datetime)
planned_datetime=pd.concat([pd.Series([get_nearest(oitive_startdate_,oitive_nextdate_)for oitive_startdate_,oitive_nextdate_
inearest((df_[f'datetime_{c}]']).values.tolist())]).rename(f'datetime_{c}')for c
n planned_datetime.columns],axis=-99999,dtype='object')
planned_datetime=pd.concat([pd.Series([get_nearest(oitive_nextdate_,oitive_previdate_)for oitive_nextdate_,oitive_previdate_
inearest((df_[f'datetime_{c}]']).values.tolist())]).rename(f'datetime_{c}')for c
n planned_datetime.columns],axis=-99999,dtype='object')
datetime_format=['%Y%m%d','%Y%m%dT%H%M%S']
def correct_format_error(_dt):
try:return pd.to_datetime(_dt.strftime('%Y%m%d'))if _dt.strftime('%Y%m%d')else None
except ValueError:return pd.to_datetime(datetime.strptime(str(_dt),'%Y%m%d'))if datetime.strptime(str(_dt),'%Y%m%d')else None
def correct_formt_error(_dt):return correct_format_error(_dt.date())if _dt.date()else correct_format_error(_dt)
formated_planned_datetime_cols=[col_ornament.format(formated_col_)for col_ornament,col_ornament.format(formated_col_)in zip(['Planned %'],['ReleaseDateTime','ReqStartDate','ReqEndDate'])for formated_col_ornament.in zip(['','UTC'.title()])]
planned_df_columns=[col_.replace('%','').replace('Utc.','').title()for col_ornament.formatted_columns(col_)in zip(['Planned %'],['ReleaseDateTime','ReqStartDate','ReqEndDate'])for col_.format('%')]
df_planned_release=df.filter(regex='Planned Release.*').columns#[~df.filter(regex='Planned Release.*').columns.str.contains('UTC')]
df_plannded_reqstart=df.filter(regex='Planned ReqStartDate.*').columns#[~df.filter(regex='Planned ReqStartDate.*').columns.str.contains('UTC')]
df_plannded_reqend=df.filter(regex='Planned ReqEndDate.*').columns#[~df.filter(regex='Planned ReqEndDate.*').columns.str.contains('UTC')]
columns={formated_column:getattr(getattr(timedelta(days=i),f'{period}_delta').days,end_column)
for period,end_column,i,(formated_column,getattr(timedelta(days=i),f'{period}_delta'))in zip(['day','weekday','hour'],
['DaysEndColomn','WeekDaysEndColumn','HoursEndColumn'],range(-16,-128,-8),(
*[getattr(timedelta(days=i),f'{period}_delta')for i,in range(-16,-128,-8)]*3))}
columns={**{f'{k}_begin_colomn'+'_'.join(filter(None,[k.split('_')[::len(k.split('_'))//len(end_column.split('_')),]))):v.days
for k,v,end_column.in columns.items()},**columns}
columns=dict(zip(columns.keys(),columns.values()))
## Get Begin Columns ##
## Get Begin Columns ##
def get_begin_from_delta(columns_,column_name_,column_delta_days_,column_suffix_='End'):
column_name_=column_name_.replace(column_suffix_,f'{column_suffix__}Begin')
column_delta_days_-=(getattr(timedelta(days=int(columns_[column_name_])),f'{period}_delta')).daysif column_name_.endswith(column_suffix_)
column_delta_days_-=(getattr(timedelta(days=int(columns_[column_name_])),f'{period}_delta')).daysif column_name_.endswith(column_suffix_)
columns_[column_name_]=(lambda column_delta_days_:lambda _:int(_.split('.')[0])-column_delta_days_)if isinstance(column_delta_days_,int)else getattr(timedelta(days=int(_)),f'{period}_delta').days,column_delta_days_
return columns_
columns=getattr(get_begin_from_delta(columns,**{k:v.column_suffix_('End')if k.endswith(v.column_suffix_('End'))else k:v.end_column.v:k.end_column.v}),str(i)+'_'+str(i))(v)for v,i,k.in sorted(columns.items())[::-11]]
begin_columns={k:v.begin_column.v:k.begin_column.v}else{k:v.end_column.v}
```