CAF Champions League Group D stats & predictions
Overview of CAF Champions League Group D
The CAF Champions League is one of the most prestigious football tournaments in Africa, bringing together top clubs from across the continent. Group D features some of the strongest teams, each vying for a spot in the knockout stages. As we approach tomorrow's matches, let's delve into the dynamics of Group D, analyze team performances, and explore expert betting predictions.
The group consists of four formidable teams: Al Ahly SC from Egypt, Mamelodi Sundowns from South Africa, Orlando Pirates from South Africa, and AS Vita Club from DR Congo. Each team brings its unique strengths to the table, making this group highly competitive and unpredictable.
Team Analysis
Al Ahly SC
Al Ahly SC is a powerhouse in African football, boasting a rich history of success both domestically and continentally. Known for their tactical discipline and strong defense, Al Ahly has consistently been a favorite in the Champions League. Their recent performances have shown resilience and strategic depth, making them a tough opponent for any team.
Mamelodi Sundowns
Mamelodi Sundowns has emerged as a dominant force in South African football over the past decade. With a blend of experienced players and young talent, they possess both skill and physicality. Their ability to adapt to different playing styles makes them versatile and dangerous on any given day.
Orlando Pirates
Orlando Pirates are another key player in South African football with a passionate fan base and a proud history. Known for their attacking prowess and dynamic playstyle, they can turn any match into an exciting spectacle. Their focus on youth development ensures that they remain competitive year after year.
AS Vita Club
AS Vita Club brings a rich tradition from DR Congo to Group D. Known for their technical skills and flair on the ball, they have often been underestimated but have surprised many with their performances. Their ability to execute quick counter-attacks makes them a challenging opponent for any team.
Tomorrow's Matches
- Date: [Insert Date]
- Venue: Various locations across Africa
- Match Schedule:
- [Time] - Al Ahly SC vs Mamelodi Sundowns
- [Time] - Orlando Pirates vs AS Vita Club
Betting Predictions & Analysis
Betting experts have analyzed numerous factors, including recent form, head-to-head records, player injuries, and tactical setups, to provide insights into tomorrow's matches.
Prediction: Al Ahly SC vs Mamelodi Sundowns
- Prediction: Draw or Narrow Victory for Al Ahly SC
- Rationale:
- Al Ahly's home advantage at the Cairo Stadium gives them an edge.
- Mamelodi Sundowns' recent dip in form may affect their performance.
- The defensive solidity of Al Ahly could stifle Sundowns' attacking threats.
Prediction: Orlando Pirates vs AS Vita Club
- Prediction: Close Match with Possible Victory for Orlando Pirates
- Rationale:
- Orlando Pirates' attacking flair could exploit gaps in AS Vita Club's defense.
- The motivation to climb the group standings could push Orlando Pirates to perform exceptionally well.
- Vita Club's technical skill could create chances, but maintaining consistency against Orlando's pace may prove challenging.
Tactical Insights & Key Players to Watch
In these crucial matches, several tactical battles will unfold that could determine the outcome of each game. Let’s examine some key strategies and players who could make significant impacts tomorrow.
Tactical Insights: Al Ahly SC vs Mamelodi Sundowns
- Al Ahly SC Strategy:
- Focusing on maintaining possession to control the tempo of the game.
- Making use of set-pieces due to their aerial strength in attack.
- Aiming to exploit counter-attacks through fast wingers like [Player Name].
- Mamelodi Sundowns Strategy:
- Leveraging physicality in midfield battles to disrupt Al Ahly's rhythm.
- Focusing on quick transitions from defense to attack.
- Aiming for high pressing tactics led by key midfielders like [Player Name].
Tactical Insights: Orlando Pirates vs AS Vita Club
- Orlando Pirates Strategy:
- Focusing on high-intensity pressing to regain possession quickly.
- Making full use of wide areas with overlapping full-backs.
- Capturing set-piece opportunities through aerial threats.
- Focused attacks down either flank led by dynamic wingers like [Player Name].
- Maintaining defensive solidity while transitioning quickly into attack.
- AS Vita Club Strategy:
- Focusing on technical play through midfield triangles.
- Leveraging quick interplay between forwards.
- Aiming for strategic positioning during set-pieces.