The efficient removal of the typical antibiotic pollutant sulfamethoxazole (SMX) currently poses a major challenge in the field of water treatment. The practical application of pure Bi₂WO₆ is severely hampered by its low photocatalytic efficiency, which is a consequence of inadequate interfacial interaction with pollutants and rapid charge carrier recombination. To tackle this issue, we designed and synthesized a layered Bi₄O₅Br₂/Bi₂WO₆ heterojunction photocatalyst through a solvothermal–in situ growth approach. Characterization results confirmed the formation of intimate heterojunction interfaces between the two components. Among the prepared samples, Bi₄O₅Br₂/Bi₂WO₆-0.3 (BOB/BWO-0.3) exhibits the optimal performance. Under visible light irradiation for 90 minutes, its degradation rate of sulfamethoxazole reaches as high as 83%, with an apparent reaction kinetic constant (k_app) of 0.0183 min⁻¹. Compared with pure Bi₂WO₆ (degradation rate: 3%, k_app = 0.000277 min⁻¹), the degradation performance of this composite material is improved by approximately 26.7 times, and k_app is enhanced by about 65 times. Active species trapping experiments indicated that superoxide radicals (
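The apparent rate constant k_app above comes from a pseudo-first-order fit, ln(C₀/C) = k_app·t. A minimal sketch of that fit follows; the sampling points are hypothetical, chosen only to be roughly consistent with ~83% removal at 90 min, not measured data from this work.

```python
import math

def pseudo_first_order_k(times_min, c_over_c0):
    """Zero-intercept least-squares fit of ln(C0/C) = k_app * t.

    Returns the apparent rate constant k_app in min^-1.
    """
    y = [math.log(1.0 / c) for c in c_over_c0]
    # Slope of a line through the origin: k = sum(t*y) / sum(t*t)
    num = sum(t * yi for t, yi in zip(times_min, y))
    den = sum(t * t for t in times_min)
    return num / den

# Hypothetical C/C0 sampling points (~81% removal at 90 min)
times = [15, 30, 45, 60, 75, 90]
ratios = [0.76, 0.58, 0.44, 0.33, 0.25, 0.19]
k_app = pseudo_first_order_k(times, ratios)
```

With these illustrative points the fitted k_app lands near the 0.018 min⁻¹ scale reported above.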
The sharing of Electronic Medical Records (EMRs) is crucial for supporting diagnosis and improving the quality of medical services. However, the risks of privacy breaches and data security vulnerabilities during the sharing process often deter data holders from participating in data sharing. Incentivizing data holders to share data in a trustworthy manner while ensuring data security, thereby dismantling data silos, has emerged as a significant research topic in academia. To address this challenge, this paper proposes a distributed EMR sharing model based on a consortium chain and a Stackelberg game. The model enables secure and efficient data sharing while preserving user privacy by leveraging the consortium chain's distributed architecture and the strategic optimization mechanism of the Stackelberg game. First, the model establishes a consortium chain architecture to enable trusted and secure sharing of EMR data through collaboration among multiple participants. Second, it utilizes Stackelberg game theory to analyze decision-making conflicts during the sharing process and designs incentive mechanisms to ensure equitable returns for all parties involved. The model formulates objective functions that capture the dual goals of maximizing data providers' profits and consumers' utility. It leverages the decomposition property and rapid convergence of the Alternating Direction Method of Multipliers (ADMM) to solve these optimization problems in a distributed manner, thereby meeting the demands of large-scale EMR sharing across institutions. Lastly, the model employs the Agglomerative Hierarchical Clustering (AHC) algorithm to cluster consensus nodes and modifies the traditional PBFT consensus algorithm into AHC-PBFT. This enhancement improves data sharing efficiency and reduces communication overhead.
Experimental results demonstrate that, compared to traditional chain sharing models, the proposed model can provide secure EMR management and transaction services, reduce communication overhead and latency in data sharing among medical nodes, increase throughput, and foster trustworthy data sharing among medical institutions and relevant departments.
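The distributed solving step ADMM enables can be illustrated on a toy consensus problem. This is a generic consensus-ADMM sketch, not the paper's actual profit/utility objectives: each "institution" holds a simple quadratic local cost (an assumption for illustration), and all local copies are driven to a shared value without any node seeing the others' data directly.

```python
def admm_consensus(a, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i (x - a_i)^2.

    Each agent i keeps a local copy x_i of the decision variable and a
    scaled dual u_i; the global variable z enforces agreement. The optimum
    of this toy objective is the mean of a.
    """
    n = len(a)
    x = list(a)
    u = [0.0] * n
    z = 0.0
    for _ in range(iters):
        # Local x-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # Global z-update: average of local copies plus duals
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual update penalizes disagreement with the consensus
        u = [u[i] + x[i] - z for i in range(n)]
    return z

z = admm_consensus([1.0, 4.0, 7.0])  # converges to the mean, 4.0
```

Each x-update touches only agent i's own data, which is the property that lets the model scale sharing across institutions.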
Oral squamous cell carcinoma (OSCC) is the most common malignant tumor of the oral cavity, with distant metastasis leading to poor prognosis in patients. The basement membrane and lncRNAs have a significant impact on OSCC metastasis, but related research remains limited. This study employed WGCNA, differential expression analysis, and various machine learning methods to screen for key basement membrane-related genes, followed by co-expression analysis to identify associated lncRNAs. Subsequently, univariate Cox, LASSO, and multivariate Cox regression analyses were used to select lncRNAs and construct a prognostic risk model. The model can accurately and reliably predict the prognosis of OSCC patients, with the high-risk group showing significantly worse outcomes than the low-risk group. Functional differences between risk groups were primarily enriched in ECM-receptor interaction. Moreover, the immune microenvironment differs significantly between the high- and low-risk groups. The high-risk group shows higher sensitivity to 5-fluorouracil, cisplatin, oxaliplatin, and tamoxifen, while the low-risk group is more sensitive to dactolisib and staurosporine. In summary, the basement membrane-related lncRNA model is a valuable biomarker for predicting prognosis in OSCC patients and has important implications for guiding clinical treatment.
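A Cox-based lncRNA risk model of this kind is typically applied as a linear risk score followed by a median split. The sketch below shows that convention only; the coefficients and expression values are hypothetical placeholders, not results from this study.

```python
def risk_scores(expr, coefs):
    """Linear prognostic risk score: sum over lncRNAs of
    (Cox coefficient) * (expression level), one score per patient."""
    return [sum(c * e for c, e in zip(coefs, sample)) for sample in expr]

def stratify(scores):
    """Label each patient high- or low-risk relative to the median score."""
    s = sorted(scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return ["high" if sc > median else "low" for sc in scores]

# Hypothetical two-lncRNA signature over four patients
labels = stratify(risk_scores([[1, 0], [0, 1], [2, 2], [0, 0]],
                              [0.8, -0.5]))
```

In practice the high/low groups produced this way are then compared with Kaplan-Meier survival analysis.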
Currently, artificial intelligence methods for predicting antigen-antibody binding affinity predominantly rely on either sequence-based or structure-based unimodal modeling approaches, which limit their ability to capture comprehensive interaction information. Therefore, we present a structure-based multi-modal feature fusion framework for antigen-antibody affinity prediction. The framework mainly consists of three components: a multi-modal antibody information mining module, a multi-modal antigen information mining module, and a fusion prediction module. In the antibody information mining module, we employ the RoFormer network to separately extract heavy-chain and light-chain information while utilizing GearNet to capture structural information of antibodies. The sequence and structural information is then adaptively fused through a cross-attention mechanism. For antigen information extraction, we implement the ESM2 (Evolutionary Scale Modeling v2) protein language model to process antigen sequence data. The fusion prediction module incorporates a multi-scale feature extraction network based on CNNs (Convolutional Neural Networks) to enhance the discriminative power of antibody and antigen representations, enabling comprehensive and efficient affinity prediction. Finally, the multi-scale representations of antibodies and antigens are fed into a fusion layer to generate robust affinity prediction results. Experimental results demonstrate that our proposed model surpasses all baseline methods on benchmark datasets and shows excellent performance on independent test sets, demonstrating the method's robust predictive power and generalization capability.
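The cross-attention fusion step can be sketched in a few lines of NumPy. This is generic single-head scaled dot-product cross-attention (sequence features attending over structure features); the actual module's projection matrices and head count are not specified here, so none are assumed.

```python
import numpy as np

def cross_attention(query, context):
    """Single-head cross-attention: `query` tokens (e.g. sequence features)
    attend over `context` tokens (e.g. structure features).

    query: (n_q, d) array; context: (n_c, d) array; returns (n_q, d).
    """
    d = query.shape[-1]
    scores = query @ context.T / np.sqrt(d)        # (n_q, n_c) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # softmax over context
    return attn @ context                          # attention-weighted fusion
```

Each output row is a convex combination of context rows, which is what lets one modality adaptively pull in the other.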
Parkinson's disease (PD) is a common neurodegenerative disorder, and early diagnosis is crucial for slowing disease progression. Magnetic resonance imaging (MRI) has been widely used in the diagnosis of PD due to its non-invasive nature and high-resolution capabilities. However, existing methods often rely on information from a single domain, resulting in insufficient information modeling. Furthermore, although the pathological changes in PD are not isolated, existing approaches frequently fail to account for regional correlations between image patches, thereby neglecting the functional interactions among brain regions. To address these limitations, we propose a two-branch deep learning framework that integrates spatial and frequency-domain information. The spatial branch employs a Vision Transformer to capture global spatial relationships in MRI images, while the frequency branch utilizes GFNet (Global Filter Network) to extract frequency-domain features. An adjacency matrix is constructed using a Gaussian-weighted Euclidean distance, and a graph convolutional network (GCN) is introduced to model the topological relationships between image patches. During model training, axial 2D slices are selected and fine-tuned using pre-trained weights from ImageNet through transfer learning. A majority voting strategy is then applied to aggregate predictions from multiple slices of a single subject to produce a subject-level classification result. The proposed method was evaluated on a PD dataset comprising both patients and healthy controls. Experimental results demonstrate that our approach outperforms several state-of-the-art methods in terms of key metrics including accuracy, specificity, and F1-score, thereby confirming its potential for effective clinical application.
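The Gaussian-weighted adjacency construction can be sketched directly: edge weights decay with squared Euclidean distance between patch features. The kernel bandwidth `sigma` and the optional sparsification threshold below are assumptions for illustration; the paper's exact hyperparameters are not given here.

```python
import numpy as np

def gaussian_adjacency(patch_feats, sigma=1.0, threshold=None):
    """Adjacency matrix over image patches via a Gaussian kernel:
    A_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2)).

    patch_feats: (n_patches, d) array of patch feature vectors.
    `threshold`, if set, zeroes out weak edges to sparsify the graph.
    """
    diff = patch_feats[:, None, :] - patch_feats[None, :, :]
    dist2 = (diff ** 2).sum(-1)                 # pairwise squared distances
    A = np.exp(-dist2 / (2 * sigma ** 2))
    if threshold is not None:
        A = np.where(A >= threshold, A, 0.0)
    np.fill_diagonal(A, 1.0)                    # self-loops for the GCN
    return A
```

The resulting symmetric matrix is what the GCN branch would normalize and propagate over.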
The rapid urbanization process has significantly exacerbated traffic congestion in metropolitan areas, creating an urgent need for intelligent traffic management solutions. In this context, DRL (Deep Reinforcement Learning) has emerged as a prominent research focus due to its superior dynamic adaptability in complex traffic environments. However, existing approaches face critical limitations: single-agent DRL models lack coordination capabilities among intersections, while multi-agent systems often suffer from high computational complexity and poor scalability. To address these challenges, this paper proposes a novel DRL-based traffic signal control framework that integrates congestion attribution analysis with optimization strategies. First, the Shapley value from cooperative game theory is applied to analyze congestion attribution. By treating intersection signal strategies as game players and road network congestion as the cooperative outcome, the framework quantifies each intersection's contribution to congestion. Second, it proposes a Shapley value-based, attribution-assisted DRL optimization framework. During multi-agent synchronous decision-making, it jointly trains only the Top-k high-contribution intersection agents, approaching the performance of full-network joint training. To address the stability issue of synchronous decision-making, it develops an attribution-assisted sequential decision-making approach, in which the decision order is selected according to the Shapley value-based attribution analysis results. Experimental results verify the effectiveness of Shapley value-based congestion attribution. Compared with baseline methods, the proposed framework improves both training efficiency and overall traffic efficiency.
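For a small number of players, Shapley values can be computed exactly by averaging each player's marginal contribution over all orderings (the cost is factorial, which is why large networks use approximations). The characteristic function below is a toy stand-in, not the paper's measured congestion metric.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values.

    `value` maps a frozenset coalition to its worth; each player's Shapley
    value is its marginal contribution averaged over all join orders.
    """
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in phi}
```

The efficiency property (values sum to the grand coalition's worth) makes the result a well-defined attribution of total congestion to individual intersections.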
Traffic prediction is crucial in urban traffic management and flow monitoring, but the complex spatial-temporal relationships in traffic flow pose great challenges to accurate prediction. Spatial-temporal graph neural networks and attention mechanisms have become effective methods for modeling dependencies in road networks. However, most GNN-based models rely on predefined static adjacency matrices to model spatial dependencies, so the extraction of spatial features depends on fixed graph structure weights. Moreover, existing attention mechanisms ignore the characteristics of traffic flow data and struggle to capture similar traffic patterns between nodes. To solve the above problems, this paper proposes the TD-ADGAT model, which uses an adaptive graph diffusion attention network to model spatial relationships. It does not need to explicitly calculate the weights of predefined graph structures and can adaptively generate trainable adjacency matrix weights, thereby significantly reducing the time complexity. In addition, according to the characteristics of traffic flow time series data, the attention mechanism in the temporal dimension is redesigned, and the traffic flow data is decomposed into trend and seasonal components. A Multilayer Perceptron (MLP) is used to capture trend changes, and Fourier attention is used to model seasonal changes, so as to better model the temporal relationships of traffic flow and the traffic patterns between nodes. Experimental results on three public datasets demonstrate that TD-ADGAT outperforms existing baseline models in prediction accuracy.
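The trend/seasonal split that feeds the two temporal sub-modules can be sketched with a centered moving average; the paper's exact decomposition operator is not specified here, so this uses a common convention (odd window, edge padding) as an assumption.

```python
def decompose(series, window):
    """Split a series into a trend (centered moving average with edge
    padding; `window` should be odd) and the remainder, treated as the
    seasonal/residual component."""
    n = len(series)
    half = window // 2
    # Pad both ends so every position has a full centered window
    padded = [series[0]] * half + list(series) + [series[-1]] * half
    trend = [sum(padded[i:i + window]) / window for i in range(n)]
    seasonal = [series[i] - trend[i] for i in range(n)]
    return trend, seasonal
```

In a TD-ADGAT-style pipeline, the smooth `trend` component would go to the MLP and the oscillatory `seasonal` component to the Fourier attention module.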
With the rapid development of intelligent transportation systems and shared mobility services, the demand for accurate travel time prediction has been increasing. As a result, accurate travel time estimation has become a crucial task for improving traffic efficiency and optimizing user experience. Traditional travel time estimation methods mostly focus on predicting the mean value and provide point estimates, while ignoring the uncertainty caused by complex and dynamically changing traffic conditions. Quantifying the uncertainty of travel time and providing results with confidence intervals can offer more comprehensive and trustworthy predictions for users and mobility platforms. However, due to the dynamically varying travel time distributions of road segments and the accumulated uncertainty across multiple segments, it remains challenging to quantify travel time uncertainty accurately. To address this issue, this paper proposes a travel time prediction and uncertainty quantification method based on dynamic traffic conditions. A novel model, Distribution-Aware Travel Time Estimation (DATE), is designed, which consists of a road network partitioning module, a global distribution-aware module, and a distribution fusion-based uncertainty estimation module. This model not only improves the accuracy of travel time prediction but also provides reliable confidence intervals for comprehensive uncertainty quantification. Experimental results on two real-world datasets demonstrate that DATE outperforms existing methods in terms of both prediction accuracy and reliability, offering robust decision support for intelligent transportation systems.
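One simple way to see how per-segment uncertainty accumulates along a route: under an independence-and-Gaussian simplification (an assumption for illustration, not DATE's actual distribution fusion), route variance is the sum of segment variances, so the interval widens sub-linearly in the number of segments.

```python
import math

def route_interval(seg_means, seg_vars, z=1.96):
    """Aggregate per-segment travel-time distributions into a route-level
    confidence interval, assuming independent Gaussian segments.

    seg_means, seg_vars: per-segment mean travel times and variances
    (e.g. seconds and seconds^2); z=1.96 gives a ~95% interval.
    """
    mu = sum(seg_means)                       # route mean adds up
    sigma = math.sqrt(sum(seg_vars))          # variances add, not std devs
    return mu - z * sigma, mu + z * sigma
```

Because standard deviations do not add (only variances do), a naive per-segment interval sum would overstate route uncertainty, which is part of why principled fusion matters.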
The core homophily assumption of Graph Neural Networks (GNNs) holds that connected nodes are more likely to have similar labels. However, in heterophilic settings (where connected nodes usually have dissimilar labels), this assumption becomes a critical limitation, and the conventional neighborhood aggregation mechanism significantly degrades model performance. Current improvement approaches that adopt higher-order neighborhoods or reweighting schemes not only introduce a large amount of structural noise from dissimilar nodes but also fail to capture subtle contextual structural patterns due to their insufficient ability to distinguish local subgraph variations. To solve these intertwined problems, we propose a new framework called Selective Graph Convolution Network with Contextual Structure Awareness (SGC-CSA). It models structural context and achieves adaptive selective propagation simultaneously through an integrated design. The former guides ego-network partitioning with group fairness constraints to extract domain-invariant patterns and avoid contextual blindness, while the latter calculates similarity measures from neighborhood distributions and uses a gating mechanism to control the fusion ratio among homophilic candidate nodes inferred from attribute-topology alignment, direct neighbors, and the features of the core node itself. This framework enables nodes to dynamically filter irrelevant information and ensures structural coherence in scenarios with different levels of homophily and heterophily. Tests on ten real-world network datasets confirm that it successfully mitigates the problems of aggregation bias and structural distribution shift.
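The gating idea can be sketched as a scalar sigmoid gate driven by a neighborhood-similarity score: the more homophilic the neighborhood looks, the more neighbor information is mixed into the node's own representation. This is a schematic stand-in only; SGC-CSA's actual gate inputs and parameterization are not detailed here.

```python
import math

def gated_fusion(h_self, h_neigh, sim):
    """Blend a node's own features with aggregated neighbor features.

    `sim` is a (hypothetical) neighborhood-similarity score; a sigmoid
    maps it to a mixing weight g in (0, 1), so dissimilar (heterophilic)
    neighborhoods contribute less.
    """
    g = 1.0 / (1.0 + math.exp(-sim))  # sigmoid gate
    return [g * n + (1.0 - g) * s for s, n in zip(h_self, h_neigh)]
```

At sim = 0 the gate is exactly 0.5, i.e. an even blend; strongly negative similarity pushes the node back toward its own features, which is the selective-propagation behavior described above.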
Frequent Pattern Mining (FPM) on large-scale graphs has garnered significant attention due to its broad applications in areas such as social network analysis. However, constrained by traditional pattern semantics, conventional FPM techniques struggle to meet the diverse demands of data analysis. To address this challenge, this paper introduces Parameterized Patterns (p-patterns) and their mining framework. By incorporating parameters into patterns, the matching semantics are extended, enabling the effective capture of complex relationships within graphs. An efficient mining algorithm, PMiner, is designed and implemented to discover frequent p-patterns from large graphs. Furthermore, this paper proposes Graph Association Rules (GAR) based on p-patterns and designs the GARGen algorithm to uncover latent associations between nodes. Experiments on real-world graph datasets not only validate the computational efficiency of the proposed algorithms but also highlight the distinctions between p-patterns and traditional patterns, as well as the effectiveness of GAR in tasks such as link prediction.
To address the shortcomings of existing multi-UAV task allocation models, which use linear distance as a metric and ignore environmental constraints such as terrain and threat sources, and to solve the problems of the traditional Bald Eagle optimization algorithm, which suffers from insufficient population diversity and proneness to local optima, a multi-UAV task allocation method based on Hybrid Bald Eagle–Aquila Optimization (HBAO) is proposed. First, a multi-traveling-salesman task allocation model is constructed that integrates three-dimensional terrain, threat sources, and UAV physical constraints. Task allocation and trajectory planning are tightly coupled via a cost function. Then, a task allocation encoding is designed and the optimization strategy is improved. The expand-contract search strategy of the Aquila optimization algorithm is integrated into the global search phase of the Bald Eagle algorithm to improve exploration efficiency. A refractive opposition-based learning mechanism is introduced to enhance population diversity, effectively balancing the algorithm's exploitation and exploration capabilities. Finally, dual-model experiments are designed to validate the algorithm's performance. Results show that the proposed HBAO algorithm achieves high solution accuracy and convergence speed in complex battlefield environments. Its overall performance outperforms five competing algorithms, with significantly reduced global cost, while generating low-energy, highly adaptable task allocation solutions.
With the rapid advancement of LLMs (Large Language Models) in recent years, the granularity of event representation has progressively shifted from the traditional sentence level to the document level. Events are no longer confined to single-sentence expressions but are increasingly embedded across multiple sentences or even entire documents. While this change enhances semantic modeling capabilities, it also introduces new challenges. In particular, due to the high degree of flexibility and widespread lexical ambiguity in Chinese, these models often struggle to accurately identify the argument roles of words within context, especially in document-level settings where explicit syntactic structures are less apparent. To address this issue, we propose a novel method for document-level event argument representation, termed SS-EAR (Semantic-Syntactic Feature Fusion for Document-Level Event Argument Representation). This approach begins by analyzing the syntactic structures within a document and constructing a dependency syntactic graph. Multi-level representations of entities are then encoded as node features within a structure-aware graph. Finally, a GNN (Graph Neural Network) is employed to integrate syntactic and semantic features through its message-passing mechanism. This design enables the model to better handle complex sentence patterns and semantic ambiguity, thereby improving its performance on document-level event argument extraction. Experimental results on two authoritative Chinese document-level event argument extraction datasets demonstrate that the proposed method outperforms six strong baselines and achieves the best F1 score, validating the effectiveness of our approach.
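The message-passing mechanism referred to above can be sketched as a single mean-aggregation round over the dependency graph; this is the generic GNN operation, not SS-EAR's specific (likely learned and multi-layer) update, and the adjacency matrix is assumed to be 0/1 with a zero diagonal.

```python
def message_pass(adj, feats):
    """One round of mean-aggregation message passing: each node averages
    its own feature vector with those of its dependency-graph neighbors.

    adj:   n x n list-of-lists 0/1 adjacency (zero diagonal assumed)
    feats: list of n feature vectors (entity/node embeddings)
    """
    n, dim = len(feats), len(feats[0])
    out = []
    for i in range(n):
        # Neighbors plus a self-loop, so the node keeps its own signal
        nbrs = [j for j in range(n) if adj[i][j] and j != i] + [i]
        out.append([sum(feats[j][k] for j in nbrs) / len(nbrs)
                    for k in range(dim)])
    return out
```

Stacking such rounds is what lets syntactic context from several dependency hops away reach an argument candidate's representation.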
Hierarchical classification tasks typically face multiple challenges, such as a high-dimensional feature space, a complex label hierarchy, and label sparsity. Among these, label sparsity can lead to insufficient supervision, thereby degrading the effectiveness of feature selection. To address this issue, this paper proposes a novel hierarchical feature selection method: Hierarchical Feature Selection Based on Label Fuzzification (HFSLF). The core idea of this method is to improve supervision by enhancing the semantic expressiveness of sparse labels. Specifically, HFSLF first uses sibling relationships to construct fuzzy similarities among categories and transforms the original sample labels into label distributions. This transformation effectively expands the coverage of supervisory information and strengthens semantic supervision in sparse scenarios. Then, the proposed algorithm employs the mutual information between features and label distributions as a supervisory signal, guiding the feature weights to approximate their corresponding mutual information values, thereby enhancing the model's preference for highly relevant features. Experiments on six hierarchical datasets demonstrate the effectiveness of the proposed algorithm.
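The label-fuzzification step can be sketched in its simplest form: a crisp label becomes a distribution that keeps most mass on the true class and spreads the rest over its siblings in the hierarchy. The uniform spread and the smoothing strength `alpha` below are illustrative assumptions, not HFSLF's actual similarity-weighted construction.

```python
def fuzzify_label(label, siblings, alpha=0.3):
    """Turn a crisp label into a label distribution.

    Mass 1 - alpha stays on the true class; mass alpha is spread
    uniformly over its sibling classes (a simplification of
    similarity-based fuzzification).
    """
    if not siblings:
        return {label: 1.0}
    dist = {label: 1.0 - alpha}
    for s in siblings:
        dist[s] = alpha / len(siblings)
    return dist
```

Because samples of a rare class now also carry supervisory mass for its siblings, features correlated with the whole sibling group gain usable signal, which is the mechanism the abstract describes.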
